The Unglamorous Problem at the Heart of Great AI Products
What makes an AI product last? Bijan Vaez, CEO of Merchkit, chats with Daniel about why durable AI products come from solving messy data and real workflow problems, not chasing the latest feature.
🍷 Hi hi! Arianne here, editor and producer of Artificial Insights. Welcome to a special #TBT edition of TL;DL where we go back into our archives and revisit past interviews.
This one is from Season 2, Episode 4, with Bijan Vaez, CEO of Merchkit, three-time founder, and former CTO who has spent years building SaaS products through more than one technology shift.
🛒 Let's go shopping!
This conversation aged really well and it was fun to revisit. You know, like good wine. 🍷
When Daniel Manary first spoke with Bijan, the AI world was still in one of those especially dizzying phases where everything felt possible and very little felt settled. New capabilities were showing up constantly and everyone was trying to figure out where the real opportunities were.
No one could really tell.
That was the reason this podcast started at all: to cut through the noise and distinguish between hype and lasting impact.
So, let's dig into it: what actually lasts?
🎙 From the Archives: Bijan Vaez on Why We Shouldn't Get Stuck On What AI "Can" Do
"When you're just focused on the feature that AI enables, that could be wiped out very quickly."
Bijan is the founder and CEO of Merchkit, where he works on AI-powered product catalog management and enrichment for retailers, marketplaces, and enterprise brands.
I loved how honest he was about the path that got him there. He didn't start with a neat vertical AI thesis already in hand; he started where a lot of curious builders did: with the frontier itself.
AI inspired him to become a technical person again, not just a builder of companies. That was cool.
He talked about getting pulled into GPT-3 early, experimenting with Stable Diffusion, and even trying to dig into the deeper research side of the field. Then he went to CVPR, asked researchers all his hardest questions, and came away with a very clarifying realization: this space was much bigger, and much less settled, than he'd assumed.
"And I was like, oh no, if they don't know, I have no chance of figuring this out. I just realized how big and how nascent this whole space is."
That realization changed the direction of his work.
Instead of trying to move the frontier forward himself, he stepped back and asked how the new capabilities could be applied to a business problem he could understand deeply enough to solve.
💡 One Core Insight: The Hard Part Is Usually Upstream
Turns out, the problem was pretty unglamorous.
Most people start with, "What is the coolest thing AI can do?"
Bijan instead asked why customers kept getting disappointing results from AI tools they were already excited to use.
And the answer, again and again, was… dun dun dun…
Data.
Teams were dumping huge amounts of messy product information into ChatGPT and hoping it would somehow transform that into useful output. Instead, they got results that created more cleanup work. The model was, essentially, being asked to perform magic on bad inputs.
I suppose that was the promise, right?
Bijan and his team realized what the real work was: helping retailers ingest, clean, contextualize, and enrich product data so that the downstream workflows became tractable. Once the data quality improved, a lot of the "AI magic" people wanted actually started to become possible.
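To make that concrete, here's a tiny sketch of what "clean, then enrich" can look like in practice. To be clear, this is my own illustration, not anything from the episode or Merchkit's actual pipeline: the field names, the typo dictionary, and the prompt are all hypothetical. The point is just that the model only ever sees normalized facts:

```python
# Purely illustrative sketch of "clean first, then enrich" for product data.
# All field names, the typo dictionary, and the prompt are hypothetical;
# this is not Merchkit's pipeline, just the general shape of the idea.

RAW_RECORD = {
    "title": "  blu t-shrt   XL ",
    "price": "$19.99 USD",
    "color": None,
    "material": "100% cotton",
}

# Tiny stand-ins for real normalization rules (unit parsing, taxonomy lookups, etc.)
TYPO_FIXES = {"blu": "blue", "t-shrt": "t-shirt"}
KNOWN_COLORS = {"blue", "red", "black", "white"}

def clean(record: dict) -> dict:
    """Normalize whitespace, fix known typos, parse the price, backfill color."""
    words = [TYPO_FIXES.get(w.lower(), w) for w in record["title"].split()]
    price = float(record["price"].strip("$USD "))  # "$19.99 USD" -> 19.99
    color = record["color"] or next(
        (w for w in words if w.lower() in KNOWN_COLORS), None
    )
    return {**record, "title": " ".join(words), "price": price, "color": color}

def enrichment_prompt(record: dict) -> str:
    """Build an LLM prompt that only ever sees cleaned, non-null fields."""
    facts = "\n".join(f"- {k}: {v}" for k, v in record.items() if v is not None)
    return "Write a one-sentence product description using ONLY these facts:\n" + facts

if __name__ == "__main__":
    cleaned = clean(RAW_RECORD)
    print(cleaned)
    print(enrichment_prompt(cleaned))
```

The prompt at the end is almost boring, and that's the point: once the inputs are grounded, the "magic" step gets a lot less risky.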
This episode was a reminder that a lot of AI conversations, even today, stay at the level of capability. Can the model do structured output? Can it reason? Can it summarize? Can it generate? The answer is usually "Yes, but…"
If the workflow is weak and the source data is unreliable, the capability itself, no matter how magical, won't help you.
In the very first episode of the podcast, Anwar talked about how AI companies run into that trouble all the time: real-world data is messy… and it turns out AI can be quite picky about its data.
🔑 One Key Clip: Big Markets Give You Room to Learn
"… if there's a big enough market, you can keep tinkering, and there're enough customers to throw a new idea at."
In the bonus episode, Bijan talked about something that sounds obvious once it's been said: when a technology shift is this large, it's actually hard to know where to start. There are too many opportunities, too many possible wedges, too many things that might work…
So, best to start in a very large market.
That way, if the first idea misses, there's still enough room to keep testing, adjusting, and learning from the same kind of customer without immediately running out of runway… or actual customers.
He started in AI product photography. That was the first swing.
It led him into conversations with customers. Those conversations exposed mismatched expectations, thin budgets, and a much more painful operational problem hiding underneath the original use case. And thatās how he ended up much deeper in retail data infrastructure and enrichment.
That still feels like very good advice for founders building in AI now:
Bijan didn't get overly attached to the first story he told himself about where the value was; he used his first product idea as a listening device. Then, through a relentless focus on discovering the jobs that needed to be done, he found a bigger, juicier problem to solve.
📅 One Year Later
The surface-level feature race feels like it's only intensified: more and more capabilities are getting bundled into the major platforms. Tools that once looked differentiated get crowded out very quickly.
In that environment, Bijan's warning is more relevant than ever: if your whole business is a thin wrapper around a feature, you're in a fragile position.
I'd argue you technically always have been… but it's just a lot more obvious now. The market is less forgiving.
But, if your business is built around a real workflow, real context, and real operational pain, you have something sturdier to work with.
That might not be as glamorous, and a lot of the time it makes the work much harder. You have to understand how people actually do their jobs: the data they rely on, the edge cases they run into, the review steps they can't skip, the trust thresholds they care about, and the parts of the process where human judgment is still doing indispensable work.
All that is way slower (and way less fun) than shipping a vibe-coded demo.
But, it's also much closer to building something useful.
This conversation reminded me that "applied AI" is often a much humbler discipline than the phrase makes it sound. Instead of flashy frontier tech, it means going deep into a business context and solving old problems in old systems with newly available tools.
It means caring whether a team can trust the output enough to actually use it.
That's still so relevant now: Pat Belliveau and Daniel talked about it in the most recent episode of the podcast!
(Totally organic connection, I promise. It just stood out to me.)
As always, thanks for listening. 🙏
P.S. Artificial Insights is a podcast on how AI is changing work, life, and us. Every other Friday, Daniel Manary sits down with leaders, thinkers, and builders in AI to have candid conversations about what they're doing right now and how they think the world will change. If you're a podcast listener, we'd love for you to check us out!
P.P.S. If you liked the episode, please subscribe, share, and/or give the show 5 stars. Every little bit helps! ⭐
Artificial Insights is a part of the Manary.haus family ❤️ Come say hi!