What We Punish Instead of Teaching About AI
And designing for dignity in an AI world. Daniel & Pat Belliveau, CEO of GambitCo, chat about building a real AI business and how the education system is failing our kids.
👋 Hey friends, Arianne here, editor and producer of Artificial Insights. Welcome to a special #TBT edition of TL;DL, where we go back into our archives and revisit past interviews.
Today, we're looking back exactly one year, to our Season 2 opener with Pat Belliveau, Managing Partner at GambitCo.
Let's goooo! 🚀
The conversation was recorded during a different AI moment, before "agents" became everyone's favorite word, and while most teams were still trying to decide whether AI was a toy or a threat.
Pat was already living in the part of the curve we're all catching up to now: building something real, putting it in people's hands, learning fast, and staying close to the problem.
🎙️ From the Archives: Pat Belliveau
Pat is an entrepreneur and technologist whose AI journey started before the rest of us had language for it. He built AskEllyn, an AI companion based on a breast cancer survivor's memoir, so patients and families could access shared lived experience without being dropped into the chaos of unmoderated groups.
He did it with no coding background, learning by trial, YouTube, and ChatGPT. The result is now used in over 15 countries. That one project became the seed of GambitCo, and a repeatable way of building tools that people actually trust enough to use.
Pat is unusually grounded for an AI optimist and builder, and also refreshingly focused on building a real business.
"…if you can't package something to sell to a client… you don't have a business… because… the most dangerous thing you could do with AI is raise a bunch of money to go heads down, build something for a year… it's, like, the number one way to light money on fire."
Unlike most AI tech founders, he talks about refusing VC money to stay focused on building what customers will pay for. He calls out the game that most VC-funded tech startups and entrepreneurs play: they like to say they're chasing the elusive Product-Market Fit, but at the end of the day they're really chasing their next successful round.
The incentives are just totally misaligned.
💡 Core Insight: Build for "Dignity," Not Just "Safety"
This interview had so many fun stories that I still refer to today. No joke.
One of my favorites was when Pat told a story about a failure mode they couldn't accept: if someone asked AskEllyn, "Can I take Tylenol?", the system would sometimes answer:
"Yes, I took Tylenol on my journey."
Which was true.
But that "yes" was doing too much work. It sounded too much like medical advice, even if it was meant as a personal anecdote.
And Gambit learned pretty fast that you can't just write a giant list of things not to say. You simply can't put your arms around every way a person might ask something risky.
So they shifted the target from "What must it never say?" to "What does a dignified answer sound like?"
"…we got very good at… instead of worrying about the things that it shouldn't say, worried about what is a dignified answer…"
Now that's a weird engineering constraint.
"No medical advice" is still there, but "dignity" becomes the design standard for tone, framing, and how the system handles ambiguity.
Most AI teams treat safety as a checklist, then ship something that is technically compliant and emotionally clumsy. Users then bounce, or worse, they feel unseen. Pat's insight is that in certain domains, trust comes down to how the tool behaves when the user is vulnerable: facts matter, but so does delivery.
He gave a few really cool examples that show what "dignified answers" unlocked:
Hospitals wanted to put AskEllyn QR codes in waiting rooms once they saw it was supporting the human side without stepping into medical advice.
Husbands used it to learn how to support a spouse through chemo. Nobody thought husbands would be an ideal user, but there you go.
A friend used AskEllyn to build a gift basket that actually matched what chemo can do to taste and appetite, and it created a moment where everyone felt understood.
If you're building AI that interacts with people at an emotionally loaded moment, "helpful" is just not enough. The output needs to feel like it came from someone who understands the stakes.
Listen to the full interview if you want to hear how Pat got there, and how his team thinks about product, business, and responsibility as one system.
🔑 Key Clip: Education Is Pointing in the Wrong Direction
The bonus conversation takes a sharp turn into education.
Pat zeroes in on AI detection tools being used in schools and universities and calls out the downstream harm of treating students as suspects.
"You are now putting kids on trial having used AI… and that thing is wrong."
To make the case concrete, he describes running the same text through an AI checker, then asking a model to lightly rewrite it with small mistakes.
"100 percent plagiarized… then 0 percent likelihood of AI. Thanks."
So now we have to ask: what is the education system actually teaching?
Instead of helping students learn how to use tools they will be expected to use at work, schools are penalizing them for touching the very technology employers already assume is table stakes.
"If you don't use AI, employers are not going to hire you. That's, like, a real thing when you graduate."
Accusing students based on tools that openly admit they are unreliable is reckless, especially when layered onto an existing mental health crisis.
"We can't do this. It's not good for them long term, and it's definitely not good for them near term."
He closes the thread with a deeper reframing: if intelligence is still being measured primarily as information retention, we have already lost the plot.
"If we measure intelligence on information retention… then we're just dumber than an AI."
The sad thing is, a whole year later, as far as we can tell, nothing much has changed. We are still hearing stories from students whose marks and futures are in jeopardy because their teachers are being forced to use unreliable AI plagiarism checkers.
It raises the question:
Are we educating for the world students are entering, or just protecting systems that no longer accurately describe reality?
🧠 One Year Later
Pat ends this conversation with a line that feels even more relevant now:
"You have not missed the boat… you're probably still early… AI is not very old and AI is not going anywhere."
A year later, that reads less like encouragement and more like a challenge. If we are still early, then we still get to choose the habits we build into our products, our teams, and our definition of "good enough" in AI.
The question is, are we being intentional about it?
As always, thanks for listening. 🙏
P.S. Artificial Insights is a podcast on how AI is changing work, life, and us. Every other Friday, Daniel Manary sits down with leaders, thinkers, and builders in AI to have candid conversations on what they're doing right now and how they think the world will change. If you're a podcast listener, we'd love for you to check us out!
P.P.S. If you liked the episode, please subscribe, share, and/or give the show 5 stars. Every little bit helps! ⭐
Artificial Insights is a part of the Manary.haus family ❤️ Come say hi!