Is Artificial Support Better Than No Support?
What happens in between? A conversation about the unseen days that shape mental health outcomes, and why AI changes that equation. Daniel Manary chats with David Proulx, Chief AI Officer of HoloMD.
👋 Hey friends, Arianne here! Editor and producer of Artificial Insights, the podcast. Welcome! This is TL;DL, where I write about what stood out to me in each episode, share some food for thought, and do a roundup of what happened and what's next for those of us who prefer to read.
Allons-y!
Artificial support is better than no support at all.
It's a thought that's stayed with me since editing this episode, especially given the context. At first blush, it sounds wrong.
Doesnāt it?
But, in mental health care especially, waiting is often the unspoken default.
Waiting weeks or months for an appointment. Waiting for symptoms to escalate enough to "count." Waiting alone, without context, feedback, or reassurance that what you are experiencing even makes sense.
In the worst case, help only comes when it's too late.
So, maybe, AI has a place here after all?
🎙️ Just Interviewed: David Proulx on AI that bridges gaps in mental health care
"It would be impossible to replace a psychiatrist with the models that we have now… Also, to put the health of someone into the hands of a machine… People are not ready for that… Society is not ready."
David Proulx has spent most of his career building at the edge of technology. He launched one of the first mobile e-commerce sites before the iPhone existed. He built and scaled a social network for mothers to more than 600,000 downloads across Canada and the U.S. And when that company collapsed during COVID, he locked himself in a room and taught himself AI from the ground up.
Today, he is Chief AI Officer at HoloMD, a psychiatric support platform designed to work alongside clinicians, not replace them. The system checks in with patients daily, remembers past conversations, and surfaces patterns that would otherwise be invisible in occasional appointments. But it is never positioned as a substitute for care. Every interaction is reviewed. Every escalation pathway is tested. And responsibility remains firmly with the psychiatrist.
💡 One Core Insight: AI sheds light on what happens in the in-between
One of the constraints of mental health care is that it relies on snapshots.
A patient shows up every few weeks or months, and a clinician has to make sense of everything that happened in between based on memory, mood the day of the visit, and whatever feels safe to say in the moment. Important details inevitably get lost. Patterns flatten into anecdotes.
HoloMD checks in daily. It remembers past conversations, notices when mood drops for weeks at a time, and can surface why: a pet died, medication was skipped, sleep was bad. These are not dramatic revelations, but they are the texture of real life, and they rarely appear clearly in a single appointment.
Medication adherence is a good example. Patients often stop taking meds when they feel better, not realizing that the medication is the reason they feel better. Nightly check-ins make that pattern visible early, before things spiral.
This changes the role of the psychiatrist. Instead of relying on a single moment in time, they walk into an appointment with a longitudinal view of how someone has actually been living.
And this is only now possible because of AI.
🎬 One Key Clip: Why banning AI is like banning electricity
In the bonus clip, David compares restrictive AI policy to the moment when the Ottoman Empire refused to adopt the Gutenberg press. At the time, the decision was framed as protection, but the outcome was centuries of lost ground… all because others learned how to use it while they did not.
The truth is, AI is closer to infrastructure than to a single product.
David calls it the new electricity.
Something that reshapes everything it touches. Trying to ban it outright does not remove risk. It just guarantees that innovation happens somewhere else.
The clip is short, but it captures a tension many leaders are feeling right now. How do you slow down recklessness without freezing progress entirely?
🥡 One Takeaway: Hallucinations are not a bug. They are a design consequence.
We often talk about hallucinations as something models will eventually grow out of. As if more data, bigger context windows, or better tuning will make the problem disappear.
Daniel pointed out on LinkedIn that hallucinations are not just a temporary limitation. They are a consequence of how these systems relate to truth in the first place.
Language models memorize words and patterns. They return those words probabilistically. Even when trained on truthful material, they are not verifying statements against reality. They are producing language that sounds right.
What makes HoloMD interesting is that it does not ask the AI to discover truth. It defines truth explicitly. The system operates inside a constrained world where reality is what the patient has actually said, session after session. Statements can be checked against that record. Scope is limited. Context is controlled.
This suggests that progress here doesn't necessarily come from smarter models, but from building systems that acknowledge what AI is and is not doing.
🎥 Up Next: A Season Wrap-Up
This conversation with David closes out the season.
Over the past stretch of episodes, we've talked with builders, researchers, and operators working in very different domains, but wrestling with many of the same questions. Where does AI actually help? Where does it introduce new risks? And what does it look like to use these tools without giving up responsibility or judgment?
Instead of jumping straight into the next interview cycle, we're going to pause and take stock.
The next newsletter will be a special season wrap-up where I'll attempt to shed light on what appears to be an emerging pattern. Stay tuned!
✨ Why restraint keeps coming up
As we close this season, I keep coming back to a pattern that shows up in almost every conversation.
The most thoughtful builders are not asking how far AI can go. They are asking where it should stop, slow down, or stay deliberately boring.
Davidās work is a good example of that because it is specific about what is at stake and who carries responsibility.
As always, thanks for listening! 🎧
P.S. Artificial Insights is a podcast on how AI is changing work, life, and us. Every other Friday, Daniel Manary sits down with leaders, thinkers, and builders in AI to have candid conversations on what they're doing right now and how they think the world will change. If you're a podcast listener, we'd love for you to check us out!
P.P.S. If you liked the episode, please subscribe, share, and/or give the show a review on your favorite podcast player. Every little bit goes a long way. 🙏
Artificial Insights is a part of the Manary.haus family ❤️ Come say hi!