Meeting Users Where They Are in an AI-Native World
Personalization has been redefined. AI-native systems can adapt to who someone is the moment they ask a question. Daniel Manary chats with Alex Maier, President of onWater.
🎄 Hohoho! Arianne here, editor and producer of Artificial Insights, the podcast. Welcome! This is TL;DL, where I write about what stood out to me in each episode, share some food for thought, and do a roundup of what happened and what’s next for those of us who prefer to read.
Merry Christmas!
AI is really exciting.
And, also, AI can be pretty dangerous.
In this conversation, Alex Maier put words to that balance in a way I liked: he talked about putting your “kid eyes” on again. You know, that posture of curiosity where you start asking “what if?” and then actually, fearlessly, mean it enough to do something about it.
Then, he immediately paired it with the reality that you still need your “parent eyes” on too, because the systems that feel playful and empowering can also be leaky, persuasive, and hard to audit.
This episode is full of product thinking, grounded in lived constraints. After all, outdoor recreation is one of those domains where a wrong answer could be more than just annoying… it could be fatal.
(The outdoors is a dangerous place.)
🎙️ Just Interviewed: Alex Maier on AI meeting people where they’re at
I didnât have to sort, didnât have to filter. You met me where I was at. And, that was really the thesis behind this thing.
Alex Maier comes to AI product work from a place that is surprisingly physical.
Before leading marketing and product at onWater, he worked in textiles, shipped prototypes overseas, helped scale Nikeâs swimwear category, and brought human-powered outdoor apps to market. His career has consistently lived at the intersection of bodies, gear, and environments that donât forgive bad assumptions.
Now he’s at onWater, and it is not a novelty app. It pulls together messy, high-stakes data from weather systems, government agencies, conservation partners, and user communities to help people decide where, when, and how to get on the water safely: fishing conditions, river flows, blue-green algae blooms, permits, regulations.
(I don’t spend very much time outside. This is all Greek to me.)
When Alex talks about “meeting users where they’re at,” he is not talking about channels or funnels. He is talking about mental state, experience level, and intent in the moment someone asks a question like, “Should I go paddle this river today?”
The goal is to simultaneously surface information faster and remove friction: a system that knows when the right answer is “No, not today!” and is opinionated enough to say so.
💡 One Core Insight: What if users don’t need your product anymore?
Alex described the question that excited his team the most.
What if users didn’t actually need the app? 🤔
No, not because the product failed… but because AI changes the starting point entirely. Instead of opening an app, learning its structure, filtering, sorting, and triangulating information, users can now begin by asking a question in plain language. Where should I go fishing this weekend? Are conditions safe on this river today? What do I need to know before I head out?
Alex framed this as a shift from AI-assisted to AI-native thinking. AI-assisted helps you search faster. AI-native reshapes the experience so discovery starts with intent, not interfaces.
Most products are designed around the (sometimes unintentional) assumption that users will adapt to them. Alex flips that around. If your system already has the data, the maps, the regulations, and the context, why make people hunt for it?
The harder part is what comes next. Once the system gives an answer, it has to help make that answer actionable. Directions, gear considerations, safety warnings, and sometimes a clear “don’t go.”
🎬 One Key Clip: Why “personalization” no longer covers it
In the bonus clip, Alex goes after a word we use constantly and rarely define carefully: personalization.
He argues that whatâs emerging now does not fit the old meaning. This is not about segmenting users or tailoring messages based on past behavior. It is about understanding someoneâs intent, experience level, and confidence in real time, simply from how they ask a question and what they do next.
He talks about working trade shows early in his career, watching how people approached a booth. Did they linger? Did they avoid eye contact? Did they come in hot with questions, or circle back three times before speaking? All of that information shaped how a salesperson ought to respond.
What AI does well is formalize that pattern. And it does it so well because it is formulaic. It reads language, tone, and follow-up actions, then adjusts how it responds. That is why people experience it as “meeting them where they’re at.”
I think that raises the bar. If your system can’t adapt to who someone is in the moment, calling it personalized misses the point.
🥡 One Takeaway: AI-native reduces uncertainty
Daniel shared a line on LinkedIn that helped crystallize this episode for me: being AI-native is not just about speed. It is about reducing uncertainty.
What I appreciated about Alexâs framing is how grounded it is. He is not describing AI as a magic layer that makes everything faster. He is describing systems that automatically do what good humans have always done well.
They read the room. They notice hesitation. They adjust based on tone, language, and follow-up behavior.
They do all that many, many, many, many times.
Automation scales scripts, while AI-native systems scale room-reading.
But this raises a question: if AI is mediating context and reducing uncertainty, then it is also making judgments that used to belong only to people. Some of those judgments are safe to automate. Some probably are not.
How can we tell which are which?
🎥 Up Next: David Proulx on AI that never hallucinates
The next conversation takes the themes of this episode into a very different, much higher-stakes domain.
David Proulx is the Chief AI Officer at HoloMD, where he helped build an AI agent that has handled more than 100,000 mental health conversations without a single hallucination.
That’s right.
No hallucinations.
If questions of trust, context, and responsibility keep you up at night, you will want to listen to this one when it drops.
✨ Wonder, With Guardrails
This episode reminded me that we live in a very exciting time.
AI has cracked open a space where imagination feels permitted again. Where asking better questions matters more than memorizing interfaces. Where creativity shows up not just in art, but in how products meet people in the middle of real life.
We should all have our kid eyes on!
As always, thanks for listening! 🙏
P.S. Artificial Insights is a podcast on how AI is changing work, life, and us. Every other Friday, Daniel Manary sits down with leaders, thinkers, and builders in AI to have candid conversations on what they’re doing right now and how they think the world will change. If you’re a podcast listener, we’d love for you to check us out!
P.P.S. If you liked the episode, please subscribe, share, and/or give the show a review on your favorite podcast player. Every little bit goes a long way. 🙏
Artificial Insights is a part of the Manary.haus family ❤️ Come say hi!




