What's "fair" when AI trains on your writing, then sells the answers?
Everyone says "fair". Almost nobody defines it. Daniel and Julie dig into what "fair" could mean when AI uses human writing, and why the answer may come down to rights infrastructure.
👋 Hey friends, Arianne here, editor and producer of Artificial Insights, the podcast. Glad to have you here!
This is TL;DL, where I write about what stood out to me in each episode, share some food for thought, and do a roundup of what happened and what's next for those of us who prefer to read.
Let's read!
We're having a bit of a false spring here in Canada. An unexpectedly warm day, the snow starts retreating, and the kids start asking for their bikes.
And you try explaining to a 3-year-old, "It's warm right now, but it's winter, so no, we're not digging your bikes out of the garage."
And it just isn't fair.
That's kind of how "fair" shows up in AI conversations right now. Everyone uses the word, almost nobody defines it, and it can feel like we've already settled the question.
Underneath the noise is a practical issue that keeps getting treated like a side note: who should get paid when AI uses human work… and how?
In this interview, Julie Trelstad helps put shape around it. She names the different kinds of AI use that matter, and she talks plainly about what paying creators could look like if it happens at scale.
🎙️ Just Interviewed: Julie Trelstad on How AI Is (and Isn't) in Publishing
Julie Trelstad has spent 30+ years in book publishing, riding wave after wave of new technology, from desktop publishing and eBooks to print-on-demand and self-publishing.
Today, she's Head of US Publishing at Amlet.ai, working on rights and licensing infrastructure for the AI era, and she runs Paperbacks & Pixels, where she helps authors build sustainable publishing and marketing systems.
Julie frames the current debate in a really practical way: questions about AI and human work are turning into an infrastructure problem. The conversation digs into what "AI rights" can mean in practice, from training to research to generative use, and why creators need a way to prove ownership, declare permissions, and get paid when their work is used.
If you've been trying to figure out what "fair" could actually look like beyond slogans, this episode is worth a full listen.
💡 One Core Insight: "Fair" needs infrastructure
Julie offers a simple picture of what "fair" could look like when AI uses human work:
"Content providers should be paid on a token basis, like Spotify artists."
That one sentence forces the conversation out of vague ethics and into operational reality.
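The arithmetic of a Spotify-style split is actually the easy part; the hard part is knowing the counts. Purely as an illustration (nothing here reflects Amlet.ai's actual mechanics, and every name and number is invented), a per-token payout could be as simple as metering which licensed sources a product consumed and dividing a revenue pool proportionally:

```python
# Hypothetical sketch of a Spotify-style per-token payout pool.
# Source names, token counts, and the pool size are all made up.

def payout_per_source(tokens_used: dict[str, int], pool_usd: float) -> dict[str, float]:
    """Split a revenue pool across sources in proportion to tokens consumed."""
    total = sum(tokens_used.values())
    if total == 0:
        return {source: 0.0 for source in tokens_used}
    return {source: pool_usd * count / total for source, count in tokens_used.items()}

usage = {"textbook_a": 120_000, "journal_b": 60_000, "blog_c": 20_000}
print(payout_per_source(usage, pool_usd=1_000.0))
# → {'textbook_a': 600.0, 'journal_b': 300.0, 'blog_c': 100.0}
```

The sketch assumes the impossible-seeming input: a trustworthy `tokens_used` ledger. Everything that follows in this piece is really about where such a ledger could come from.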
Daniel Manary and I have talked a lot about the impossibility of this. (You know, over coffee at breakfast, while the kids are running loose. What else do you do with your spouse?)
Once a model is trained, it isn't pulling a neat list of sources into the thing it generates. It has absorbed patterns across a huge corpus. Expecting a citation inside a generated artifact isn't supported by current LLM architectures. It's more like… it listened to all of disco, and then wrote a disco song.
Which specific track do you footnote? Can you even do that?
Even in the best case, determining whether a piece of content was necessary for generating something else would be extremely expensive. Like, training a bunch of LLMs for each citation expensive.
And, even if you did have infinite budget, you would only get an answer like "this source influenced this output," not "this source caused this output" or "this output quotes this source".
So, if we actually care about fairness, we have to talk about the moments where payment and permission can be made real.
Julie's framing helps! She separates training from other kinds of use, like research and generation, and she argues that creators need a way to declare what's allowed and get compensated based on how their work is actually used.
At the same time, the "good stuff" is moving behind paywalls, which makes high-quality data harder to access through the open web alone.
And we want our AI trained on high-quality data. As Julie says, if weâre thinking about a healthcare model, for example:
"You're gonna wanna build that on textbooks and peer-reviewed journal content and not on the comments from WebMD. And if you don't know where your content is coming from, it's gonna be harder to say this is high quality and it's gonna hallucinate less…"
The practical question becomes: if we want AI built on high-quality human work, how do we build systems where permission and payment happen at the right layer, without pretending the model is a search engine with footnotes?
🔑 One Key Clip: The "Good Stuff" Is Disappearing
"High value content is being put behind paywalls… the good stuff is disappearing from the web."
This bonus clip is short, but it surfaces a real constraint that a lot of AI conversations skip over. We talk about "better models" as if the inputs are a given. But Julie reminds us that if the best material gets locked down in response to scraping, then builders will have a pretty serious data access problem.
That's where her argument for rights infrastructure comes in. If you want small, high-quality models trained on textbooks, journals, and serious research, you need a way to know where content came from, whether you are allowed to use it, and how to license it… without months of back-and-forth emails and lawyers.
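Purely as a sketch of what such a rights record might hold (the field names are invented here, not Amlet.ai's actual schema): a content fingerprint plus a machine-readable declaration of which kinds of AI use are granted, following Julie's training / research / generation split.

```python
# Hypothetical rights record: content fingerprint + declared AI permissions.
# Field names are invented for illustration; no real registry is implied.

import hashlib

def fingerprint(text: str) -> str:
    """Content fingerprint: SHA-256 of the whitespace-normalized text."""
    normalized = " ".join(text.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def rights_record(text: str, owner: str, allowed_uses: set[str]) -> dict:
    """Bundle ownership proof with per-use-type permission grants."""
    return {
        "fingerprint": fingerprint(text),
        "owner": owner,
        # Training, research, and generative use are separate grants.
        "allow_training": "training" in allowed_uses,
        "allow_research": "research" in allowed_uses,
        "allow_generation": "generation" in allowed_uses,
    }

record = rights_record("Some sample chapter text.", "Example Press", {"research"})
print(record["allow_research"], record["allow_training"])
# → True False
```

The point of the sketch is the shape, not the crypto: once a work has a stable fingerprint and a declared grant, a licensing check becomes a lookup instead of an email thread.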
The clip also reframes the fairness debate. We know creators are owed compensation for their work… but it's about more than that now. It's also about whether high-quality knowledge stays discoverable and usable at all, or whether it retreats into private silos.
Like in the olden days, when knowledge and learning were gated by universities and accessible only to those who could attend.
If you build AI products, or you publish anything you care about protecting, this one is worth the few minutes.
🥡 One Takeaway: Separate "Training" from "Using"
I think a lot of the heat in AI content rights debates comes from treating every kind of use as the same thing.
Training is one thing. A model takes in a mountain of text, then compresses it into weights. You don't get a clean audit trail back out. That's why "just cite the sources" sounds reasonable… but misses the mechanism.
Using is another thing. When an AI product searches, retrieves, summarizes, or quotes, it's operating much closer to a traditional content workflow: you can cite. That is the layer where attribution, permission, and payment can actually be enforced. It's also the layer where creators can plausibly be compensated based on real usage.
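To make the distinction concrete, here is a minimal sketch (all names hypothetical, and the "retrieval" is a toy keyword match standing in for real search) of why attribution is tractable at the using layer: the product knows exactly which documents it fetched, so citations fall out for free, with no need to look inside model weights.

```python
# Minimal sketch of attribution at the "using" layer.
# A retrieval step returns concrete documents, so citations come for free;
# nothing like this list exists inside a trained model's weights.

from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str

def retrieve(query: str, corpus: list[Document]) -> list[Document]:
    """Toy keyword retrieval: return documents sharing a word with the query."""
    words = set(query.lower().split())
    return [d for d in corpus if words & set(d.text.lower().split())]

def answer_with_citations(query: str, corpus: list[Document]) -> dict:
    hits = retrieve(query, corpus)
    return {
        "answer": " ".join(d.text for d in hits),  # stand-in for a real summary
        "citations": [d.doc_id for d in hits],     # enforceable attribution
    }

corpus = [
    Document("paper-1", "tokenization splits text into units"),
    Document("post-2", "cats sleep a lot"),
]
print(answer_with_citations("how does tokenization work", corpus))
```

A real product would swap in vector search and an LLM summarizer, but the `citations` list exists either way, and that list is what a per-use payment system could meter.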
If we want creators to be treated fairly, the question is more than just "should they be paid?" It's also "where can fairness be enforced in a way that actually holds up in the real world?"
💬 When you say "AI should be fair to creators," what would you want that to mean in practice?
🎥 Up Next: Sharmeen Aqeel on Trust, AI, and Founder Velocity
Sharmeen Aqeel is a product design leader turned founder, building Lyyvora, a lending marketplace for healthcare clinics. She is using AI everywhere it helps her move faster, from prototyping to outreach to borrower-lender matching, and she's equally clear about where she refuses to use it.
Up next, Daniel and Sharmeen talk about how AI can accelerate almost everything… but it can't borrow trust on your behalf. They get into what she automates, what she keeps human, and how she thinks about building a network and a community that will outlast whatever the next wave of tools makes "easy".
The podcast episode drops this Friday! Follow along in your favorite player so you get notified. Or check out the YouTube Channel and hit the 🔔.
✨ When "Fair" Moves from Principle to Practice
For years, the default assumption was that if content is online, someone will scrape it. If creators wanted protection or compensation, they had to lock it down with paywalls.
AI companies took advantage of that and mostly got away with it. Yes, Anthropic got a $1.5B bill… but, if you look closely, it was for using pirated content, not for scraping content… so…
Now, high-value writing is moving behind paywalls, creators are looking for ways to prove ownership and set permissions, and builders are realizing that "just use the open web" isn't actually a stable data strategy if you care about quality.
Julie's point is that this whole game only works long-term if there's a middle layer that can do the boring parts well: the ability to fingerprint content, declare which kinds of AI use are allowed, and make compensation frictionless enough that it happens in real workflows.
This makes me think of the conversation Daniel had with Dr. K last year on how the tyranny of convenience will always be something we'll have to deal with as a species: tech companies are inevitably going to take the path of least resistance.
I wonder what it will look like.
As always, thanks for listening. 🙏
P.S. Artificial Insights is a podcast on how AI is changing work, life, and us. Every other Friday, Daniel Manary sits down with leaders, thinkers, and builders in AI to have candid conversations on what they're doing right now and how they think the world will change. If you're a podcast listener, we'd love for you to check us out!
P.P.S. If you liked the episode, please subscribe, share, and/or give the show 5 stars. Every little bit helps! ⭐
Artificial Insights is a part of the Manary.haus family ❤️ Come say hi!




