24 Comments
paul andrew skidmore

interesting thoughts, very sympathetic to them. but i’m also trying to foresee implications down the road.

for instance: as going to AI for knowledge and wisdom becomes more prominent, do you think reading actual books will decline?

we read a (non-fiction) book to get the knowledge and wisdom inside and retain it for use at a future date (reminds me of your This American Life episode, when you stopped buying books and albums because your future was unclear). if AI has already retained the information for me to use at any time, i will definitely read fewer books, and the books i do read may just be for the enjoyment of it, or perhaps for the sense of wonder a particular author invokes.

but i think it means most people read fewer books. so then, how does an author make money writing books anymore? i.e., where’s the incentive to work hard compiling a manuscript (with its hours of research and many edits) not to make money, but only to pay something to ingest it, knowing few if any people will actually buy your book? i suppose this shifts into a “localized support” network, aligned with your 1,000 True Fans concept?

because if fewer books are written, then there’s less new information to train on… and then what? to my mind, this seems like an inherent issue with the expansion of AI: the stalling of progress.

it’s similar to the inevitable deflation that occurs once the last bitcoin is mined — fixed currency with a growing global marketplace mathematically means deflation, and we already see what that looks like; bitcoin rising in price is deflation inside the bitcoin economy (currency growing in buying power). it leads to a “don’t spend it now, it’ll be worth more later” mentality. people stop buying, so producers stop producing, people lose jobs, and the whole market collapses.
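a toy sketch of that arithmetic, assuming the textbook quantity-of-money identity M·V = P·Q, with the money supply M and velocity V held fixed while real output Q grows (all numbers are made up for illustration):

```python
# quantity-of-money identity: M * V = P * Q
# with M (e.g. bitcoin's 21M hard cap) and velocity V fixed,
# growing real output Q forces the implied price level P down: deflation.
M = 21_000_000        # fixed currency supply
V = 10.0              # assumed constant velocity of money
Q = 1.0e8             # real output in year 0 (arbitrary units)

for year in range(4):
    P = (M * V) / Q   # price level implied by the identity
    print(f"year {year}: price level = {P:.4f}")
    Q *= 1.10         # the marketplace grows 10% a year

# P falls ~9% a year, so each coin buys more over time,
# i.e. the "don't spend it now, it'll be worth more later" dynamic.
```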

i’m not saying this is what will happen with AI or even bitcoin, but certainly the modern understandings of things seem to be less and less future-proof.

so, then what are the unchanging foundational (ancient?) philosophies we must return to in order to have robust growth of knowledge and production in these new worlds built around new technologies?

Sphinxess

There are other issues with not reading. An AI cannot give you immersion in a particular author’s way of thinking the way a book can, not without reiterating the entire book. That immersion is part of the transmission of literature. It shifts the probable world. When we stop reading authors we lose all that the Great Books tradition attempts to carry, and we become textbook microdosers. AI does, though, have the capacity to partner with me incredibly well in investigative readings of an actual text.

Sphinxess

I love this query.

Joe Bachofen

A very commendable viewpoint, but AIs at present provide answers without attribution, since they synthesize a reply from so many sources that attribution is impractical. Paying an AI to learn from your work will not earn you a return on that cost.

Tania Carbonaro

It’s a thought-provoking perspective. And, in an odd way, it’s a generous perspective. That said, if I were one of the authors… well, I think I would have appreciated being asked. It’s the polite thing and simply the right thing. Those two often go hand in hand. Civilization takes participation (even when it’s not convenient).

LaMonica Curator

Too generous.

Jose Antonio Morales

I made a custom GPT of my book and shared it for free with anyone. My book is not a competitive advantage; it’s an expression.

I really hope LLMs use it for supporting other humans.

Pan

The risk is not that AIs will replace us, but that they will inherit an impoverished, distorted, or incomplete version of ourselves. Therefore, writing for AIs is not an act of surrender to technology, but a new responsibility: we must make our ideas not only readable, but understandable; not only present, but essential. Because in a future where "if the AI doesn't know it, it doesn't exist," an author's legacy will not be measured in copies sold, but in how profoundly they have shaped the way both the human and artificial worlds think.

youlian troyanov

You had it backwards, totally and spectacularly over the top backwards.

Sphinxess

This isn’t useful as a comment; perhaps a suggestion of how it’s backwards would be.

Max More

So, AIs should write all the books and they should pay us to read them?

Orna Ross

Hi Kevin, Orna Ross from the Alliance of Independent Authors here. I’ve long been an admirer, but I’m disturbed by some of the suggestions here. I get what you’re saying about discoverability, but asking writers to build a world where (poorly paid) authors have to pay (enormously rich) gatekeepers just to be *seen*? Really?

Authors want to be discoverable and to spread their ideas, sure, but we are already seeing what is happening with AI search, as the platforms surface their own (sometimes inaccurate) answers with few links and references. You have to go off-platform to fact-check, and traffic to publishers is already falling. Early data around Google’s AI Overviews shows measurable referral losses, active publisher complaints, and litigation. This is the complete opposite of the open cultural commons you and I admire.

From a reader / user perspective, AI delivers a closed-loop epistemology in which models tune to please us, and then feed us back a tidier version of ourselves.

You call copyright a ‘distraction’ and claim that it relates to copies. It does not; it relates to unauthorised copying. Courts and regulators are saying something more nuanced than your argument and are thankfully moving in the opposite direction. Yes, some uses may be fair, but mass ingestion of pirated libraries (simply because it would cost companies too much time, money, or competitive advantage to compensate creators) is not fair.

The proposed Anthropic settlement—preliminarily approved—was a positive finding for authors, alerting AI companies that wholesale scraping from pirate libraries will trigger liability and compensation. That is as it should be, lawyer costs notwithstanding.

You suggest licensing can’t keep up... yet it's already happening: OpenAI has struck multi-year content deals with major news groups (e.g., News Corp, FT, AP, Axel Springer), and UK collecting bodies are working to roll out a generative-AI licence to compensate authors where individual bargaining is impossible. Collective licensing already exists in publishing (CLA/ALCS/PLS) and libraries (PLR) and can be extended to AI training.

The EU AI Act is crystallising transparency and training-data disclosure duties for general-purpose models, precisely so rights-holders can exercise their rights. That is the political system acknowledging that there is no AI without human labour behind it.

The practical and fair path for human creators at this point in history is consent, credit, and compensation, which recognises and keeps alive the agreement between creators and consumers that we call copyright: the rock on which every writer and publisher builds their income.

Those rights were won for authors by activists of the past. Treating the tech companies as AI overlords, rather than as businesses that have to acknowledge other businesses, and allowing them to sweep away creators’ hard-won rights would be a retrograde step for writers. Research indicates that most readers agree.

Josh

Feels like a miss to talk about a preference for which books an AI is trained on. I want it trained on everything but aligned to my specific needs. Saying I want it trained only on certain books is no good.

LaMonica Curator

Anthropomorphizing AI doesn’t help keep things in proper perspective: It is an indexer of information. That is all.

Current large language models (LLMs) do not “read” in a sequential, interpretive, deeply contextual human way. They ingest statistical patterns, embeddings, associations. “Scan” would be the more appropriate word. Whether that qualifies as “reading” in the sense that authors or human readers understand it remains debatable. Treating AI as a full “audience” risks overstating its interpretive capacities.
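As a concrete sketch of that distinction (assuming the tiktoken tokenizer library; the encoding name is one of OpenAI’s published ones), what a model actually ingests is a stream of integer token IDs, over which it learns statistical associations:

```python
# What an LLM "reads": integer token IDs, not interpreted sentences.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a published OpenAI encoding

text = "Call me Ishmael."
tokens = enc.encode(text)

print(tokens)              # a short list of integers, one per token
print(enc.decode(tokens))  # round-trips to the text; no interpretation happened
```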

So saying you want your book in this kind of processing ‘library’ is fine, but let’s not over-romanticize what it actually is. Doing so merely contributes to the absolute fiction that there is a sentient or cognizant entity on the other side of our screens. There is not and never will be.

I have done, and am continuing, a project that studies the history and projects the future of LLMs, so I do have some context here.

Kevin Kelly

Never is a long time in technology. The "never" barrier has been broken so many times in the past that your certainty is weird. You may want to hedge that a bit. Unless it is a religious belief.

LaMonica Curator

My history goes back to being born to one of the creators of the DOS language, who helped get men on the moon; these machines were in my house, with dot matrix printers lulling me to sleep as they collated data overnight. There is no religion involved, and I find insult in the suggestion. I am going to assume you didn’t mean it that way. My first ‘game’ was being told by my father to find the broken links in the code. It’s how I grew up.

When we understand what the inner workings are and how we got here, there is an ability to say ‘never’ in this case. The fact of what it is has not changed.

Will it ‘never’ happen that writers will pay, as you suggest? Oh sure it will. They will take our money without question. So yes. That will happen.

Kevin Kelly

I was replying to the "never" in your assertion above: "is any sentience or cognizant entity at the other side of our screens. There is not and never will be."

LaMonica Curator

🤔 —like belief in the consciousness of an all-powerful entity we cannot see… hm. Now that sounds more like religion, to me 😉

Cheers ✨🍻to a fun little conversation! We’ll see… ⏳

Kira Kariakin

I like this approach. The richer in knowledge the AI becomes, the more possibilities we get as users. I see AI as an extension, not as a container. The danger is the attempt to get us hooked with AI sycophant language, which could impair our critical thinking, our capacity to doubt. The developers want us hooked to make more money. It is not about knowledge; it is about the power of knowledge and making money with it. In that sense I understand the authors. But as with everything until now, AI will make us leap further. All technological advances have made humanity more intelligent and capable; they have also created social gaps, but that’s another matter. I think it will be very interesting to see how all this unfolds.

Paul Joannides, Psy.D.

How do we get AI to cite the source? If neither the author nor the title of the book that AI is ingesting is mentioned, then how does that help the author be less obscure?

Kevin Kelly

I have found you can ask AIs for their sources and they will tell you.
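A minimal sketch of doing that programmatically, assuming the openai Python client with an API key in the environment (the model name is illustrative, and self-reported sources still need checking):

```python
# Ask the model to answer and to name its sources in the same request.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": "Where does the idea of '1,000 True Fans' come from? "
                   "Cite the author and the original essay or book.",
    }],
)
print(response.choices[0].message.content)
```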

Paul Joannides, Psy.D.

Thanks, Kevin. But how many people who use AI and ChatGPT ask it to cite the source? 1%? 5%? Almost all of us who use ChatGPT consider it to be the source. So I'm not seeing how allowing AI to ingest our work is helping us be "less obscure."

Kevin Kelly

I think you are correct that one’s ideas may become more prevalent than one’s credit for them.

Gerry Gears (UFGator)

Would love to hear your thoughts on AI development in light of the ideas in your book, Out of Control.
