Models of Meaning For An Exploding World

An evocative but cheesy AI-generated illustration of TS Eliot's "Love Song of J. Alfred Prufrock" with three sirens singing in the street as a skinny bald old man eats a peach, and thick yellow smoke.

I always used to wonder what it would feel like to live through the sudden collapse of a peaceful and integrated society. We like to scare ourselves with this question when we read books or watch movies about military invasions or hostile occupations or war crimes: what if we ourselves ever have to live through the sudden explosion of everything we consider normal?

I once spent a few months reading about Korea, which before 1950 was a country unified by deeply shared historical, societal and cultural roots. Korea suffered a fast explosion in 1950, a division imposed suddenly from outside by warring nations that left the south and north terrified of their own former countrymen. Korea’s deep wound has never healed. In the last few years we watched helplessly as the same vicious pattern befell Ukraine, also split in two by the reckless exploitation of powerful outside forces.

Korea and Ukraine were fast cultural explosions, sudden violent collapses of normalcy that overtook the populations trapped inside. What’s been happening in USA lately feels more like an exhausted implosion than a shocking explosion. USA’s collapse into banal Trumpism and gross corruption has mostly felt absurd and stupid. And, like most absurdist comedy, USA’s ten-year collapse into open corruption and fascism has been painfully slow.

And yet, in another sense, USA’s collapse has felt more like an explosion than an implosion. Families and friendships and work relationships are strained to the breaking point by opposing loyalties or circumstance. I see my neighbors in USA exhibiting behaviors that characterize a badly war-torn country, even though we are not in an open civil war. USA is war-torn because of the wars we fund in Gaza and Asia and Latin America and Africa, and because of our hostility to China, and our regime-change adventurism and reckless murder in Venezuela and the Caribbean.

USA’s crisis feels like an explosion to me because percussive loudness is the tenor of our times. Because we live in the Gasoline Age.

Well …! I didn’t pop back up here on Litkicks to talk about the pathetic political condition of the planet I live on. This is a topic I cover heavily in my monthly podcast at World BEYOND War. I hope you listen to these episodes, which I put a lot of thought into. Here are a few recent shows:

Africa’s Philosophy of Peace: An Interview with Sellah King’oro
Abya Yala is a Zone of Peace: An Interview with Gabriel Aguirre
Blood in the Method: Debating Ukraine with Tommaso Delfanti
Abolition at #NoWar2025 with Tabitha Lean
Ideals and Reality: A Conversation With Tommaso Delfanti
War Abolishers of 2025: Roger Waters, Francesca Albanese and Ralph Nader

I popped up here because I want to start a series of blog posts in the coming year about a topic that is both annoyingly trendy right now and also vastly misunderstood: artificial intelligence and large language models, which I enjoy experimenting with.

I disagree with virtually all the hype out there about how AI and neural networks and large language models (LLMs) are changing our society. I do think they are changing our society, but not in the ways you’ll read about elsewhere. Many tech pundits believe AI is quickly approaching a sort of self-trained perfection, and that a future version of ChatGPT will operate without mistakes and hallucinations. I am not convinced about this. I also think the mistakes and hallucinations we find in today’s popular AI engines are revealing and valuable, and point towards surprising discoveries about human nature that are worth paying attention to.

It brings me joy that AI image generators make such vivid mistakes. I tried to get Gemini to draw me a good magazine-style illustration of T. S. Eliot’s poem “The Love Song of J. Alfred Prufrock” based on my own prompts. Of course the results have been ridiculous, and yet I’m quite pleased by the surreal downtown cityscape in the picture at the top of the page, even though it doesn’t really capture the sinister and elusive nature of the poem’s curling yellow smoke. And “Secret Things” is not a realistic name for a seedy pub, and the three singing sirens have the physique of Barbie dolls.

The picture at the top of the page is my best version, after several false starts and frustrating sessions in which every request made the picture worse. The Beat maxim “first thought, best thought” often holds true for LLM prompts.

I struggled to make Gemini understand that the sirens in Eliot’s poem are seductive women singing on rocks, not fire engines. Just like a person can’t not think about an elephant if you say the word “elephant”, apparently when you say “sirens” to a large language model, it can’t not think about fire trucks.

A pretty cheesy attempt at an AI-generated illustration of TS Eliot's Prufrock, with fire engine sirens, thick yellow smoke and a giant crab.

This preliminary result is bursting with errors. The crab is big enough to eat J. Alfred Prufrock’s head. The title names “Alferd” instead of “Alfred”. We are supposed to be looking at a skyline of St. Louis, Missouri in the first half of the 20th century, before the Gateway Arch was built, but an LLM can’t draw a St. Louis skyline without the Gateway Arch.

These confusions carry meaning. Like night dreams, LLM hallucinations point to hidden truths and inner connections and parallels and sparks of insight. The skinny, stiff, bald J. Alfred Prufrock character in both of these pictures vaguely resembles TS Eliot. But he also resembles the writer William S. Burroughs – a postmodern literary genius who also came from St. Louis, Missouri, and who might likewise have been found lurking around the midwestern city’s bawdy streets at night eating a peach, along with Prufrock and Eliot. The resemblance to William S. Burroughs, which I did not suggest, is a deft and subtle touch.

Even more wonderfully, this composite Burroughs/Eliot character also slightly resembles 20th century novelist Henry Miller, author of “Tropic of Cancer”. Henry Miller wasn’t from St. Louis, but he was an American writer who explored the same transgressive urban underground cultures as Eliot and Burroughs in his stories.

These are the kinds of delightfully weird results LLMs often deliver, and this is not accidental. J. Alfred Prufrock and TS Eliot and St. Louis and William S. Burroughs and Henry Miller all exist as tokens in the LLM called Gemini, and they all sit near one another in the model’s embedding space, based on the texts this LLM was trained on. This composite character is the flip side of a hallucination or a mistake. It’s an AI quirk that carries meaning – even though AI experts try to explain an LLM’s quirks away instead of delving into their potential significance.

TS Eliot, William S Burroughs and Henry Miller in separate photos all staring into camera wearing a suit
White male 20th century American authors TS Eliot, William S Burroughs and Henry Miller

LLMs may eventually be able to cover up hallucinations – not by stopping them from happening, but by catching and correcting them before they reach users. But covering up hallucinations is different from thinking without hallucinations. I don’t think either artificial or human intelligence will ever be able to operate without hallucinations, and this is one of many ways I disagree with the conventional wisdom about what intelligence is and how machines can or cannot simulate it.

I’ve got a few contrarian things to say about AI and large language models, and I’m looking forward to using this blog for a freeform exploration of this topic in 2026. I hope this project will inspire me to write more often than I have through the blur of 2025.

It’s unfortunate that so much media coverage of AI and LLMs amounts to exaggerated hype. Tech companies and investors are motivated to advance this hype, and the AI boom reminds me of the dot-com boom that I lived through in the 1990s (I wrote a memoir about this called Ten Years in Silicon Alley).

I’m pretty sure the AI investment boom will crash just like the dot-com boom did in 2000 and 2001, though sadly that crash didn’t stop investors from moving on to an even bigger real estate and arbitrage boom that crashed even worse in 2007 and 2008. I don’t believe a word that any business pundit or investment expert says about AI or LLMs.

As a very experienced software engineer, database architect and web applications developer, I believe that most of the hype about how AI changes the art of coding is off the rails. We have always had Stack Exchange and vast open source repositories to help us. The art of coding isn’t so much about how fast you can produce code; it’s about how you can orchestrate your code, your data sources, your users, your business model and your interfaces into a harmonious perpetual machine. Sure, AI helps. But software development is an art and a science, not a race to turn out lines of code.

Since I live in a society that has been hallucinating about the benevolence of hoarded wealth (also known as “capitalism”) for a couple of centuries, it’s not surprising that so much of the conventional hype about AIs and LLMs has been about the ways they can make humans more efficient and productive. I’ll save my thoughts on the economic impact of AI in the homeland of Ayn Rand for a future blog post.

I am quite sure it’s a myth and a lie that human beings have ever wanted efficiency or productivity. What we actually want is harmonious co-existence and empathy, and if we can start letting each other breathe and live, our productivity will take care of itself. I believe that if efficiency and productivity actually mattered on planet Earth, we would not be constantly at war. War is very inefficient and counter-productive.

When I talk to friends about AI I often find myself opposing cliches faster than I can talk. Everybody wants to hate on AI “slop”, even though AI generated artwork can often be beautiful, and still requires artistic skill and inspiration to create. Writers and authors and editors have valid concerns about how AI will affect their universes, but when I talk to writer friends I often feel that they are worrying about the wrong things and not worrying about other things that can be harder to understand, and require reading articles that aren’t in the Atlantic (please) or the New York Times.

Even though I believe most technology “experts” are completely missing the boat about the future significance of LLMs, I absolutely do believe that AI and LLM technology will be vastly impactful on our society! It already is vastly impactful. One benefit of AI that we are already enjoying is instantaneous and free translation from one language to another, which is opening doors and enabling new relationships by removing language barriers all across the world – and yet this positive impact is rarely mentioned when discussing the benefits of AI.

AI is impactful, and will continue to be more and more impactful, but we need to stop framing the conversation around money or work. We need to start listening to what LLMs are telling us about ourselves.

Understanding LLMs is a project for philosophy, psychology and the arts. The discovery of neural networks and predictive transformers is rapidly changing the landscape in every field of science and humanities and philosophical inquiry: epistemology, ethics, metaphysics, aesthetics, religious studies, sociology, political science.

If Plato, Aristotle, Rene Descartes, Baruch Spinoza, John Locke, David Hume, Immanuel Kant, William James, Karl Marx, Sigmund Freud, Carl Jung, Bertrand Russell, Ludwig Wittgenstein, Jean-Paul Sartre or Iris Murdoch were able to experiment with LLMs, every one of them would revise and advance their philosophies based on what they’d be able to discover and learn from this innovation.

But I don’t mean they would learn by using it. I don’t picture Wittgenstein or Sartre asking ChatGPT to write their books. They would learn by observing how LLMs behave and understanding how they work (which is not that difficult to understand, really).

They would learn by picking up the vocabulary and taxonomy of deep learning: activation functions, loss functions, reinforcement learning, tool use, retrieval augmentation, gradient descent. Each of these terms provides metaphors that we will find increasingly useful as writers, psychologists and philosophers in years to come.
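Gradient descent itself is easy to see in miniature. Here's a toy sketch in Python – my own illustration, not anyone's production code – that minimizes a one-variable loss function the same way a training loop nudges a network's millions of weights:

```python
# Toy gradient descent: minimize the loss f(x) = (x - 3)^2.
# Real deep-learning training applies this same idea to millions
# of parameters simultaneously, but the metaphor is identical:
# measure the slope of the error, step downhill, repeat.

def loss(x):
    return (x - 3) ** 2

def gradient(x):
    return 2 * (x - 3)  # derivative of the loss

x = 0.0                 # start far from the minimum
learning_rate = 0.1
for step in range(100):
    x -= learning_rate * gradient(x)

print(round(x, 4))      # settles toward 3.0, where the loss is zero
```

After a hundred small downhill steps, `x` has walked itself to the bottom of the valley – no one told it where the minimum was.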

The impact LLMs are having on the way we understand human nature is the sort of thing I’ve been wanting to write about for a while, and I have completed a lot of research and experimentation, and I hope to really dig into this topic on this blog in 2026.

We can begin our inquiry by asking exactly what kind of innovation our current AI systems are: an invention or a discovery? This is a key question that points to the deeper question of whether or not AI systems mirror anything in nature. The steam engine was an invention, as were photography and the digital computer. But Isaac Newton “discovered” gravity, and Albert Einstein’s theory of relativity also seems to be a discovery rather than an invention, because it revealed to us a structure or a system that already existed whether we observed it or not.

Today’s conventional wisdom seems to consider ChatGPT an invention, but I am sure it was more of a discovery.

This reflects the question of the extent to which artificial intelligence is or should be biomorphic. The term “artificial intelligence” itself points towards biomorphism. We use phrases like “machine learning”, “perceptron” and “neural network” and “attention”, because artificial intelligence builds models that imitate or simulate organic aspects of the biological brain. But where do we find biomorphic parallels, and where do we not find them, in the various advanced software models that power ChatGPT, Gemini, Llama, Claude and DeepSeek?
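The perceptron is probably the clearest example of that biomorphic lineage: weighted inputs, a threshold, a decision to “fire.” Here is a toy perceptron in Python – purely illustrative, with invented numbers – learning the logical AND function using the classic 1950s update rule:

```python
# A minimal perceptron, the 1950s model that gave "neural networks"
# their biomorphic vocabulary: weighted inputs summed against a
# threshold, like a cartoon neuron deciding whether to fire.
# This toy learns the logical AND function.

def step(x):
    return 1 if x >= 0 else 0

weights = [0.0, 0.0]
bias = 0.0
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

for epoch in range(10):
    for inputs, target in data:
        output = step(weights[0] * inputs[0] + weights[1] * inputs[1] + bias)
        error = target - output
        # classic perceptron learning rule: nudge weights toward the answer
        weights = [w + 0.1 * error * x for w, x in zip(weights, inputs)]
        bias += 0.1 * error

predictions = [step(weights[0] * a + weights[1] * b + bias)
               for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(predictions)  # [0, 0, 0, 1] -- it has learned AND
```

A few dozen tiny corrections and the “neuron” fires only when both inputs are on. Whether that counts as biomorphic or merely biomorphic-flavored is exactly the question.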

While the earlier vocabulary of AI research favored biomorphic terms, newer phrases like “word embeddings” and “vectors” and “transformer” and “convolution” and “hidden layers” don’t really point towards biomorphism, and yet this is the brilliant vocabulary that led to the launch of ChatGPT and other LLMs. Does that suggest that LLMs are not biomorphic, or are they biomorphic in different ways? This is an important question because it leads to the question of what LLMs can tell us about our own minds. (I believe they can tell us a lot.)

It’s common to say that LLMs are “just big stacks of numbers” or that “nobody at OpenAI or Anthropic or Google understands how it works”. These are definitely over-simplifications. I have studied AI at the level of word embeddings and neural networks and I believe I understand how LLMs work.

LLMs are not just random stacks of numbers. These matrices carry meaning, even though the meaning is found via back-propagation and humans can only peer at the changing numbers from a distance.

As I’ll explain in a future blog post that I’m really looking forward to writing, it’s important to understand that there is meaning in the n-dimensional mathematical matrices of word embeddings that make up the “bodies” of LLMs. We just need to know where to look for the meaning inside these matrices, and how to recognize the meaning when it’s staring us in the face.
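To give a tiny hint of what I mean, here is a hand-made toy in Python. The three-dimensional vectors below are invented for illustration – real embeddings have thousands of learned dimensions, not numbers I typed in – but the principle is the same: words that appear in similar contexts end up pointing in similar directions, and cosine similarity measures that closeness.

```python
# Toy illustration of meaning in embedding space. These tiny vectors
# are invented by hand for the example; real LLM embeddings are
# learned via back-propagation, not written down by a human.
import math

embeddings = {
    "eliot":     [0.9, 0.8, 0.1],
    "burroughs": [0.8, 0.9, 0.2],
    "firetruck": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# The two St. Louis writers sit close together in this invented
# space; the fire truck points somewhere else entirely.
print(cosine(embeddings["eliot"], embeddings["burroughs"]))
print(cosine(embeddings["eliot"], embeddings["firetruck"]))
```

In a real model, nobody placed “eliot” next to “burroughs”; the training texts did. That is where the meaning hides in the matrices.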

It’s common to hear the advent of AI compared to the invention of the steam or gasoline engine. I think we should also compare it to the discovery of DNA in the 1950s and 1960s. This was part of a larger series of discoveries about genetic causality that included revelations about all-important messenger-RNA and amino acids and protein folding and everything else that makes the magic of genetics work.

The key moment in this discovery was James Watson and Francis Crick’s famous announcement in a pub in Cambridge, England in 1953 that deoxyribonucleic acid has a double helix structure which enables infinite replication. What has become widely acknowledged only in recent decades is that Watson and Crick took much of their glory from fellow researcher Rosalind Franklin. Here is the famous X-ray diffraction photograph of a DNA molecule – Franklin’s “Photo 51”, shown to Watson by Maurice Wilkins without Franklin’s knowledge – which led to their public announcement that the structure of DNA must be a double helix.

An x-ray image of a DNA fiber
Photo 51, Rosalind Franklin’s world-changing x-ray photo of DNA.

The role of cellular DNA in genetics was already understood by 1953, but the realization that DNA is a double helix was tremendously significant because a double helix is a structure that allows infinite self-replication while carrying massive amounts of specific information. This was the long-sought solution to a puzzle of genetic science. Here’s an early Watson/Crick model of DNA as a double helix:

A primitive physical model of a DNA molecule resulting from Watson and Crick's research, in a Berlin museum.

The most stunning results achieved in genetic science during the 1950s and 1960s revealed that DNA is a natural container of digital information. Whereas computers store digital data in binary units (and humans, for some reason, use base 10), it seems that DNA and RNA use a base 4 system in which each unit of information is designated G, C, A or T/U, and these are used in triplets as a base 64 system to produce a biological programming language that includes codes for 20 amino acids along with stop/start commands. I’ve only recently begun reading about DNA and RNA, and as a software developer I’m excited to learn (rather late in my life, as I haven’t read much science) that there are powerful biological digital computing devices inside our cells.
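This genetic “programming language” can even be sketched in a few lines of Python. The codon table below contains only four of the 64 real codons – just enough to translate one invented snippet of messenger RNA – but the mappings themselves are taken from the actual genetic code:

```python
# A sketch of the base-64 "programming language" of RNA: each codon
# (a triplet of the four bases) maps to an amino acid or a stop
# command. Only 4 of the 64 real codons are included here; the
# mappings shown are accurate to the standard genetic code.

CODON_TABLE = {
    "AUG": "Met",   # methionine, which also serves as the "start" signal
    "UUU": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "UAA": "STOP",  # one of the three stop codons
}

def translate(mrna):
    """Read an mRNA string three bases at a time, like a ribosome."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        amino = CODON_TABLE.get(codon, "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

# An invented 12-base message: start, two amino acids, stop.
print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```

The ribosome in your cells is doing essentially this loop, billions of times a day, which is exactly the "biological digital computing" that delights me.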

I thought it would be fun to use Gemini’s image generator to create a fake “science class” poster illustrating the fact that layers of an LLM and strands of DNA are both containers of digital information. This result is about as humorously banal as I expected it to be when I wrote the prompt:

A simulated science textbook illustration juxtaposing the layers of an AI large language model transformer and a strand of DNA indicating that they are both containers of digital information.

The challenge I am creating for myself in 2026 – my New Year’s resolution for the coming year – is to stop being lazy here on Litkicks, and to start using this blog to explore questions about society, culture, psychology and philosophy that reflect the AI revolution we are supposedly living through.

I already know what the moral of the story is: we don’t have any idea, here in the 2020s, how AI and LLMs will impact us, just as nobody was able to predict back in 1953 how the discovery of DNA’s structure would impact the world.

The punchline here – although there’s nothing funny about the suffering behind this punchline – is that one of the greatest tangible cultural impacts of the discovery of DNA has been the use of genetic testing to investigate crimes and release prisoners who had been unjustly convicted. This is a beneficial use of DNA that has been personally and positively impactful in many people’s lives, and has revolutionized the field of criminal justice.

The double helix structure of DNA was discovered in 1953, and there was probably a lot of big talk back then about how DNA would change the world. What nobody realized is that there were prisoners sitting in jails all over the world back then in 1953 who had been convicted on weak or racist evidence. Nobody but the prisoners themselves and their families and friends knew how many people were languishing in jails for decades simply because they looked like somebody else. The first prisoner to be freed from prison after proving with DNA evidence that they had been falsely convicted was Gary Dotson in 1989. It took 36 years for this trend to begin.

Newspaper article about Gary Dotson's exoneration based on DNA evidence.

It took 36 years for somebody to figure out that DNA evidence could be used to help prisoners falsely jailed. It’s only been since then, since 1989, that we have realized how shockingly bad and false and racist our justice systems are, and prisoners are being freed by DNA evidence still today.

This is why I say I don’t believe anyone who thinks they know how LLMs will change the world. ChatGPT became a smash hit with its public launch in November 2022. So maybe by 2058 – 36 years later – LLMs will free someone from jail. I hope so.

I’m looking forward to writing more about all of this in 2026. Happy New Year, friends.
