
Yevgeny Zamyatin’s 1924 novel We tells the story of a future society run in strict adherence to mathematical order. Train timetables are considered sacred, and mathematics is the highest vocation. Citizens are assigned numbers rather than names and paired by the state with predetermined sexual partners.
They also spend their lives in buildings entirely made of glass, the better for government officials to ensure that everyone is playing their orderly part in the smooth running of a mechanistic society.
D-503, the novel’s protagonist, chances upon a rebellious young woman, I-330, who shows him that, outside the Green Wall that sets the boundaries of the One State, there are humans who live free lives in the world of nature. They are spontaneous, rebellious, and human. Eventually, the government becomes aware that D-503 has deviated from his state-established course by conceiving a child with his partner. He is more or less lobotomized into obedience.
We was banned in the Soviet Union until the end of the 1980s, just before the Soviet system collapsed. Sovietism itself depended on a high level of surveillance, not achieved through glass walls, of course, but through bugged homes, wiretaps, and informants. History has known many tyrannies, but totalitarianism, a dominance so complete that it extends to every corner of life, is only possible with technology that can surveil. Today, there are new means of surveilling, means more effective than any telephone bug or nosy neighbor. But the gloom of supervision that casts a shadow over us today is not just that of the government, but that of private industry, too.
We know, of course, and have known for a while, that advertisers track our online browsing habits to push their goods on us more effectively. But something new, and potentially more destructive, has arrived: the large language model. This tool gives corporations and governments an unprecedented level of power and access to watch and manipulate ordinary people.
Its power and operation are parasitic. The information you put on the internet is what brings the LLM into being, sustains it, and increases its power. It feeds off of each chatbot prompt, every social media post, almost every online action. The more of this it gets, the stronger it becomes. The stronger it becomes, the more it can be used as a tool to surveil and to manipulate you.
What to do in this uncomfortable state of affairs? Intelligent laws and regulations will help, but, in the present, you can make an act of defiance. You still have the opportunity, right now, to step beyond the Green Wall, out of your glass house, and into the fresh, free air.
Online privacy has been an issue since the creation of the internet. In the 1990s, parents and educators were very cautious about children posting anything online, especially anything personal. “Anyone can look at that and it stays on there forever,” I remember hearing. In time, tech companies grew adept at overcoming this hesitancy. By appealing to our natural tendencies toward social connection, status, distraction, novelty, fear, and outrage, they lured us into spending countless hours on their sites and apps, building up their store of advertising data.
What is new now is that the ability of industries and governments to analyze and sort that data has taken a huge leap forward. An LLM can now rapidly collate the data about you from across a host of platforms and media. Your keystrokes and clicks, your high school photos, your Google searches, your purchases, your friendships, your emails, and on and on. The LLM can build a thorough predictive model of you. The government is already using these tools to track illegal immigrants and to target enemies. Who knows what companies are doing behind closed doors?
But it’s more than this. Every time you use a chatbot, the things you say or type are absorbed by the LLM. It feeds off of your style of expression, your questions, your responses, to build and shape itself into a more precise simulacrum of you. As the LLM calculates a more accurate linguistic picture of you, it becomes better at saying what you want to hear, at drawing you in and addicting you to interacting with it. This picture of you is accessible to the company that owns the chatbot; it becomes very easy for them to peer into your habits, ideas, interests, and manners of expression.
This gives them not just advertising power, but the power to prompt you. The legal scholar Cass Sunstein has already talked about using chatbots to “nudge” people toward more “desirable” behaviors. He denounces manipulation, but the line between a “behavioral nudge” and “manipulation” is a thin one. As the LLM builds a more precise predictive model of you, calculating a sophisticated sense of what appeals to you and what does not, it gets better at leading you and shaping your thinking. It is clear that whoever holds the records of all our chatbot conversations, or whoever uses an LLM to track and predict your actions, possesses an immense power.
Why does the idea of the glass world of We, or the oversight in a novel like 1984, make us uneasy? Because there are parts of our lives that touch so close to home that they must be protected from a crowd of onlookers: words of love shared with a spouse, words of correction or affection to a child, even simple moments alone, unobserved. These are sacred to us. To be watched by unknown eyes would be a desecration, an intrusion upon our inner sanctum, a violation of the hallowed boundaries that mark the dignity of our life as unique and uniquely ours.
But the LLM, or rather those who own it, can see even better than windows and eyes would allow, for we voluntarily feed into it our hidden thoughts and insecurities, our shameful habits, our knowledge, and our questions.
It would be a mistake to think that the LLM is “spying” on us. LLMs cannot spy because they are not alive. They are algorithms that run on electrical hardware, which can track and sort to an astonishing degree and at bewildering speed those signs and symbols that we enter into them. Those signs and symbols, however, may be read by people on the other side of the LLM, those who hold the keys.
The gist is this: Now, more than ever before, our lives are surveillable and manipulable at scale. But we have some agency in this regard. The more we opt out, the less of us the LLM, and therefore its owners, has.
It used to be said that necessity is the mother of invention. In the LLM age, the adage may be reversed. Today, invention is often the mother of necessity. The LLM does all kinds of things we do not need it to do. It summarizes and analyzes for us what we could have done better ourselves. It answers questions that books, conversations, lectures, and articles could have answered much better. It shortens what doesn’t need to be shortened, composes what doesn’t need to be composed, and makes us increasingly dependent in areas where we once had competence.
In certain cases, the need for an LLM only arises once we have already built a habit of using it. When we fork over to the LLM our reading, thinking, writing, and so on, our own capacity diminishes, and we become dependent upon the machine until it is leading us along by the nose. This is where the manipulative potential comes in. The more we use the LLM, the worse we become at analyzing, thinking through, and taking in information, and the more we become dependent on the answers the chatbot gives us. And these answers can be shaped by whoever is behind the wheel.
We can distinguish here the use of an LLM to replace substantive writing, thinking, analyzing, composing, decision-making, judging, and so on, from both rote and highly technical uses. For sorting vast quantities of technical data and identifying patterns, machine learning algorithms are excellent. For automating certain rote tasks, perhaps, they have their use as well. When judging whether it is appropriate and beneficial to use LLMs to complete a task, we ought to ask certain questions. Is this diminishing my own capacity? Is it making me less free? Is it disconnecting me from reality?
I propose, as a general rule of thumb, that we ought not to use an LLM for anything that we were capable of doing well ourselves with other tools before the LLM came along. Writing, reading, analyzing, communicating—these obviously are well within our wheelhouse. So too are matters of design and artistic production. Moral decision-making is most definitely the province of humans alone. Because LLM chatbots use us as much as we use them and because we degenerate in the process, we ought always to use other means of achieving our goals when we reasonably can.
This is especially so when it comes to anything we could have achieved through collaboration and conversation with others. Suppose you have a question about your career. How much more delightful and fulfilling would it be to identify a mentor who could show you the way? It might be a little harder. You might have to be nervous and a bit embarrassed and a bit frustrated, but you will be richly rewarded. Suppose you are wondering about some abstruse academic topic. Try books and articles—you will gain much from the act of reading that the LLM cannot deliver, like the time and breath to do your own reflecting, digestion, and analysis and the growth that comes from struggling to understand and argue with ideas on your own. And if you are still stumped, consider emailing someone, calling someone, or meeting someone for coffee. Who knows, you might make a friend. And how much better would that be than sitting alone in the dark, staring at a screen?
And then there is communication. Consider how much better and richer it is to sit across the table from someone at a deli than to communicate online. What wonderful surprises await you in such a conversation. If you want to hear opinions and thoughts that are statistical averages of online content, prompt an LLM. If you want to encounter something much more mysterious and rich, talk to any human.
Consider how lovely it is to receive a letter, written in someone’s own unique hand. Think of the fact that no one else, no one who wasn’t intended to, has peered into this intimate communication. Everything you write online is stored on the servers of the companies that own the platforms by which you write what you write. On the other hand, nobody owns your letters, and nobody can rifle through them to find out what it is you like to buy, what your private opinions are, or what it is you say to your lover. Opening someone else’s mail is illegal.
LLMs will produce outputs that sound, to us, somewhat human. But it is not the real deal. In themselves, the bits and bytes have no objective meaning. They are charged transistors. We have decided, by convention, to match these charges with symbols, which, by convention, have meaning to us as a community of human language speakers. This is why I have suggested previously in these pages that those technologists who think they are talking to a person when they use an LLM are delusional.
In the same piece, I suggested that LLMs should not be presented to the public as though they really are thinking, feeling, judging things. There are things tech companies could do to make it clear to users that these tools are not persons at all, like giving them non-human names and designing products that do not present themselves to users with language that creates the illusion of life, thought, and feeling. Anthropic, which has given its algorithm a human name and talked incessantly about its feelings, wishes, and hopes, has presented its LLM tools in personal terms that are totally unnecessary and frankly dishonest.
We probably cannot, in the near term, convince the tech companies to do anything about this. Our addiction and increasing debility are how their bread is buttered. In the meantime, we still have the option to opt out. We can read books, we can think in silence, we can go and observe the natural world, talk with friends, write letters, struggle with problems, and triumph over them. Or learn from failure.
If an LLM really helps you do something worth doing, that you should be doing, that you couldn’t do before, and that isn’t robbing you of more than it gives, go ahead and use it. It might be hugely helpful and productive. But for everything else, there is a great big wide world out there, and you are still allowed to live in it.
















