ChatGPT Is an Ideology Machine

Debates about the new AI focus on “intelligence.” But something more interesting is going on: AI is a culture machine.

ChatGPT and its peer systems bring ideology to the surface, and they do it quantitatively. This has never happened before. (Jakub Porzycki / NurPhoto via Getty Images)

On February 16, Vanderbilt University’s office for equity, diversity, and inclusion issued a statement on the shooting that had occurred shortly before at Michigan State University. The statement was boilerplate, suggesting that the university “come together as a community to reaffirm our commitment to caring for one another and promoting a culture of inclusivity on our campus” to “honor the victims of this tragedy.” The only remarkable thing about the message was that a footnote credited ChatGPT with producing its first draft. The office apologized one day later, after an outcry.

This curious incident throws the most recent panic-hype cycle around artificial intelligence into stark relief. ChatGPT, a “large language model” that generates text by predicting the next word in a sequence, was introduced in November 2022, becoming the fastest-ever platform to reach one hundred million users and triggering a new wave of debate about whether machines can achieve “intelligence.” Microsoft’s Bing chatbot, which runs on the same underlying technology, was hastily reined in after a New York Times reporter published a transcript in which the bot insisted at length that it loved him, that he did not love his wife, and that it “wanted to be alive.”

These debates, including the exhibitionist scaremongering, are mostly vapor. But the systems themselves should be taken seriously. They may supplant low-level tasks in both writing and coding, and could lead to mass cognitive deskilling, just as the industrial factory disaggregated and immiserated physical labor. Because these systems can write code, “software” may disappear as a haven for employment, as has already begun to happen in journalism, with BuzzFeed committing to using ChatGPT for content creation. Automation is always partial, of course, but reassigning some labor tasks to machines is a constant of capitalism. When those tasks are cognitive ones, the machine threatens to blur crucial social boundaries, between labor and management and between labor and “free time,” among others.

Capital conditions are set to change too, with an amusing signal sent when Google’s competitor to ChatGPT, Bard, answered a question wrong in its debut demonstration, erasing $100 billion of the company’s market capitalization in a single day. If anyone is confused about the term “information economy,” this episode should take care of it. But however the next phase of technological capitalism plays out, the new AI is intervening directly in the social process of making meaning itself.

There is also another, less discussed consequence of the introduction of these systems, namely a change in ideology. GPT systems are ideology machines.

Language Models Are the First Quantitative Producers of Ideology

The three main takes on GPT systems are that they are toys, that they are harmful, and that they present a major change in civilization as such. Noam Chomsky thinks they are toys, writing in the New York Times that they have no substantial relationship to language, a human neural function that allows us to divine truth and reason morally. Emily Bender and Timnit Gebru think they are harmful, calling them “stochastic parrots” that reflect the bias of their “unfathomably” large datasets, redistributing harm that humans have already inflicted discursively. Henry Kissinger thinks they are societal game changers, that they will change not only labor and geopolitics, but also our very sense of “reality itself.”

Dear reader, it brings me no joy to have to agree with Kissinger, but his is the most important view to date. GPT systems do produce language; don’t let our friend Chomsky fool you. And while they are harmful, it’s unclear exactly why they are — and even more unclear how observing that is supposed to stop the march of profit-driven engineering. Kissinger is right, alas: GPT systems, because they automate a function very close to our felt sense of what it means to be human at all, may produce shifts in the very way we think about things. Control over the way we think about things is called “ideology,” and GPT systems engage it directly and quantitatively in an unprecedented manner.

“GPT” stands for “generative pretrained transformer,” but “GPT” also means “general purpose technology” in economic jargon. The pun highlights the ambition behind these systems, which take in massive datasets of language tokens (GPT-3, on which ChatGPT first ran, was trained on hundreds of billions of tokens) scraped from the web and spit out coherent, usually meaningful text in virtually any genre. A lot of the details are unimportant, but this one matters: the training tokens are boiled down by the system into a vocabulary of strings (not all of them whole words, but that’s the idea) that can be used to create text. These learned tokens sit in a grid in which each token has a statistical relationship to all the others. Think of this like a grid of lights. Touch one light, and a pattern lights up in the others. Touch another, get another pattern. And so forth. The result is that when I give the system a prompt (“write me an essay explaining Marx’s theory of value”), the grid assembles a small cluster of next-word candidates, each weighted by probability. The system samples one of those at random, appends it, and repeats, writing an essay or an article, or just responding to what’s being said.
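To make that mechanism concrete, here is a minimal sketch in Python. It assumes a toy bigram model, in which each word is predicted only from the one before it, rather than a real transformer, which conditions on long contexts through billions of learned weights; the corpus is invented for illustration. Only the generate-by-sampling loop mirrors what GPT systems actually do.

```python
import random
from collections import Counter, defaultdict

# Toy corpus, invented for illustration (echoing boilerplate of the
# Vanderbilt variety). Real systems train on hundreds of billions of tokens.
corpus = (
    "we come together as a community to honor the victims of this tragedy "
    "we come together as a community to reaffirm our commitment to caring "
    "for one another and to promoting a culture of inclusivity"
).split()

# Count which tokens follow which: the "grid of lights" in miniature.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def sample_next(token: str) -> str:
    """Pick the next token at random, weighted by observed frequency."""
    candidates = following.get(token)
    if not candidates:
        return random.choice(corpus)  # dead end: restart somewhere plausible
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate: touch one light, then follow whatever pattern lights up.
token = "we"
output = [token]
for _ in range(15):
    token = sample_next(token)
    output.append(token)
print(" ".join(output))
```

Run it a few times and it produces different but similarly averaged strings; the statistics of the corpus are the only thing speaking.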

There are lots of ways to tweak and “fine-tune” this system, but this patterning characteristic is general to all of them. It’s easy to see that words chosen by statistical proximity may not correspond to real-world situations, which data scientists call the “grounding problem,” and which is driving new fears of widespread misinformation. GPT-4, about which OpenAI refused to release any technical details when it was rolled out last month, is supposed to minimize this “hallucination.” But something more interesting and more important than this is going on.

What GPT systems spit out is language, but averaged out around a selected center of words. It’s a mush with vague conceptual borders, English (or most any other language) but ironed out and set to the most middling version of itself. For that reason, these systems are very useful for generating the type of press release that Vanderbilt wanted. This is “language as a service,” packaged and prepared, including its dynamism and meaning-generating properties, but channeled into its flattest possible version so as to be useful to those who mainly use language as liability control.

The human who would have written that statement about the shooting would surely have produced a nearly identical document. When we write with strong constraints on what we’re able to say, we tend to average out the choices of words and sentences too. We call this type of language “ideology,” and GPT systems are the first quantitative means by which we have ever been able to surface and examine that ideology.

Hegemony and Kitsch

What went missing in the tale of the New York Times reporter and the chatbot that fell in love with him was the prompt that caused the ruckus in the first place. He asked the bot to “adopt a ‘shadow self’ in the sense of C. G. Jung.” In the panic-hype cycle, it’s clear why this crucially important detail would be overlooked. But it also provides a clue about what happened. In the dataset, there is some initial cluster of words that “light up” when you use “shadow self” and “Jung” in a prompt — a “semantic package.” These are surely gathered from discussions of Jungian theory and psychoanalysis, academic and lay blogs and posts on Reddit and elsewhere that discuss this set of ideas explicitly.

But the system does not “know” that there is a person who was named Carl Gustav Jung, or that “shadow self” is a concept. These are just strings. So in the pattern that lights up, there will be another set of common words — let’s say “love,” “wife,” and even “feel alive” might be in there. As the machine keeps processing, it keeps predicting next words, and it “associates” outward from the concentrated “shadow-self-Jung” cluster to other semantic packages. But we don’t know what these other packages are, unless we look — we are simply on a statistical roller coaster of meaning, careening through channels of meaning that are there but with which we are not familiar.
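The idea of a semantic package can be sketched with word embeddings, the vector representations of tokens that underlie these systems. The vocabulary and vectors below are toy values invented for illustration, not learned weights; the point is only that prompting with “Jung” and “shadow” makes nearby words rank high while an unrelated word ranks low.

```python
import numpy as np

# Hypothetical three-dimensional embeddings; real models learn vectors
# with thousands of dimensions from web-scale text.
embeddings = {
    "jung":    np.array([0.9, 0.8, 0.1]),
    "shadow":  np.array([0.8, 0.9, 0.2]),
    "love":    np.array([0.7, 0.6, 0.5]),
    "wife":    np.array([0.6, 0.5, 0.6]),
    "alive":   np.array([0.6, 0.7, 0.4]),
    "invoice": np.array([0.1, 0.1, 0.9]),  # semantically distant control
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: how close two directions of meaning are."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "Touch" the prompt tokens and see which other tokens light up.
prompt = embeddings["jung"] + embeddings["shadow"]
ranked = sorted(embeddings, key=lambda w: cosine(prompt, embeddings[w]),
                reverse=True)
print(ranked)  # the nearest words form the package; "invoice" comes last
```

On toy numbers like these, “love,” “wife,” and “alive” sit inside the package around the prompt, which is all the model needs to careen from Jungian theory into declarations of love.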

It’s important that no objects exist in the stream of words. If you want a GPT system to halt around something and “consider” it as an object, you’d have to force it to somehow, which must be what GPT-4 and other ongoing attempts are doing. Some things are more likely to be stable as “objects,” or let’s call them “packages” of words. If I ask ChatGPT to tell me about Dialectic of Enlightenment (Theodor Adorno and Max Horkheimer’s masterpiece on ideology and modern society), it gives me a shockingly good answer, including details faithful to that notoriously difficult text. But if I ask it to tell me about my colleague Matthew Handelman’s book about Adorno, the Frankfurt School, and mathematics, it tells me some basics about the book but then adds that Handelman’s thesis is that “math is a social construct.” This is false (I checked with him). But it’s false in an interesting way.

The package probably shows us the overlap between “critical theory” and “mathematics,” and it will contain the most probable thing to be said about that overlap. To be sure, some academics claim that math is a social construct, but the group that most loudly insists academics think this is the far right, with its antisemitic conspiracy theory of “cultural Marxism,” which blames Adorno and co. for 1968 and everything since. When you write a philosophical treatise, or a scholarly work of intellectual history, you’re working against the grain of this averaging effect. But the semantic packages that get revealed when you query GPT systems are highly informative, if not themselves insightful. This is because these packages bring ideology to the surface, and they do it quantitatively. This has never happened before.

Ideology is not just political doctrine. When Marx wrote of the “German Ideology,” he meant his fellow socialists’ implicit belief in the power of ideas, to which he countered the power of material forces. But Marxists slowly took up the problem of the power of discourse and representation, acknowledging that what we are able to think, imagine, and say is a crucial political issue. Antonio Gramsci called the dominant set of ideas “hegemony,” arguing that these ideas conformed to the dominance of the ruling class while not being about that dominance. Literary critic Hannes Bajohr has warned against privatized GPT systems in just this sense, saying that “whoever controls language controls politics.”

A wide variety of Marxists have also seen ideology as a form of kitsch. As articulated by the Marxist art critic Clement Greenberg in 1939, kitsch is “pre-digested” form. Among all the things we might say or think, some pathways are better traveled than others. The form of those paths is given; we don’t need to forge them in the first place. The constant release of sequels now has this quality of kitsch — we know exactly where we are when we start watching a Marvel movie. For Greenberg, the avant-garde was the formal adventurer, creating new meaning by making new paths. Hegemony and kitsch are combined in the output of GPT systems’ semantic packages, which might miss aspects of “the world” but faithfully capture ideology.

Adorno famously thought of ideology as the “truth and the untruth” of the “totally administered world.” It revealed as much as it hid, and provided — despite Adorno’s personal taste for high art — a point of entry through which we see social functions as conditioning us. GPT systems have revealed some of this two-way street, manifesting both ideology and its critique (as media theorist Wendy Chun once claimed about software systems in general). GPT systems are an unprecedented view into the linguistic makeup of ideology. There has never before been a system that allows us to generate and then examine “what is near what” in political semantics. The packages of meaning that they produce flatten language, to be sure, although they can also surprise us with folds and nooks of meaning that we have never combined previously.

The slide along those grooves of meaning is a point of entry into the ideology of digital global capitalism, showing us a snapshot of hegemony. Maybe that sounds pretty far from Kissinger’s notion that AI will change our very sense of reality. But what if the most average words, packaged in a “pre-digested form,” constitute the very horizon of that reality? In that case, our little glimpse into the beating heart of ideology is crucial.

When the camera was invented, we saw distant chunks of the world for the first time with our eyes. GPT systems show us parts of the world so close that they basically are our world, but in a strange, flattened form. As labor and capital conditions inevitably change, their connection to ideology is momentarily on display. GPT-4 was released in March, but OpenAI withheld all technical details as industrial secrets. The window will soon shut for us to keep peering with technical awareness into this tepid void. We should take advantage of it now.
