A Prologue to ‘The Legacy’

By MALKA OLDER

I didn’t want to write another AI story. It was 2019, and these stories seemed to be everywhere in science fiction, and regularly trotted out in journalism as well, usually portraying AI as either a threat or a savior. I had just written a short story imagining an artificial entity that was designed to simulate not intelligence but empathy. But the International Literature Festival of Berlin had asked me to write about AI, and since I wanted to go to the festival, I tried to find a new angle. So I decided to write about a dead one.

What interests me about artificial intelligence is the idea of an entity that thinks – or calculates, or processes – in a way that’s fundamentally different from us. Not just faster but alien, unknowable. That’s what’s exciting: the idea of something that could complement our partial, skewed understanding of the universe with an entirely different partial and skewed understanding.

It is hard to imagine what such an intelligence might look like, but it’s not a stretch to imagine it existing. After all, we have created lots of technology that we understand only imperfectly. Most people can’t explain how a car or a computer or the internet works, and even those who understand, say, the intricacies of how catalytic converters function probably wouldn’t be able to unpack the details of a car’s cruise control or proximity alerts.

As a sociologist, I researched the human and organizational factors involved in the Fukushima nuclear plant accident, and one of the fascinating things I learned was that at various points in the crisis there was literally no one in the world who could prescribe the “correct” course of action. Nuclear power plants are complex enough that there is no single person who can grasp the full picture. And yet we still build nuclear plants. They function – and sometimes malfunction – contributing to the energy we use. What would it be like if the complex thing we couldn’t fully understand was designed to solve problems instead of cause them?

In the story I wrote, “The Legacy,” I didn’t want to imagine an evil AI – that trope has been thoroughly explored – but I could easily imagine a sad one. The main character, Leoka, imagines that the sorrow comes from loneliness. Rereading now, I wonder why I didn’t hint at a melancholy stemming from the AI’s role as the trash compactor of all human problems. As far as we know in the story, Mikhailai, the AI in question, was successful in solving a range of problems – diplomatic, scientific, and mathematical. But was it enough only to fix things that someone else had fractured? Was it depressing for the AI to be the excuse for humans to continue recklessly destroying the world, with the assumption that technology would repair whatever they broke?

There’s a long history of robots and AI being used in fiction to represent slaves, workers, and various underclasses; the word robot comes from the Czech word robota, which describes the forced labor of serfs. Initially, robots took over repetitive work that required strength; stories about robot uprisings expressed fears of worker revolt. Machines have replaced a number of jobs, like those of telephone operators and elevator operators, and a good deal of unpaid domestic work. As computers got better at calculation and processing, stories emerged about AI taking over decision-making jobs, replacing leaders and bosses.

Recently, we’ve seen a largely successful takeover of the term by companies promoting their large language models and image generators. We see this “AI” being pushed to replace creative or customer service jobs, not because it’s better at them – as anyone who’s tried to get assistance from a company chatbot can testify – but because it’s supposedly cheaper than labor, when it’s merely better at externalizing costs.

In the popular conception, AI has shrunk to a poor imitator of human behavior. Rather than revealing a dramatically different way of thinking about the world, AI now stands for something that recombines what already exists according to probabilities, homogenizing rather than diverging.

When these large language models “die” or cease to function, as they will, it’s hard to imagine anyone interrogating their remains or digging through the famed proprietary black boxes of their plagiarized source material. No one will be trying to revive them to address urgent future problems because they aren’t solving any problems today.


Malka Older is a writer, aid worker, and sociologist. She is a Faculty Associate at Arizona State University, where she teaches predictive fictions.