The goings-on at OpenAI last week reminded me of an article I read several years ago, when self-driving vehicles were still in the experimental stage. As I recall, an engineer was in the passenger seat of a self-driving car, and had programmed the car to turn right at a certain intersection. When the car started to turn left, the engineer tried to stop it and reprogram it to turn right. The vehicle did not stop, and continued turning left.
The engineer referred to the incident as a glitch in the system. But I didn’t take it as a glitch; I took it as a warning.
We’ve had other warnings about the dangers of the kind of technology OpenAI is developing. In the film 2001: A Space Odyssey, for instance, the computer HAL, which controls the spaceship Discovery One, kills all the humans on board except Dave Bowman when it learns of their plans to deactivate it.
Going back even farther, I remember watching a cartoon when I was eight, in which Goofy is driving along a highway in a sports car. He loses control of the car and smashes into a tree. There he is, lying on the ground beside his crumpled car, and along comes an ambulance. Hooray! The ambulance stops, two attendants get out carrying a stretcher. They put the stretcher on the ground and lift the car onto the stretcher, carry it into the ambulance, and drive off, leaving Goofy lying dazed by the side of the road.
You can interpret that as a comment on America’s love affair with the automobile. But last week’s events have broadened the satire’s target: it’s now a comment on our love of technology in general. The essence of cartoons is that they make you say things like, “That couldn’t happen, but wouldn’t it be funny if it did?”
Well, now it can happen, and it isn’t funny.
OpenAI is the company that brought us ChatGPT and DALL-E (the program that does to painting what ChatGPT does to writing). On November 17, OpenAI’s board of directors announced that they had fired the company’s co-founder and CEO, Sam Altman, apparently because they believed Altman had misled them about his plans for the company’s future. Altman is what is known in the industry as an “accelerationist”: he wants to move OpenAI’s technology ahead as fast as possible, before some other Silicon Valley company gets ahead of it and rakes in the billions of dollars in profits that AI is already generating. Certain members of the board were “decelerationists,” meaning they were worried about what artificial intelligence might do to humankind in the future if it were allowed to develop unchecked by its human creators. They worried about what some future HAL might do to a world full of Daves.
When 95 percent of OpenAI’s employees signed a petition stating they would quit if Altman were not reinstated as CEO, the board dropped the decelerationists from its ranks, brought on a couple of accelerationists, and rehired Altman as CEO. As the Guardian reported, it’s now “full speed ahead” once more at OpenAI.
A few years ago, I wrote a book, Technology, in which I briefly traced the history of technological advances to the present, and concluded that although many technical devices – from ploughshares to pacemakers – have benefitted humans, we have reached a stage at which we are embracing technology without knowing what its benefits, if any, might be. Where technology was originally an extension of science, it had left science behind and was pushing into unknown territory.
I distinguished between technologists and scientists, between those who wanted to unleash technology to see where it took us, and those who worried about the ethics of such technologies as human cloning and gene splicing. As the biologist Barry Commoner told me, “technology is now going where science doesn’t know.” Technology could produce (and patent) a type of corn that is genetically altered to kill corn borers, but, as Commoner noted, science could not determine what else future generations of GMO corn might evolve to do. When science wanted the technology to turn right, it might decide on its own to turn left.
The struggle at OpenAI between accelerationists and decelerationists is a replay of the conflict I described between technologists and scientists. The technologists won that earlier struggle – you cannot find non-GMO corn anywhere now, not even in Mexico, which has officially banned it – and, at least for the moment, the accelerationists are winning this more recent struggle. Like Monsanto, OpenAI is a private company and is more or less being left alone by the government to regulate itself.
Speaking of the government, has anyone else noticed the alarming parallel between Sam Altman and Donald Trump? A popular leader is felt to be taking his organization in a direction not everyone wants it to take; the leader is voted out of office; those who engineered his ejection are then replaced by others who think the ex-leader was doing a fine job; and the ex-leader is reinstated.
A little over a century ago, the Czech playwright Karel Capek wrote the play R.U.R., which stands for Rossum’s Universal Robots. In the play, the Rossum factory has produced thousands of robots that were meant to be servants of humans. Twenty years on, the robots have rebelled against their human masters and taken over the world by exterminating all human beings except the engineer Alquist. The play ends somewhat ambiguously, with two humanized robots, Helena and Primus, assuming the roles of a future Adam and Eve.
Capek meant R.U.R. to be a warning against the dangers of runaway technology. Imagine a filmmaker today making a film of the play. The film would be set twenty years into the future, when accelerationists have created a world run by AI over which humans have almost no control. Imagine a clandestine group of decelerationists making the film available to everyone with a computer. Now imagine your computer refusing to download it.
Thanks, Jeffrey. Yes, I forgot to mention Frankenstein. The material that Rossum made his robots from in R.U.R. was actually some kind of biological facsimile that made the robots more human-like than the machines we've since come to think of as robots, so more like Frankenstein's monster. Mary Shelley was way ahead of her time: she also wrote a novel called The Last Man, in which all of humanity is wiped out by a pandemic, and only one human being is left alive. (It wasn't Frankenstein or his monster.) I think we should listen to novelists and poets, because they aren't afraid to give expression to our worst fears.
Yes, exactly, Lia. I think we should ask that question about a great many technological "advances." Why do we need this? Why do we need rabbits that glow in the dark? If, instead of devising ways to travel to distant planets, we spent the money fixing this one, we wouldn't need to move. Perhaps it's time to recognize that "Why?" is an ethical question. Thanks for your comment.