Chaos in the Cradle of A.I.

Nov. 28, 2023

In the 1991 movie “Terminator 2: Judgment Day,” a sentient killer robot travels back in time to stop the rise of artificial intelligence. The robot locates the computer scientist whose work will lead to the creation of Skynet, a computer system that will destroy the world, and convinces him that A.I. development must be stopped immediately. Together, they travel to the headquarters of Cyberdyne Systems, the company behind Skynet, and blow it up. The A.I. research is destroyed, and the course of history is changed—at least, for the rest of the film. (There have been four further sequels.)

In the sci-fi world of “Terminator 2,” it’s crystal clear what it means for an A.I. to become “self-aware,” or to pose a danger to humanity; it’s equally obvious what might be done to stop it. But in real life, the thousands of scientists who have spent their lives working on A.I. disagree about whether today’s systems think, or could become capable of it; they’re uncertain about what sorts of regulations or scientific advances could let the technology flourish while also preventing it from becoming dangerous. Because some people in A.I. hold strong and unambiguous views about these subjects, it’s possible to get the impression that the A.I. community is divided cleanly into factions, with one worried about risk and the other eager to push forward. But most researchers are somewhere in the middle. They’re still mulling the scientific and philosophical complexities; they want to proceed cautiously, whatever that might mean.

OpenAI, the research organization behind ChatGPT, has long represented that middle-of-the-road position. It was founded in 2015, as a nonprofit, with big donations from Peter Thiel and Elon Musk, who were (and are) concerned about the risks A.I. poses. OpenAI’s goal, as stated in its charter, has been to develop so-called artificial general intelligence, or A.G.I., in a way that is “safe and beneficial” for humankind. Even as it tries to build “highly autonomous systems that outperform humans at most economically valuable work,” it plans to insure that A.I. will not “harm humanity or unduly concentrate power.” These two goals may very well be incompatible; building systems that can replace human workers has a natural tendency to concentrate power. Still, the organization has sought to honor its charter through a hybrid arrangement. In 2019, it divided itself into two units, one for-profit, one nonprofit, with the for-profit part overseen by the nonprofit part. At least in theory, the for-profit part of OpenAI would act like a startup, focussing on accelerating and commercializing the technology; the nonprofit part would act like a watchdog, preventing the creation of Skynet, while pursuing research that might answer important questions about A.I. safety. The profits and investment from commercialization would fund the nonprofit’s research.

The approach was unusual but productive. With the help of more than thirteen billion dollars in investment from Microsoft, OpenAI developed DALL-E, ChatGPT, and other industry-leading A.I. products, and began to turn GPT, its series of powerful large language models, into the engine of a much larger software ecosystem. This year, it started to seem as though OpenAI might consolidate a lead ahead of Google, Facebook, and other tech companies that are building capable A.I. systems, even as its nonprofit portion launched initiatives focussed on reducing the risks of the technology. This centaur managed to gallop along until last week, when OpenAI’s four-person board of directors, which has been widely seen as sensitive to the risks of A.I., fired its C.E.O., Sam Altman, setting in motion a head-spinning chain of events. By way of explanation, the board alleged that Altman, who came to OpenAI after running the startup accelerator Y Combinator, had failed to be “consistently candid in his communications”; in an all-hands meeting after the firing, Ilya Sutskever, a board member and OpenAI’s chief scientist, reportedly said that the board had been “doing its duty.” Many interpreted this as signalling a disagreement about A.I. safety. But OpenAI employees were not convinced, and chaos ensued. More than seven hundred of them—almost all of the company—signed a letter demanding the board’s resignation and Altman’s reinstatement; meanwhile, Altman and Greg Brockman, a co-founder of OpenAI and a member of its board, were offered positions leading an A.I. division at Microsoft. The employees who signed the letter threatened to follow them there; it seemed possible that OpenAI—the most exciting company in tech, recently valued at eighty-six billion dollars—could be toast.

Today’s A.I. systems often work by noticing resemblances and drawing analogies. People think this way, too: in the days after Altman’s termination, observers compared it to the firing of Steve Jobs by Apple’s board, in 1985, or to “Game of Thrones.” When I prompted ChatGPT to suggest some comparable narratives, it nominated “Succession” and “Jurassic Park.” In the latter case, it wrote, “John Hammond pushes to open a dinosaur park quickly, ignoring warnings from experts about the risks, paralleling Altman’s eagerness versus the caution urged by others at OpenAI.” It’s not quite a precise analogy: although Altman wants to see A.I. become widely used and wildly profitable, he has also spoken frequently about its dangers. In May, he told Congress that rogue A.I. could pose an existential risk to humanity. Hammond never told park-goers that they ran a good chance of getting eaten.
