Maybe We Already Have Runaway Machines

Most of us aren’t quite sure how we’re supposed to feel about the dramatic improvement of machine capabilities—the class of tools and techniques we’ve collectively labelled, in shorthand, artificial intelligence. Some people can barely contain their excitement. Others are, to put it mildly, alarmed. What proponents of either extreme have in common is the conviction that the rise of A.I. will represent a radical discontinuity in human history—an event for which we have no relevant context or basis of comparison. If that is the case, nothing will have prepared us to assimilate its promise or to fortify ourselves against the worst outcomes. Some small fraction of humanity is, for whatever reason, predisposed to stand in awe or terror before the sublime. The rest of us, however, would prefer to believe that the future will arrive in a more orderly and sensible manner. We find a measure of comfort in the notion that some new, purportedly sui-generis thing bears a strong resemblance to an older, familiar thing. These explanations are deflationary. Their aim is to spare us the need to have neurotic feelings about the problems of tomorrow. If such hazards can be redescribed as merely updated versions of the problems of today, we are licensed to have normal feelings about them instead.

This isn’t to say that these redescriptions invariably make us feel good, but some of them do. When these arguments take up the sensationalized threat associated with a particular technology, they can be especially reassuring. In this magazine, Daniel Immerwahr recently proposed that the history of photography ought to make us feel better about the future of deepfakes: the proliferation of photographic imagery was coextensive with the proliferation of such imagery’s manipulation, and for the most part we’ve proved ourselves able to interpret photographic evidence with the proper circumstantial understanding and scrutiny—at least when we feel like it. Immerwahr’s argument isn’t that there’s nothing to see here, but that most of what there is to see has been seen before, and so far, at least, we’ve managed to muddle our way through. This is an inferential claim, and is thus subject to the usual limitations. We can’t be certain that human societies won’t be freshly destabilized by such state-of-the-art sophistry. But the role of a historian like Immerwahr is to provide us with a reference class for rough comparison, and with a reference class comes something like hope. The likelihood of technological determinism is counterposed with a reasonable, grounded faith that we have our wits about us.

There is another kind of deflationary argument, one more often associated with the broader and more generic threat of machine intelligence rather than with a discrete technology like deepfakes. The logical premise here is not the precedent of human savvy but an analogy to human fallibility. This might be called the “Since when were we so great?” approach. ChatGPT hallucinates a lot of nonsense? Our own brains hallucinate a lot of nonsense. Facial recognition misidentifies criminal suspects? Eyewitnesses routinely misidentify criminal suspects. There are more and less sophisticated versions of these claims. One liability of this way of thinking is that an emphasis on our own crappiness, together with convenience, laziness, or technological fetishism, might function as an alibi—an excuse to grant unearned and unexamined authority to machines. The adverse consequences of this attitude include runaway feedback loops: below-average pilots crash planes; we compensate with an increasing reliance on automated systems; the automated systems make allowances for even worse pilots (who lack the skill or training to recover the craft when the automated system fails). But, on the whole, the contrast between inscrutable machines and terminal human mediocrity is once again supposed to be soothing. Machines might make mistakes, but these mistakes aren’t really qualitatively different from the mistakes we ourselves make. Machines, one might add, are subject to iterative engineering. Once we give up our irrational bias in favor of human rationality—once we realize that we should prefer self-driving cars that get into occasional horrific crashes to human-driven cars that get into frequent horrific crashes—we’ll resign ourselves to our dependence on these tools in the way that we reconciled ourselves to reliance on the plow.

Both of these modes of consolation—by precedent or by analogy—rest on the assumption that history is continuous, that change is more or less gradual and invisible to those experiencing it, and that humanity will remain in a position to address new complications in the way it has addressed, with varying degrees of success, prior ones. All of these assumptions may very well prove correct, and it’s not at all unreasonable to make them, or to find them persuasive. They provide a warrant to worry about the things we’re used to worrying about rather than the things we don’t even know how to worry about. But they ignore, elide, or diminish the possibility, however far-fetched, that we might have to confront a genuine discontinuity.

The crux of the residual case for doom is the alignment problem—the prospect that an artificial superintelligence might have no use for human values. (These values are usually under-specified, but they might include the beliefs that life is sacrosanct, that humans have dignity, and so on.) An advanced A.I. wouldn’t even need to be hostile; indifference is equally unfortunate. You can give an advanced black box of a machine a clear directive and find that it executes its goals without sentiment: you ask a housecleaning robot to reduce untidiness and it kills your dog. One of the ways to relieve oneself of this apocalyptic concern is to decide that intelligence and values are necessarily correlated, that an entity of sufficient intellect would perforce absorb our ethical concerns. In this case, however, no argument by continuity—by precedent or analogy—could possibly make us feel better: we are acquainted with the existence of murderous sociopaths. Murderous sociopaths are a kind of existence proof of what people in the A.I.-safety community call the orthogonality thesis—the idea that intelligence and morality exist in different dimensions. For the past few hundred thousand years, we’ve been able to contain murderous sociopaths before they confronted us with total annihilation. But a machine superintelligence that takes the form of a murderous sociopath, or becomes in any case allied to a conventional murderous sociopath, might very well command resources of an unprecedented order. It would be both genuinely bad and genuinely new. Such an existential threat might seem wildly implausible or remote. But, by definition, it would seem to flout any attempt to provide historical easement.

At first glance, David Runciman’s “The Handover: How We Gave Control of Our Lives to Corporations, States, and AIs” seems poised to furnish just such a reassurance—to domesticate the alignment problem. Runciman, a witty and refined writer, is a professor of politics at Cambridge and a contributing editor to the London Review of Books, which has also provided a home for his enlivening podcasts. He is also the Fourth Viscount Runciman of Doxford, a title that inspires not inconsiderable faith in his commitment to historical continuity. Runciman’s basic argument, which unfolds in the elegantly shaggy manner of a Peripatetic seminar, is that the alignment problem is not in fact an anomaly, and that the coming singularity might best be historicized as the Second Singularity. Runciman identifies a precedent and an analogy for the alignment problem in our relationship to two other artificial agencies: the state and the corporation. This idea, in his account, first emerged in 1651, when Thomas Hobbes described the Leviathan as, in Runciman’s words, “a kind of robot.” Hobbes, of course, didn’t call it a robot—a term that first appeared in the nineteen-twenties—but he did elaborate his vision of the state as an “Artificiall Man; though of greater stature and strength than the Naturall, for whose protection and defence it was intended.” As Runciman glosses the concept, “We assemble it in the same way we might construct any other machine of moving parts. This one is designed to resemble its creators. But it can do things we cannot. It is much more powerful than we are. That’s why we built it in the first place.” The state has the supra-human advantages of durability and reliability, and serves as a means of coördination.

Runciman continues, “The Leviathan is not a problem-solving machine, like a smartphone. It is a decision-making mechanism, like a set of dice. It might not give you the right answer—it will simply give you an answer.” In theory, these answers reflect the will of the polity. “But because it comprises people,” he writes, “once the people get better at coming up with answers, the state should too. These dice can think for themselves.” In practice, however, Hobbes is generally regarded as a pessimist. Of the Leviathan, Runciman writes: “Its argument assumes that humans are incapable of peaceful co-existence without franchising out our decision-making power to an unaccountable higher body. At that point, we will have lost control of our fate, because whatever the state decides becomes our choice too.” In Hobbes’s time—of the well-known “war of all against all”—these powers were arbitrary and absolute, but this was the price to be paid for security. Runciman readily concedes that we have spent the past few hundred years constructing better mechanisms for democratic accountability, but “the basic machinery of the modern state continues to bear the hallmark of a mechanistic understanding of politics set out nearly four centuries ago. We still have governments that take decisions for us on matters of life and death, including war and peace.” He continues,

We are bound by those decisions whether we like them or not. The state has the power to enforce its choices against anyone who resists, though it may not choose to use it. Most of the time, in contemporary democracies, these arbitrary powers are sufficiently buried beneath layers of legal limitations and political pushback that we barely notice them. But sometimes, in a crisis, they reveal themselves. Then we must acknowledge that we are still living with the Leviathan—older, wiser, but recognisably the same as before.

Without belaboring the comparison, Runciman elaborates a case for the state, which binds human reason in a lattice of rules and procedures, as an algorithmic device. The twinned essential functions of the state—the ability to issue debt and the prerogative of war—have undergirded its durability and afforded it a capacity for long-term planning that has thus far eluded other forms of human organization. For those of us privileged enough to escape the brutality of the state in the service of imperialist extraction, the state has considerably enhanced our longevity, wealth, and happiness. But the state, even if it comprises the human, is not itself human. “Our ability to fulfil our potential—to live longer, richer, more varied lives—was made possible only by handing over some of our capabilities to artificial versions of ourselves, which do not suffer from our natural frailties. We have had to give up features of our essential humanity—some of what Arendt would call our capacity for independent action—to enjoy these other benefits,” Runciman writes.

So it should come as no surprise that the state cannot be counted upon to demonstrate a commitment to human values. It has an artificial life of its own, even as it acts through us—when we are asked to kill in wartime, we are granted leave from human values. States are “persons without scruples,” Runciman notes, and we have had hundreds of years to grow familiar with this contrivance and its discontents. The machine-learning era is, in some sense, merely an extension of the mechanical one. “Let’s not kid ourselves that the age of AI—with the looming challenges of black-box decision-making and algorithmic procedures whose outcomes are a mystery even to their creators—poses a unique challenge for human understanding. Just look at your own life. Do you fully understand where the group decisions come from that shape who you are?,” Runciman writes. “The fact that groups are made out of people doesn’t make them any easier to see inside than other kinds of machines. The black boxes are all around us already. In one sense, at least, they are us.”

One of the things that make the machine of the capitalist state work is that some of its powers have been devolved upon other artificial agents—corporations. Where Runciman compares the state to a general A.I., one that exists to serve a variety of functions, corporations, which have been granted a more limited range of autonomy, resemble a narrow A.I., one that exists to fulfill particular purposes that remain beyond the remit or the interests of the sovereign body. Corporations can thus be set up in free pursuit of a variety of idiosyncratic human enterprises, but they, too, are robotic insofar as they transcend the constraints and the priorities of their human members. The failure mode of governments is to become “exploitative and corrupt,” Runciman notes. The failure mode of corporations, as extensions of an independent civil society, is that “their independence undoes social stability by allowing those making the money to make their own rules.” There is only a “narrow corridor”—a term Runciman borrows from the economists Daron Acemoglu and James A. Robinson—in which the artificial agents balance each other out, and citizens get to enjoy the sense of control that emerges from an atmosphere of freedom and security. The ideal scenario is, in other words, a kludgy equilibrium.
