
Paths untaken

The history, epistemology and strategy of technological restraint, and lessons for AI


This post first appeared on Verfassungsblog.

If the development of certain technologies, such as advanced, unaligned AI, would be as dangerous as some have suggested, a long-termist legal perspective might advocate a strategy of technological delay, or even restraint, to avoid a default outcome of catastrophe. To many, restraint (a decision to refrain indefinitely from developing, or at least deploying, the technology) might look implausible. However, history offers a surprising array of cases where strategically promising technologies were delayed, abandoned, or left unbuilt, even though many at the time perceived their development as inevitable. They range from radiological and weather weapons to atomic planes, from dozens of voluntarily cancelled state nuclear weapons programs to a Soviet internet, and more. It is easy to miss these cases, or to misinterpret their causes, in ways that lead us to be too pessimistic about future prospects for restraint. That does not mean that restraint for future technologies like advanced AI will be easy. Yet investigating when and why restraint might be needed, where it is viable, and how legal interventions could contribute to achieving and maintaining it, should be a key pillar of a long-termist legal research portfolio.

The question of restraint around AI development

In a famous 2000 essay entitled ‘Why the Future Doesn’t Need Us’, computer scientist Bill Joy grimly reflected on the range of new technological threats that could await us in the 21st century, expressing early concerns over the potentially extreme risks of emerging technologies such as artificial intelligence (AI). Drawing a link to the 20th-century history of arms control and non-proliferation around nuclear and biological weapons, Joy argued that since shielding against future technological threats was not viable, “the only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous.” While Joy’s account reflected an early and in some ways outdated understanding of the particular technological risks that could threaten the long-term trajectory of civilization, the underlying question he posed remains an important and neglected one: if protecting our long-term future required us to significantly delay or even relinquish the development of certain powerful but risky technologies, would this be viable?

Whether strategies of technological delay or even restraint have merit is a crucial consideration for anyone concerned about society’s relation to technology in both the near and the long term. It is particularly key for the long-termist project of legal prioritization, in determining how the law should aim to shape the development of new technologies.

The tension is most acute in discussions around the potential future impacts of increasingly capable and potentially transformative AI. While there are many models of what this could look like (such as ‘High-Level Machine Intelligence’, ‘Artificial General Intelligence’, ‘Comprehensive AI Services’, or ‘Process for Automating Scientific and Technological Advancement’), there is general agreement that, if viable, these advanced AI technologies could bring immense societal benefits, but also potentially extreme risks if they are not properly ‘aligned’ with human values.

Given these risks, it is relevant to ask whether the world ought to continue pursuing the development of advanced AI, or whether it would be better for AI progress to be slowed, or even halted, until we have overwhelming evidence that someone can safely ‘align’ the technology and keep it under control. However, even if we conclude that restraint is needed, some have been skeptical that it would be viable, given the potentially high strategic stakes around advanced AI development and the mixed track record of non-proliferation agreements in restraining past competition between major states. But should we be so pessimistic? To understand the viability of technological restraint, it helps to examine when technological restraint has been possible in the past. This can help us answer the vital question of what legal, institutional, or policy interventions could increase the prospects of restraint being achieved in the case of AI, should that be necessary.

The ethics of restraint

Long-termist philosophy has had a complex ethical relation to the prospect of advanced AI. On the one hand, long-termist thinkers have been amongst the earliest proponents of the idea that this technology could be existentially risky.

On the other hand, however, long-termist perspectives have generally favoured the idea that advanced AI systems not only will, but should, be developed eventually. This is because they argue that safely aligned AI systems could, hypothetically, be a tremendous force for long-term good: they could bring vast scientific advancement and economic wealth to the world (e.g. what Sam Altman has called ‘Moore’s Law for Everything’). Aligned AI could also be critical to our long-term survival, because humanity, today and in the future, faces many other global catastrophic risks, and successfully addressing these might necessitate the support of advanced AI systems capable of unlocking scientific problems or seeing solutions that we cannot. Moreover, under some moral frameworks, AI systems could themselves have moral standing or value.

Nonetheless, there are conditions under which even a long-termist perspective might advocate a strategy of aiming to relinquish (certain forms of) AI permanently. This could be because we expect that safe alignment of advanced AI systems is astronomically unlikely. Indeed, the more pessimistic we are, the more we should prefer such a strategy of ‘containment’. But if we assume that technological restraint is indeed desirable, would it be viable? While there is an emerging body of long-termist work exploring the viability of strategies of ‘differential technological development’ (i.e. speeding up defensive and safeguarding technologies, while slowing down hazardous capabilities), proposals to outright contain and avert advanced AI systems have not previously received much attention in the debate. This is often because of a perception that military-economic competitive pressures can create strong incentives to develop powerful technologies even at some risk, or that even if we were to convince some actors to delay, this would just yield the field to other, less scrupulous AI developers, who are less likely to be responsible or to use the technology for good. In this view, both unilateral and coordinated restraint by key AI actors (e.g. private companies or states) are unlikely: the former, because of the strong advantages AI technology would provide; the latter, given the difficulty of negotiating, monitoring, and enforcing multilateral global bans on high-stakes, dual-use, and fast-moving technologies such as AI.

The history of restraint

However, this may be too pessimistic. In fact, surveying the historical track record of technological innovation yields a long list of candidate cases of technological restraint. These include cases of remarkable delays in development, of decades or even centuries, seemingly (1) because low-hanging fruit was not perceived (e.g. the 1733 introduction of flying shuttles to weaving, after five thousand years); (2) because of inventors’ and investors’ uncertainty about the legal status of the technology (e.g. mechanized sawmills in England went unpursued for centuries on the basis of a widespread, and mistaken, belief that Parliament had outlawed them); (3) because of local cultural preferences (electric cars held an equal share of the US market in the 1900s, but soon declined in use because cars with internal combustion engines came to be seen as more masculine and appealed to an aspiration for touring); (4) because of top-down governmental policies or choices (e.g. a 250-year Tokugawa Shogunate firearms ban); (5) because of narrow political or bureaucratic infighting (e.g. delays in the Indian nuclear weapons program and the early Cold War Soviet ICBM force); or (6) simply because earlier technological choices had funnelled industrial attention and resources away from particular scientific paths (e.g. early neural network-based AI approaches may have been delayed by decades, because a ‘hardware lottery’ locked in early attention and funding for symbolic AI approaches instead).

There are also cases of the deliberate non-pursuit or abandonment of envisioned technologies, for a range of reasons: (1) concerns over risks and treaty commitments led the US to refrain from pursuing a wide range of proposals for exotic nuclear delivery systems, nuclear ramjet-powered cruise missiles, and advanced space-based missile defence systems such as Projects Excalibur, Zenith Star, and Brilliant Pebbles; (2) treaties led to the end of Vietnam-era weather control programs; (3) Edward Teller’s plans for ‘continent destroyer’-scale nuclear weapons with a 10-gigaton yield (670,000 times Hiroshima) were left on the drawing board; (4) the majority of the ~31-38 nuclear weapons programs undertaken or considered were abandoned; (5) the US, UK, and Soviet Union abandoned ‘death dust’ radiological weapons; (6) a Soviet committee pulled the plug on OGAS, an early ‘internet’; (7) diverse bioweapon programs were slowed or limited in their efficacy; (8) the US abandoned Project Westford, an ‘artificial ionosphere’ of half a billion copper needles put in orbit to ensure its radio communications; (9) French hovertrains fell prey to state elites’ conflicts of interest, and nuclear-powered aircraft fell prey to risk concerns and costs; and (10) in the early ’90s, DARPA axed its 10-year Strategic Computing Initiative to develop ‘machines that think’, instead redirecting funding towards more applied computing uses such as modelling nuclear explosions; amongst many others.

It is key to note that this survey may be an underestimate. These are only the cases that have been publicly documented. There may be far more instances of restraint where a technology was considered and abandoned, but we do not have clear records to draw on; or cases where the absence of a technology's widespread application today gives us no reason even to realize that it was ever meaningfully on the table; or cases where we (falsely) believe the decision to abandon pursuit simply reflected an accurate assessment that the technology was unviable.

The epistemology of restraint

Studying cases of past restraint highlights an epistemic challenge that we should keep in mind when considering the future viability of restraint over AI or other powerful technologies: the way we retrospectively understand and interpret the history of technological development (and extrapolate from it to AI) is affected by epistemic hurdles.

For instance, the appearance of dangerous new weapons and the visceral failures of arms control loom particularly large in historical memory; in contrast, we fail to see proposed-but-unbuilt technologies, which are more likely to end up as obscure footnotes. This means we are prone to underestimate the historical frequency of technological restraint, or to misinterpret its motives. Often, a case where a state decided against pursuing a strategically pivotal technology for reasons of cost, or out of moral or risk concerns, is misinterpreted as a case where ‘the technology probably was never viable, and they recognized it; but they would have raced for it if they thought there was a chance’.

Of course, it can be difficult to tease out the exact rationales for restraint (and thereby to understand whether and how these would apply to AI). In some cases, the apparent reason actors pulled the plug does indeed appear to have been a perception (whether or not accurate) that a technology was not viable or would be too costly, or a view that it would be redundant alongside other technologies. In other cases, however, the driving force behind restraint appears to have been political instability, institutional infighting, or diplomatic haggling. Significantly, in a few cases, restraint appears to have reflected genuine normative concerns over potential risks, treaty commitments, international standing, or public pressure. This matters, because it shows that perceived ‘scientific unviability’ is not the only barrier to a technology’s development; rather, it highlights a range of potential intervention points or levers for legal and governance tools in the future. Ultimately, the key point is that while the track record of restraint is imperfect, it is still better than would be expected from the perspective of rational interests, and better than was often expected by people living at the time. From an outside view, it is important to understand the epistemic lenses that skew our understanding of the future viability of restraint or coordination for other technologies (such as advanced AI), in the same way that we should reckon with ‘anthropic shadow’ arguments around extinction risks.

The strategy of AI restraint

In sum, when we assess the viability of restraint around advanced AI, it is important to complement our inside-view assessment with an outside-view understanding of technological history, one that does not only count successful international non-proliferation initiatives, but also considers cases where domestic scientific, political, or legal drivers contributed to restraint.

Of course, to say that we should temper our pessimism does not mean that we should be highly optimistic about technological restraint for AI. For one, there are technological characteristics that appear to have contributed to past unilateral restraint, including long assembly roadmaps with uncertain payoff and no profitable intermediate use cases, strong single institutional ‘ownership’ of technology streams, or public aversion. These features appear absent or weaker for AI, making restraint less likely. Similarly, AI technologies do not share many of the features that have enabled coordinated bans on (weapons) technologies, making coordinated restraint difficult.

As such, the history of restraint does not provide a blueprint for a reliable policy path. Still, it highlights interventions that may help induce unexpected unilateral or coordinated restraint around advanced AI. Some of these (such as cases of regime change) are out of scope for legal approaches. Yet legal scholars concerned about the long-term governance of AI can and should draw on the emerging field of ‘technology law’ to explore the landscape of technological restraint. They can do so not only by focusing on norms or multilateral treaties under international law, but also through interventions that frame policymaker perceptions of AI, alter inter-institutional interests and dynamics, or reroute investments in underlying hardware bases, locking in ‘hardware lotteries’.

Ultimately, technological restraint may not be desirable, and the prospects for any one of these avenues may remain poor. Yet restraint provides a key backup strategy in the long-termist portfolio, should the anchoring of AI ever prove necessary to safeguarding the long-term future.

Acknowledgements

This essay reflects work-in-progress, and has built on useful comments and feedback from many. I especially thank Tom Hobson, Di Cooke, Sam Clarke, Otto Barten, Michael Aird, Ashwin Acharya, Luke Kemp, Cecil Abungu, Shin-Shin Hua, Haydn Belfield, and Marco Almada for insightful comments, pushback, and critiques. The positions expressed in this paper do not necessarily represent their views.


Cite as: Maas, M. (2022, August 9). Paths untaken: The history, epistemology and strategy of technological restraint, and lessons for AI, VerfBlog. https://verfassungsblog.de/paths-untaken/, https://doi.org/10.17176/20220810-061602-0.

Matthijs Maas is Senior Research Fellow at the Legal Priorities Project and Research Affiliate at the Centre for the Study of Existential Risk, University of Cambridge.
