Survey on AI existential risk scenarios

Abstract

It has been argued that AI could pose an existential risk. The original risk scenarios were described by Nick Bostrom and Eliezer Yudkowsky. More recently, these have been criticised, and a number of alternative scenarios have been proposed. There has been some useful work exploring these alternative scenarios, but much of it is informal. Most pieces are presented only as blog posts, with neither the detail of a book nor the rigour of a peer-reviewed publication. For further discussion of this dynamic, see work by Ben Garfinkel, Richard Ngo and Tom Adamczewski.

The result is that it is no longer clear which AI risk scenarios experts find most plausible. We think this state of affairs is unsatisfactory for at least two reasons. First, since many of the proposed scenarios seem underdeveloped, there is room for further work analysing them in more detail. But such analysis is time-consuming, and there is a wide range of scenarios that could be analysed, so knowing which scenarios leading experts find most plausible is useful for prioritising this work. Second, since the views of top researchers influence the views of the broader AI safety and governance community, it is important to make the full spectrum of views more widely available. This survey is intended as a first step in that direction.