Vulnerable world hypothesis
Existential risk concept

The vulnerable world hypothesis, also called the "black ball" hypothesis, is the idea that some possible technology (a "black ball") would, by default, destroy the civilization that discovers it unless extraordinary preventive measures are taken. The philosopher Nick Bostrom introduced the hypothesis in a 2019 paper in the journal Global Policy and later expanded on it in an essay in Aeon co-written with Matthew van der Merwe. The hypothesis has been cited in discussions about the safety of advanced technologies.


Background and definition

Bostrom illustrated the hypothesis with an urn analogy. He likened the process of technological invention to drawing balls from an urn, where the color of each ball represents its impact. White balls are beneficial and make up most of the balls drawn so far. Gray balls represent technologies with mixed or moderate effects. A black ball represents a hypothetical technology that, by default, destroys the civilization that invents it. According to Bostrom, it is largely luck, rather than carefulness or wisdom, that humanity has not yet drawn a black ball.[8]
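
The force of the analogy is probabilistic: even if the chance that any single invention turns out to be a black ball is tiny, the risk compounds across many draws. The following is a minimal Python sketch of this toy model (the value of p_black is an assumption chosen purely for illustration; it is not a figure from Bostrom's paper):

    import random

    # Toy model of the urn analogy (illustrative only): assume each new
    # technology independently has a small probability p_black of being
    # a "black ball". The chance of never having drawn one after n
    # inventions is (1 - p_black) ** n, which decays toward zero.
    p_black = 0.001  # assumed per-draw risk, not a figure from the paper

    for n in (100, 1_000, 10_000):
        survival = (1 - p_black) ** n
        print(f"after {n:>6} draws, chance of no black ball: {survival:.3g}")

    # One simulated history: how many draws until the first black ball?
    rng = random.Random(42)
    draws = 1
    while rng.random() >= p_black:
        draws += 1
    print(f"first black ball appears on draw {draws}")

Under these assumptions, survival odds fall from about 90% after 100 draws to under 37% after 1,000, illustrating why a long streak of safe inventions is no guarantee about the next one.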

Bostrom defined the vulnerable world hypothesis as the possibility that "if technological development continues then a set of capabilities will at some point be attained that make the devastation of civilization extremely likely, unless civilization sufficiently exits the semi-anarchic default condition".[9][10] The "semi-anarchic default condition" refers to a world with:[11][12]

  1. Limited capacity for preventive policing.
  2. Limited capacity for global governance.
  3. Actors with diverse motivations.[13]

Types of vulnerabilities

Bostrom proposed a classification of these vulnerabilities, with examples of how each kind of technology could go wrong and policy recommendations such as differential technological development.[14][15] If a technology entailing such a vulnerability is developed, the measures thought necessary for survival (effective global governance or preventive policing, depending on the type of vulnerability) are controversial.[16][17][18] The classification includes:[19][20]

  • Type 0 ("surprising strangelets"): a technology carries a hidden risk and inadvertently devastates the civilization that develops it.

A proposed hypothetical example is the possibility that the first nuclear bombs could have ignited the atmosphere. Ahead of the Trinity nuclear test, a report commissioned by Robert Oppenheimer predicted that such ignition would not occur, but the report's assurance has been deemed shaky given the stakes: "One may conclude that the arguments of this paper make it unreasonable to expect that the N + N reaction could propagate. An unlimited propagation is even less likely. However, the complexity of the argument and the absence of satisfactory experimental foundation makes further work on the subject highly desirable."[21]

  • Type 1 ("easy nukes"): a technology gives small groups of people the ability to cause mass destruction.

The "easy nukes" thought experiment proposed by Nick Bostrom opens the question of what would have happened if nuclear chain reactions had been easier to produce, for example by "sending an electric current through a metal object placed between two sheets of glass."22

  • Type 2a ("safe first strike"): a technology has the potential to devastate civilization, and powerful actors are incentivized to use it, either because striking first appears to confer an advantage or because of a tragedy-of-the-commons dynamic.
  • Type 2b ("worse global warming"): a great many actors face incentives to take some slightly damaging action, such that the combined effect of those actions is civilizational devastation.

Mitigation

According to Bostrom, halting technological progress may be neither feasible nor desirable. An alternative is to prioritize technologies expected to have a positive impact and delay those that may be catastrophic, a principle called differential technological development.[23]

The potential solutions vary by type of vulnerability. Addressing type-2 vulnerabilities may require highly effective global governance and international cooperation. For type-1 vulnerabilities, if the means of mass destruction ever become accessible to individuals, at least some small fraction of the population would likely use them.[24] In extreme cases, mass surveillance might be required to avoid the destruction of civilization, a controversial prospect that received significant media coverage.[25][26][27][28][29]

Technologies that have been proposed as potential vulnerabilities include advanced artificial intelligence, nanotechnology, and synthetic biology, which may make it easy to create enhanced pandemics.[30][31][32][33]

References

  1. Bostrom, Nick (November 2019). "The Vulnerable World Hypothesis". Global Policy. 10 (4): 455–476. doi:10.1111/1758-5899.12718.

  2. Bilton, Nick (2018-11-28). "The "Black Ball" Hypothesis: Is Gene Editing More Dangerous Than Nuclear Weapons?". Vanity Fair. Retrieved 2023-11-07. https://www.vanityfair.com/news/2018/11/is-gene-editing-more-dangerous-than-nuclear-weapons

  3. Katte, Abhijeet (2018-12-25). "AI Doomsday Can Be Avoided If We Establish 'World Government': Nick Bostrom". Analytics India Magazine. Retrieved 2023-05-28. https://analyticsindiamag.com/ai-doomsday-can-be-avoided-if-we-establish-world-government-nick-bostrom/

  4. Bostrom, Nick (November 2019). "The Vulnerable World Hypothesis". Global Policy. 10 (4): 455–476. doi:10.1111/1758-5899.12718.

  5. Bostrom, Nick; van der Merwe, Matthew (12 February 2021). "None of our technologies has managed to destroy humanity – yet". Aeon. https://aeon.co/essays/none-of-our-technologies-has-managed-to-destroy-humanity-yet

  6. Piper, Kelsey (2018-11-19). "How technological progress is making it likelier than ever that humans will destroy ourselves". Vox. Retrieved 2023-05-28. https://www.vox.com/future-perfect/2018/11/19/18097663/nick-bostrom-vulnerable-world-global-catastrophic-risks

  7. Finley, Klint. "Technology That Could End Humanity—and How to Stop It". Wired. ISSN 1059-1028. Retrieved 2023-11-07. https://www.wired.com/story/technology-could-end-humanity-how-stop-it/

  8. Piper, Kelsey (2018-11-19). "How technological progress is making it likelier than ever that humans will destroy ourselves". Vox. Retrieved 2023-05-28. https://www.vox.com/future-perfect/2018/11/19/18097663/nick-bostrom-vulnerable-world-global-catastrophic-risks

  9. Katte, Abhijeet (2018-12-25). "AI Doomsday Can Be Avoided If We Establish 'World Government': Nick Bostrom". Analytics India Magazine. Retrieved 2023-05-28. https://analyticsindiamag.com/ai-doomsday-can-be-avoided-if-we-establish-world-government-nick-bostrom/

  10. It depends, according to Nick Bostrom, on whether society is in a "semi-anarchic default condition" (see § Background and definition).

  11. Katte, Abhijeet (2018-12-25). "AI Doomsday Can Be Avoided If We Establish 'World Government': Nick Bostrom". Analytics India Magazine. Retrieved 2023-05-28. https://analyticsindiamag.com/ai-doomsday-can-be-avoided-if-we-establish-world-government-nick-bostrom/

  12. "Notes on the Vulnerable World Hypothesis". michaelnotebook.com. https://michaelnotebook.com/vwh/index.html

  13. And in particular, the motivation of at least some small fraction of the population to destroy civilization even at a personal cost. According to Bostrom: "Given the diversity of human character and circumstance, for any ever so imprudent, immoral, or self-defeating action, there is some residual fraction of humans who would choose to take that action."[5]

  14. Piper, Kelsey (2018-11-19). "How technological progress is making it likelier than ever that humans will destroy ourselves". Vox. Retrieved 2023-05-28. https://www.vox.com/future-perfect/2018/11/19/18097663/nick-bostrom-vulnerable-world-global-catastrophic-risks

  15. Katte, Abhijeet (2018-12-25). "AI Doomsday Can Be Avoided If We Establish 'World Government': Nick Bostrom". Analytics India Magazine. Retrieved 2023-05-28. https://analyticsindiamag.com/ai-doomsday-can-be-avoided-if-we-establish-world-government-nick-bostrom/

  16. Piper, Kelsey (2018-11-19). "How technological progress is making it likelier than ever that humans will destroy ourselves". Vox. Retrieved 2023-05-28. https://www.vox.com/future-perfect/2018/11/19/18097663/nick-bostrom-vulnerable-world-global-catastrophic-risks

  17. Finley, Klint. "Technology That Could End Humanity—and How to Stop It". Wired. ISSN 1059-1028. Retrieved 2023-11-07. https://www.wired.com/story/technology-could-end-humanity-how-stop-it/

  18. "How to Protect Humanity From the Invention That Inadvertently Kills Us All". Inverse. 2019-04-18. Retrieved 2023-11-07. https://www.inverse.com/article/55024-nick-bostrom-how-to-protect-humans-from-the-invention-that-will-kill-us

  19. Katte, Abhijeet (2018-12-25). "AI Doomsday Can Be Avoided If We Establish 'World Government': Nick Bostrom". Analytics India Magazine. Retrieved 2023-05-28. https://analyticsindiamag.com/ai-doomsday-can-be-avoided-if-we-establish-world-government-nick-bostrom/

  20. Bostrom, Nick (November 2019). "The Vulnerable World Hypothesis". Global Policy. 10 (4): 455–476. doi:10.1111/1758-5899.12718.

  21. Piper, Kelsey (2018-11-19). "How technological progress is making it likelier than ever that humans will destroy ourselves". Vox. Retrieved 2023-05-28. https://www.vox.com/future-perfect/2018/11/19/18097663/nick-bostrom-vulnerable-world-global-catastrophic-risks

  22. Piper, Kelsey (2018-11-19). "How technological progress is making it likelier than ever that humans will destroy ourselves". Vox. Retrieved 2023-05-28. https://www.vox.com/future-perfect/2018/11/19/18097663/nick-bostrom-vulnerable-world-global-catastrophic-risks

  23. Piper, Kelsey (2018-11-19). "How technological progress is making it likelier than ever that humans will destroy ourselves". Vox. Retrieved 2023-05-28. https://www.vox.com/future-perfect/2018/11/19/18097663/nick-bostrom-vulnerable-world-global-catastrophic-risks

  24. Piper, Kelsey (2018-11-19). "How technological progress is making it likelier than ever that humans will destroy ourselves". Vox. Retrieved 2023-05-28. https://www.vox.com/future-perfect/2018/11/19/18097663/nick-bostrom-vulnerable-world-global-catastrophic-risks

  25. Houser, Kristin (19 April 2019). "Professor: Total surveillance is the only way to save humanity". Futurism. Retrieved 2023-05-28. https://futurism.com/simulation-mass-surveillance-save-humanity

  26. Bendix, Aria. "An Oxford philosopher who's inspired Elon Musk thinks mass surveillance might be the only way to save humanity from doom". Business Insider. Retrieved 2023-05-28. https://www.businessinsider.com/nick-bostrom-mass-surveillance-could-save-humanity-2019-4

  27. Taggart, Dagny (2019-04-24). "Global Government and Surveillance May Be Needed to Save Humanity". The Organic Prepper. Retrieved 2023-10-16. https://www.theorganicprepper.com/global-government-and-mass-surveillance-may-be-needed-to-save-humanity-expert-says/

  28. Gheorghe, Ana (2019-04-27). "Mass surveillance could save us from extinction, claims Professor". Cherwell. Retrieved 2023-10-16. https://www.cherwell.org/2019/04/27/mass-surveillance-could-save-us-from-extinction-claims-professor/

  29. "None of our technologies has managed to destroy humanity – yet". Aeon. 12 February 2021. Retrieved 2023-05-28. https://aeon.co/essays/none-of-our-technologies-has-managed-to-destroy-humanity-yet

  30. Walsh, Bryan (July 15, 2020). "The dire lessons of the first nuclear bomb test". Axios. https://www.axios.com/2020/07/15/75th-anniversary-trinity-nuclear-test-technology

  31. Bilton, Nick (2018-11-28). "The "Black Ball" Hypothesis: Is Gene Editing More Dangerous Than Nuclear Weapons?". Vanity Fair. Retrieved 2023-11-07. https://www.vanityfair.com/news/2018/11/is-gene-editing-more-dangerous-than-nuclear-weapons

  32. Torres, Phil (2019-10-21). "Omniviolence Is Coming and the World Isn't Ready". Nautilus. Retrieved 2023-05-29. https://nautil.us/omniviolence-is-coming-and-the-world-isnt-ready-237586/

  33. "AI-Powered Malware Holds Potential For Extreme Consequences - Could Artificial Intelligence Be a Black Ball From the Urn of Creativity?". Zvelo. 2023-04-26. Retrieved 2023-11-07. https://zvelo.com/ai-powered-malware-holds-potential-for-extreme-consequences/