Based on our current understanding of the universe, weapons, disease, climate change, and asteroid impacts pose the primary existential threats to humanity. These are the direct means by which humanity could be wiped out. Artificial intelligence technologies (and, more generally, software) do not pose a direct threat. But how might they lead indirectly to a doomsday scenario?
First, humanity might decide to place weapons or biotechnology under automated software control. Programming errors or unanticipated conditions might lead those automated systems to make mistakes. For weapons systems, the threats range from automated launch-on-warning of nuclear weapons to massive autonomous swarms of anti-personnel weapons. For biotechnology, large-scale automated scientific discovery might lead to automated methods for modifying living organisms or synthesizing novel ones, resulting in a pandemic that destroys humanity. Even without full autonomy, programming errors could cause human overseers to misinterpret the unfolding process and fail to intervene in time.
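To make the "unanticipated conditions" failure mode concrete, here is a deliberately simplified sketch. It is hypothetical and not modeled on any real system: the threshold, the sensor-agreement rule, and the function names are all illustrative assumptions. It imitates the pattern of the 1983 Soviet false alarm, in which sunlight reflecting off clouds triggered several satellite sensors at once, defeating a redundancy check that implicitly assumed sensors fail independently.

```python
# Hypothetical sketch of a naive launch-on-warning rule (not any real system).
# The redundancy check assumes sensor errors are independent, so a shared
# environmental artifact that inflates every reading at once defeats it.

SENSOR_THRESHOLD = 0.9   # assumed per-sensor confidence cutoff
CONFIRMING_SENSORS = 2   # assumed number of agreeing sensors required

def confirmed_attack(sensor_confidences):
    """Return True if enough sensors exceed the threshold.

    The unanticipated condition: glare raises *all* readings together,
    so "multiple agreeing sensors" adds no real safety margin.
    """
    hits = sum(1 for c in sensor_confidences if c >= SENSOR_THRESHOLD)
    return hits >= CONFIRMING_SENSORS

# A single shared glare artifact inflates every reading simultaneously:
glare_readings = [0.93, 0.95, 0.91]
print(confirmed_attack(glare_readings))  # True, yet there is no attack
```

The design flaw is not in any single line of code; it is the unexamined assumption of independent failures, which is exactly the kind of error a human overseer may not recognize until too late.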
Second, AI systems might mediate communication between people in such a way that we make poor decisions. For example, people might decide to launch nuclear or biological attacks because they do not understand the full consequences of these actions. If disinformation eliminated leaders' belief in the inevitability of mutual assured destruction, a first strike might appear survivable, and total annihilation could follow. Similar misinformation or disinformation could encourage the development and deliberate release of deadly organisms. We are already witnessing AI-assisted disinformation campaigns on social media that seek to block reductions in greenhouse gas emissions or to delay appropriate countermeasures against a global pandemic.
At the heart of these threats are issues of scale. Autonomous weapons do not currently exist in large numbers, but military planners are pursuing ever-shorter automated decision cycles (scaling in time) that would make it impossible for humans to intervene. Similarly, swarms and biotechnology are examples of exponential scaling of the threat vectors to the point where they escape human control. If these vectors are also able to adapt (e.g., via learning or evolution), they will undoubtedly exceed our control. I believe that a combination of international regulations and safety practices can greatly reduce, but not eliminate, these threats.
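The arithmetic behind "escape human control" is worth spelling out, because it shows why a fixed human response time is the binding constraint. The sketch below uses purely illustrative numbers; the starting population, doubling time, and response window are assumptions, not measurements:

```python
# A minimal arithmetic sketch of exponential scaling versus a fixed
# human response window. All numbers are illustrative assumptions.

initial_units = 100          # assumed starting population (drones, cells, ...)
doubling_hours = 6           # assumed doubling time of the threat vector
human_response_hours = 72    # assumed time for humans to detect and react

doublings = human_response_hours / doubling_hours        # 12 doublings
population_at_response = initial_units * 2 ** doublings  # 100 * 4096
print(f"{population_at_response:,.0f} units by the time humans respond")
# 409,600 -- a roughly 4,000-fold increase within the response window
```

Note the asymmetry: halving the doubling time squares the growth over the same window, while human detection and decision times stay roughly constant. Adaptation via learning or evolution shortens the doubling time further, which is why adaptive threat vectors are qualitatively worse.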
For social media, we already see that the scale of networks such as Facebook and Twitter makes it impossible for humans to control the real-time propagation of false information worldwide. I do not have direct technical expertise in this area, but I do not see any plausible technical solution to this scaling problem. Perhaps we can develop AI technology for automated moderation of such media, but initial efforts in this direction have not been promising. Meanwhile, AI tools for creating disinformation at scale are already being applied.
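A back-of-the-envelope calculation suggests why purely human moderation cannot close this gap. The figures below are illustrative assumptions, not platform statistics:

```python
# Rough sketch of the moderation scaling problem.
# All figures are illustrative assumptions, not platform statistics.

posts_per_day = 500_000_000    # assumed daily posts on a large network
seconds_per_review = 30        # assumed human review time per post
workday_seconds = 8 * 3600     # one moderator shift

reviews_per_moderator = workday_seconds / seconds_per_review  # 960 per day
moderators_needed = posts_per_day / reviews_per_moderator
print(f"{moderators_needed:,.0f} full-time moderators for full review")
# ~520,833 moderators -- and review would still lag hours behind
# the real-time propagation it is meant to stop
```

Even under these generous assumptions, the required workforce is implausibly large, and the review latency still exceeds the minutes in which false information spreads. This is the sense in which the problem is one of scale rather than of effort.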