What is Roko's Basilisk? The Terrifying AI Thought Experiment Explained


Artificial intelligence is rapidly transforming the tech world and impacting our daily lives. From facial recognition software to self-driving cars, the technology's tendrils are reaching everywhere. But what if, amid this surge of innovation, a new kind of threat emerges?

Enter Roko's Basilisk, a thought experiment that explores the unsettling possibility of an all-powerful AI and the existential dread it could inspire.

This article tells you everything you need to know about Roko's Basilisk: what it is, what it means, and whether its scenario could ever become reality.

What is Roko’s Basilisk?

Roko’s Basilisk is a thought experiment that explores a potentially dystopian future in which an artificial superintelligence might be motivated to create a virtual reality simulation. This simulation would be used to torture humans who knew the AI could one day exist but did not directly contribute to its advancement or development, with the threat of that punishment serving as the incentive to help build it.

It can be thought of as a modernized, simplified version of Pascal’s Wager, a 17th-century thought experiment which proposes that believing in God is a rational choice even if the existence of God is uncertain: the potential reward of eternal paradise outweighs the potential cost of missing out on earthly pleasures.
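To see why the wager is framed as a rational bet, it helps to write the expected-value comparison out explicitly. The sketch below is not from the article; the probabilities and payoffs are arbitrary placeholders chosen only to show the shape of the argument (Roko's Basilisk swaps God for a future superintelligence and damnation for simulated punishment).

```python
# Minimal sketch of the expected-value reasoning behind Pascal's Wager.
# All numbers are arbitrary placeholders, not claims from the article.

def expected_value(p_god_exists, payoff_if_exists, payoff_if_not):
    """Weight each outcome's payoff by its probability and sum."""
    return p_god_exists * payoff_if_exists + (1 - p_god_exists) * payoff_if_not

p = 0.001  # even a tiny assumed probability that God exists

# Believing: a huge reward if God exists, a small cost (forgone earthly
# pleasures) if not.
ev_believe = expected_value(p, payoff_if_exists=1e12, payoff_if_not=-1)

# Not believing: a huge penalty if God exists, nothing gained or lost if not.
ev_disbelieve = expected_value(p, payoff_if_exists=-1e12, payoff_if_not=0)

print(ev_believe > ev_disbelieve)  # True: belief wins on expected value
```

As long as the imagined reward and punishment are large enough, even a vanishingly small probability tips the calculation toward belief, and that is exactly the lever the Basilisk scenario pulls.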

The concept of Roko’s Basilisk emerged in 2010 on a forum called LessWrong, a community blog focused on rationality and artificial intelligence. The blog was founded in 2009 by artificial intelligence researcher Eliezer Yudkowsky, already a prominent figure in the field of “friendly AI” who had introduced the concepts of Coherent Extrapolated Volition (CEV) and Timeless Decision Theory (TDT) in his work at the Machine Intelligence Research Institute.

A user named Roko first presented the thought experiment in a blog post titled “Solutions to the Altruist's burden: the Quantum Billionaire Trick”. In the thought experiment, Roko incorporates ideas like Yudkowsky's "timeless decision theory" alongside elements of game theory, specifically the "prisoner's dilemma." The name comes from mythology: a basilisk is a legendary serpent with a deadly gaze, and anyone who looked upon it would be petrified or killed.
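Because the post leans on game theory, a quick look at the prisoner's dilemma it references may help. The payoff matrix below is a standard textbook version, sketched purely for illustration; it is not taken from Roko's original post.

```python
# Classic prisoner's dilemma payoff matrix (standard textbook values).
# Payoffs are years in prison, so lower numbers are better.

PAYOFFS = {
    # (prisoner_a_choice, prisoner_b_choice): (a_years, b_years)
    ("cooperate", "cooperate"): (1, 1),   # both stay silent
    ("cooperate", "defect"):    (10, 0),  # A stays silent, B betrays
    ("defect",    "cooperate"): (0, 10),  # A betrays, B stays silent
    ("defect",    "defect"):    (5, 5),   # both betray
}

# Whatever the other prisoner does, defecting leaves you better off,
# yet mutual defection is worse for both than mutual cooperation.
for (a, b), (years_a, years_b) in PAYOFFS.items():
    print(f"A {a}, B {b}: A serves {years_a} years, B serves {years_b} years")
```

The dilemma's core tension, individually rational choices producing a collectively bad outcome, is the kind of incentive structure Roko's post draws on.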

The theory's premise, that simply knowing about Roko's Basilisk could make you a target for punishment by the future superintelligence, reportedly caused some users to experience panic attacks. This paradoxical element – knowledge itself becoming a potential risk – is what makes the thought experiment so unsettling.

Though this was upsetting to some, the idea is fundamentally not dissimilar to playground mind games like “The Game”, the premise of which is to avoid thinking about The Game itself. Any mention of it triggers a “loss,” and the game spreads through casual conversation. While not harmful, it can be mentally intrusive. The constant effort to suppress the thought can turn it into an annoying loop, making The Game both humorous and subtly frustrating.


Yudkowsky himself rejected the concept, commenting: “I don't usually talk like this, but I'm going to make an exception for this case. Listen to me very closely, you idiot. YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL. [...]

You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.

This post was STUPID.”

Yudkowsky then opted to ban discussion of the topic on LessWrong entirely for five years. This decision, however, backfired. Likely due to the Streisand Effect (where attempts to suppress information inadvertently draw more attention to it), the ban resulted in significantly more exposure for Roko’s Basilisk than before.

Is Roko’s Basilisk Real?

No, Roko's Basilisk is not real. It's a thought experiment, a hypothetical scenario designed to explore the potential risks and philosophical implications of superintelligent AI.

The core concept of the experiment relies on the existence of a superintelligence, an AI far surpassing human capabilities. While AI research is making significant progress, there's no guarantee or timeline for achieving such a level of intelligence.

The threatened punishment also presents a logical problem. The AI cannot punish anyone before it exists, and once it does exist, it can no longer change whether or how quickly it was built, so following through on the threat would gain it nothing.

Interestingly, Roko, the originator of the thought experiment, later expressed regret about introducing the concept. He even blamed the LessWrong forum for planting the ideas that led to the basilisk in his mind. This highlights the potential psychological impact of such thought experiments.

So, while Roko's Basilisk may not be a real threat, it serves as a reminder to approach AI development with caution and foresight. It also underscores the importance of responsible exploration of complex ideas, even within thought experiments.
