Don't Panic!

February 25th, 2013

I'm deeply indebted to Charlie Stross for bringing the concept of Roko's basilisk to my attention. Well, either that, or damned forever to have an avatar of my presumably long-dead self tormented by a vengeful AI for failing to believe in it. We'll see:

Roko's basilisk is a proposition, suggested by a member of the rationalist community LessWrong, that speculates about the potential behavior of a future godlike artificial intelligence. The proposition, and the dilemma it presents, somewhat resemble a futurist version of Pascal's wager.

[...]

The claim is that this ultimate intelligence may punish those who fail to help it (or help create it), with greater punishment accorded those who knew the importance of the task. That bit is simple enough, but the weird bit is that the AI and the person punished have no causal interaction: the punishment would be of a simulation of the person (e.g. by mind uploading), which the AI would construct by deduction from first principles. In LessWrong's Timeless Decision Theory (TDT), this is taken to be equivalent to punishment of your own actual self, not just someone else very like you.

Roko's basilisk is notable for being completely banned from discussion on LessWrong, where any mention of it is deleted. Eliezer Yudkowsky, founder of LessWrong, considers that the basilisk would not work, but will not explain why, because he does not consider open discussion of the notion of acausal trade with possible superintelligences to be provably safe.

Wow. Just wow…

[Via Charlie's Diary]
