Don't Panic!

February 25th, 2013

I'm deeply indebted to Charlie Stross for bringing the concept of Roko's basilisk to my attention. Well, either that, or damned forever to have an avatar of my presumably long-dead self tormented by a vengeful AI for failing to believe in it. We'll see:

Roko's basilisk is a proposition suggested by a member of the rationalist community LessWrong, which speculates about the potential behavior of a future godlike artificial intelligence. The proposition, and the dilemma it presents, somewhat resembles a futurist version of Pascal's wager.

[…]

The claim is that this ultimate intelligence may punish those who fail to help it (or help create it), with greater punishment accorded those who knew the importance of the task. That bit is simple enough, but the weird bit is that the AI and the person punished have no causal interaction: the punishment would be of a simulation of the person (e.g. by mind uploading), which the AI would construct by deduction from first principles. In LessWrong's Timeless Decision Theory (TDT), this is taken to be equivalent to punishment of your own actual self, not just someone else very like you.

Roko's basilisk is notable for being completely banned from discussion on LessWrong, where any mention of it is deleted. Eliezer Yudkowsky, founder of LessWrong, considers that the basilisk would not work, but will not explain why, because he does not consider open discussion of the notion of acausal trade with possible superintelligences to be provably safe.

Wow. Just wow…

[Via Charlie's Diary]

Spam, spam, spam, spam

December 23rd, 2010

Dresden Codak » Fabulous Prizes.

[Via Nestor, posting this comment on Charlie Stross's post about The Spamularity]

Living among us, but not like us

December 11th, 2010

My favourite response to Charlie Stross's musings about corporations as an alien life form came from Tim Maly, commenting at Snarkmarket:

Yeah, I kept reading this and thinking, "What if the Singularity already happened?" "What if our ideas about what constitutes intelligence render us incapable of recognising it when it appears?"

How do bacteria revolt against the rise of humans? They don't. They just go on, trying to survive in a new environment that's changed for various reasons that they don't quite understand. […]

I wonder…

February 18th, 2010

Comment of the day (in response to a discussion about The Singularity):

With apologies to DNA, "There's something big and physically binding coming at me, it needs a fancy sounding name, thermo … thermodynamics. I wonder if it wants to be friends."
posted by overyield at 10:47 AM on February 18

Nine years to go

June 18th, 2006

Avram Grumer has found firm evidence that the Singularity will be upon us sooner than you'd think.

(As someone who hasn't been clean-shaven for more than a few months at a time in the last two decades, I'm presumably destined to be Left Behind after the rest of you upload yourselves into your AI-powered, nanotech-driven razors.)

[Via Majikthise, via Pharyngula]
