I am not a singularitarian

Thursday, 5 August, 2010

My blog is specifically about human enhancement and the ethical/political debates about it. These two topics are the domain of transhumanism (the ideology of improving humans with technology). Transhumanism has very little to do with singularitarianism (the ideology of creating a superintelligence), and I do not describe myself as a singularitarian.

Futurist Eliezer Yudkowsky has defined four properties of a singularitarian:

  1. A Singularitarian believes that the Singularity is possible and desirable.
  2. A Singularitarian actually works to bring about the Singularity.
  3. A Singularitarian views the Singularity as an entirely secular, non-mystical process — not the culmination of any form of religious prophecy or destiny.
  4. A Singularitarian believes the Singularity should benefit the entire world, and should not be a means to benefit any specific individual or group.

I fall short on point 1, because though I think the singularity is technically possible, I don’t think it’s probable or desirable.

The singularity is defined as a point in time at which the future becomes inherently unpredictable, because a smarter-than-human intelligence has appeared and we, being merely human, cannot predict anything beyond that. This point might involve exponential change happening so fast we can't keep up, or it might just be a superintelligence doing something so smart we can't work it out, but the main point is this unpredictability or discontinuity. Hence the name, borrowed from the physics of black holes, where nothing beyond the event horizon can be observed.

I think the most likely path to anything even remotely resembling a singularity is increasing human intelligence to transhuman intelligence (and eventually to posthuman intelligence). But I think this will be a rather slow change, with diminishing returns: it might at first be easy to upgrade the human brain, but eking more smarts out of it will get harder and harder as we do, thanks to the limits of biological systems. It isn't that I don't think exponential growth can happen; it's just that it always hits a wall, and paradigm shifts rarely arrive just in time to sustain the pace of change, no matter what Raymond Kurzweil might think. So I think intelligence will likely linger at some pseudo-maximum value for a while, just as it has been lingering at roughly human levels for quite some time. In short, I expect slow change, not a singularity.

I also don’t think this change will have anything to do with artificial intelligence. Not because artificial intelligence is impossible, but because I think that by the time significant artificial intelligence can be created, it will be possible to merge human minds with machine minds, thereby blurring any distinction between artificial and human intelligence. I think humans are too greedy to let a machine outsmart them, especially in a way that defies any possibility of prediction.

Not only do I think the singularity is probably not going to happen, I also think there’s a good chance it can never happen (and a chance I could be wrong too). After all, it’s possible that a brain can never be smart enough to fully understand itself, and making that brain smarter just makes it more incomprehensible. We don’t really know the limits.

Furthermore, I think it’s undesirable to ever seek a singularity. This follows from the very definition, which requires an inherently unpredictable leap in intelligence. We shouldn’t do something if we have a good reason to suspect something bad might result, and I think being unable to know anything about the results is a good enough reason to suspect they might be bad. And if it is bad, we won’t even be able to fix it.

So I say, the singularity won’t happen. It’s unlikely to happen anyway, and even if it can, we should stop it. We should move forward carefully and cautiously, and indeed I think this is likely to be how it will happen anyway. We will slowly make ourselves smarter, and with our newly enhanced brains, analyse the future. The horse won’t bolt, because we’ll have our hands firmly on the reins (and besides it’s not a horse, it’s a snail).

(This post was written because I’ve been asked to write about the Singularity Summit. The program features some very interesting pieces, most of which fall broadly under the realm of transhumanism rather than the narrow and misguided realm of singularitarianism.)
3 comments

  1. Hi Josh,

    Are you familiar with the arguments for hard takeoff? Most of them revolve around the deep differences between human and AI thinkers. Here’s a summary:

    http://www.acceleratingfuture.com/articles/relativeadvantages.htm


  2. >We shouldn’t do something if we have a good reason to suspect something bad might result

    Surely you see that this is just the precautionary principle, so often misused against transhumanist technologies.

    The proper question is how likely the good and bad results are, and to what extent the good outweighs the bad. In the opinion of folks like Yudkowsky, there is too much at stake to not create a superintelligence, and doing so within certain constraints has positive expected utility. You don’t have to believe him, but he will show you his math.


    • You’re quite right, it’s essentially the same argument. The singularity is, however, defined as something that we can’t predict. Transhumanist technologies can, at least in the short term, be predicted and assessed on this basis.

And besides, my main argument is that the singularity won’t happen. The claim that this would also be for the better is just an aside.

