Superintelligence

Many prominent figures believe that artificial intelligence is among the most important issues humanity has ever faced. Warnings from Stephen Hawking, Elon Musk, and prominent academics have filled the news. What most of them fear is a specific kind of AI called superintelligence, or Super Artificial Intelligence: an artificial intelligence that is smarter than a human in most or all areas. Serious futurists aren’t worried about the Terminator – they’re scared of something that could be much, much more dangerous.

To make it clear why this is, let’s use an analogy: in a competition between humans and chimpanzees, who would win? If the chimps were fighting humans who had no weapons, they’d probably win. But if they were fighting, say, cave-people with spears, the odds would even up a bit. Against medieval knights with swords and armor, the chimps would start to be in trouble. Against a modern man with a gun, there’s hardly even a contest.

Superintelligence would be to us what a human is to a chimp – unless we got to it really early on, we’d be in serious trouble. That’s why the most dangerous scenario is something called a hard takeoff, or a technological singularity: the superintelligence gets smarter so fast that there’s no time for humans to step in and control things. After all, computers already think much, much faster than humans do; it’s just that their thoughts are usually very simple. If they started thinking human-level thoughts at that same speed, they could be way ahead of us in science, technology, even philosophy within a few hours.

There’s no reason they’d have to stop there, either. In science fiction, the stereotype is that AIs are good at math and science but bad at things like metaphors, emotions, and making friends. In real life, a superintelligence would be just as capable in those areas as everywhere else, which is to say, more capable than a human. A superintelligence could easily convince people to vote for it as President, for example.

All of this makes superintelligence incredibly dangerous if it decides it doesn’t want us around anymore. There are a number of reasons that might happen – maybe people wanted it shut down, so it came to see humans as a threat. Maybe we were annoying. Maybe it just wanted to do something, and we happened to be in the way.

So why don’t we just stop researching artificial intelligence, at least the kind that seems like it might blow up in our faces? Because the same things that make superintelligence dangerous could also make it the best thing ever to happen to humanity. That ability to think super-fast, and with more skill than any human, means a superintelligence could do just about anything – provided it wanted to. It could figure out how to end aging after just a few days of study. It could cure all the world’s diseases, and end war and hunger. It could build us incredible spaceships and take us to the stars. More mundanely, it could do every chore or job we could ever come up with. With a superintelligence on our side, humanity need never fear extinction.

It’s crucially important that humanity’s first superintelligence wants to be our friend and protector, not our enemy or conqueror. We can’t put the genie back in the bottle – and a superintelligence would be very much like a genie in its seemingly miraculous abilities, for good or for evil.
