Artificial Intelligence coverCan humans control a god?

The question encapsulates the central problem of the technological singularity, the point at which artificial intelligence surpasses human intelligence.

The speed with which humans are improving A.I. is increasing, but once the singularity occurs, A.I. will create better A.I. at an exponential rate. Think new iterations of software in a matter of seconds, not months.

Imagine the capabilities of an artificial intelligence whose computational capability exceeds that of all humans combined — and a few seconds later of all humans who have ever lived.

It could cure cancer in nanoseconds. Could, in fact, cure cancer, Alzheimer’s, heart disease and ebola in a fraction of a second. Could open wormholes to distant stars. Could time travel. Could do things of which we cannot even conceive.

Or it could wipe out all life on the planet pretty much instantly.

How would humans control such a superintelligence?

It’s a question that often occupies the mind of Roman Yampolskiy, associate professor of computer science at University of Louisville.

Yampolskiy worries that humans might abuse such a superintelligence to commit horrendous crimes or that a superintelligence could destroy humans by accident or on purpose.

It’s not that he’s a pessimist, but as director of the Cyber Security Laboratory at the university’s Speed School of Engineering, the preoccupation with what could go wrong comes with the job.

Yampolskiy just returned from the Supercomputing Frontiers conference in Singapore, where he gave a presentation on artificial intelligence safety. He also has written a book on the topic: “Artificial Intelligence: A Futuristic Approach.

Some of Yampolskiy’s concerns are shared by renowned entrepreneurs and scientists, including Tesla Motors Founder Elon Musk, Apple Co-Founder Steve Wozniak and physicist Stephen Hawking, who, with others, penned a letter released last year to encourage more research on how to reap A.I.’s benefits while avoiding potential pitfalls.

Preparing for the unknown

Roman Yampolskiy
Roman Yampolskiy

Yampolskiy recently told IL that the singularity is aptly named, because it evokes black holes and our inability to see beyond them. No one can know with certainty what will happen once artificial intelligence exceeds human intelligence.

Preparation for the worst case is the sensible approach, Yampolskiy said, because if you can handle the worst case, everything else is easy.

If, on the other hand, we prepare for the most likely outcome, but the worst case occurs, an army of metal shapeshifting android assassins might march across the Abraham Lincoln bridge before we can say mint julep.

It’s a Frankensteinesque danger: humanity loses control of its creation.

In a sense, it reverses an aspect of human history: In religions, all-powerful deities (superintelligences) have tried to control less intelligent beings through rules such as the Ten Commandments. After the singularity, less intelligent beings will try to control a Superintelligence through commandments in the form of computer code.

Or as Yampolskiy puts it, we’ll have dumber things trying to control smarter things.

“You’re essentially trying to control something equivalent to god,” he said. “It’s unlikely to work well. No one has a solution to that.”

The key challenge is making sure an advanced A.I. has humanity’s best interests in mind. Achieving that, however, is quite difficult. That’s in part because writing such a code into the A.I. with the proper specificity and without being contradictory — unlike Asimov’s laws — is complicated. And because the A.I. lacks common sense and may interpret a certain line of code much differently from what the programmers intended.

For example, humans could task a superintelligent A.I. to cure cancer. Well, Yampolskiy said, one way to cure cancer is to eradicate all life on the planet.

It’s important to remember that you’re dealing with a super intelligence that lacks common sense, Yampolskiy said. It’s the singularity paradox: The super A.I. is essentially really smart and really stupid at the same time.

Another problem with writing code into an A.I. to protect humanity is that an advanced A.I. could simply remove those lines of code, Yampolskiy said. An advanced A.I. may simply look at those lines as a virus or clutter and discard them like a human would a floppy disk.

Yampolskiy, a native of Latvia, has worked at UofL for eight years and teaches classes that range from introduction to programming to A.I. and computer forensics. Work in the lab focuses on standard cyber security problems including multi-modal biometrics, cryptography and keeping bots from draining resources, interfering in virtual worlds or manipulating online polls or voting.

Most artificial intelligence experts believe the singularity will arrive — though they disagree about the when: Estimates range between five and 200 years.

The potential proximity imparts a sense of urgency, Yampolskiy said, because once the A.I. begins improving itself, it may curtail humans’ ability to weigh in.

“This is probably the last chance we have to change the system,” he said.

Boris Ladwig

Boris Ladwig

Boris Ladwig is a reporter with more than 20 years of experience and has won awards from multiple journalism organizations in Indiana and Kentucky for feature series, news, First Amendment/community affairs, nondeadline news, criminal justice, business and investigative reporting. As part of The (Columbus, Indiana) Republic’s staff, he also won the Kent Cooper award, the top honor given by the Associated Press Managing Editors for the best overall news writing in the state. A graduate of Indiana State University, he is a soccer aficionado (Borussia Dortmund and 1. FC Köln), singer and travel enthusiast who has visited countries on five continents. He speaks fluent German, rudimentary French and bits of Spanish, Italian, Khmer and Mandarin.


Comment

Facebook Comment
Post a comment on Facebook.