Boeken

Wednesday, April 22, 2015

More afraid of stupid people than of smart machines

Here is the column that I read yesterday at a debate in De Balie about the opportunities and risks of artificial intelligence, at the request of the technology magazine De Ingenieur.

I have also written a Dutch version, which can be read here.

This is the flattering reaction on Twitter from Frank van Harmelen, professor of Knowledge Representation & Reasoning at the Vrije Universiteit Amsterdam.



Okay, here's my column:

“Technology has given life the opportunity to flourish like never before...or to self-destruct.”

This is not a sentence I made up myself. These are the grandiose words of the Future of Life Institute. The institute investigates the impact of the development of superintelligent computers and robots. In an open letter, the institute calls for the development of artificial intelligence whose positive impact is maximized and whose negative impact is minimized. Who would not want that?

There is only one problem: any technology can be used both for good and for ill. With a knife, you can cut bread, or someone’s throat. A robot plane that kills terrorists can, in the hands of terrorists, just as easily kill innocent civilians. Nothing human will be alien to the robot of the future.

The international media have interpreted the open letter as a warning against superintelligence. Of course. For decades the media have enthusiastically covered predictions about computers and robots outsmarting and eventually subduing humans.

However, the reality of today, and of the coming decades, is much less sexy. Let me quote Pedro Domingos, a leading American researcher in the field of artificial intelligence. He says, and I quote: “Everybody’s so worried about computers becoming really, really smart and taking over the world, whereas in reality what’s happened right now is computers are really, really dumb and they’ve taken over the world. The world cannot function without computers anymore. It would be better if they were smarter.” End of quote.

One of the founders of the Future of Life Institute, Victoria Krakovna, reacted with surprise to the media’s one-sided focus on superintelligence. What a surprise! That’s what you get when you let the whole world know that you investigate artificial intelligence in the context of − and I quote again − “existential risks facing humanity”.

The Future of Life Institute should emphasize that all intelligent systems are still a collaboration between human and artificial intelligence. But yeah, that sounds a lot less sexy.

In this cooperation, the smart machine will do the tasks in which it excels: processing information rapidly, remembering infallibly, and never tiring. Humans will do the things they do much better than the machine: understanding context, grasping intentions and emotions, using creativity, and developing ethical norms and values.

The cooperation between man and machine can be a matter of life or death. Let me illustrate this with two examples.

Two weeks ago it was disclosed that on December 14, 2014, the autopilot of a Scottish plane operated by the company Loganair had steered the plane toward an almost fatal crash. The plane was hit by lightning. The autopilot went haywire and put the plane into a steep descent. It even blocked the human pilot’s attempts to intervene quickly. Barely twenty seconds before the plane would have crashed, the human pilot managed to pull it up in a final attempt.

Much less fortunate were the passengers of the Turkish Airlines plane that flew to Amsterdam on February 25, 2009. As the plane approached Schiphol Airport at an altitude of six hundred meters, the altimeter suddenly showed minus two meters. The autopilot concluded that the aircraft had already landed and sharply reduced engine power. The human pilots realized the mistake too late, and the plane crashed near Schiphol. Nine people died and 120 were injured.

These are two examples of the Automation Paradox: the more automation, the more crucial human intervention becomes when the thinking machine makes a mistake. And there is always a chance of a mistake.

With increasing automation, we will run into this paradox more and more often. Automation shifts the point where mistakes are made: for example, from operators, pilots and drivers to the programmers who write the software, and to the interaction between human and machine.

The Automation Paradox has two causes. First, people tend to trust machines more than their own common sense, which can lead to over-reliance on automation. Second, more automation means less practice for human operators. This increases the risk of an accident when the human has to correct the machine, as in the Turkish Airlines crash.

To tackle the paradox, we need to treat humans as an integral part of any artificially intelligent system. And to ensure that operators keep their human skills sharp, we occasionally need to switch off automated systems during training sessions.

According to the open letter of the Future of Life Institute, artificial intelligence can contribute to the worldwide eradication of poverty and disease. Unfortunately, non-technological problems are never solved by technology alone. Smart computers and robots will only make mankind more successful if we put humans, not machines, at the heart of our thinking. Our challenge for the future is to combine the best of two different worlds: the best of human and the best of artificial intelligence. In this effort, we should be more afraid of stupid people than of smart machines.