Artificial intelligence – Man and machine in new harmony?
Hochreiter: To me, it constitutes a machine with cognitive abilities which are generally ascribed to humans. This is also how John McCarthy defined it in 1956. Such a machine perceives the world, plans and draws logical conclusions to achieve an objective.
Ruskowski: In my view, artificial intelligence primarily comprises a group of special mathematical procedures. These facilitate, among other things, the recognition of patterns, machine learning processes and also the interaction of a machine with its environment in the sense described by Mr. Hochreiter. But the original definition by McCarthy goes too far, in my opinion.
Your definition, Professor Hochreiter, sounds as if the aim of AI is to create a “new” human being.
Hochreiter: A machine that looks like a human being but is not a human would give rise to too many ethical issues. It would be foolish. Rather, we should develop something that is capable of providing the best possible support.
Ruskowski: I agree that replicating humans makes no sense. We must distinguish between two approaches to AI: strong AI, which aims to understand how humans function, and weak AI, which expands the abilities of humans. The second approach is what we are pursuing at DFKI, the German Research Center for Artificial Intelligence, for example in terms of using AI for production. To us, AI is like an adapter between the digital and human worlds. We humans are not that great at processing large amounts of data, for instance. AI can handle the preprocessing, allowing us to make better and faster decisions on that basis.
Hochreiter: I don’t think that is ultimately the goal. Strong AI is the more interesting route in my opinion. We are striving to build complete systems that can see, hear, understand and handle objects. These universal systems can then be specialized: one cooks, another cleans and a third works as a roofer. Facebook, Google and Amazon are already working on such universal systems in their own fields, and so are automakers for the purpose of recognizing the environment for autonomous driving.
Ruskowski: I don’t think that one system for everything is the way to go. I believe in diversification with many small systems, from which we pick specialized abilities that can then be combined on a modular basis. What concerns me about your approach is that a single player could monopolize the market. A system like Facebook would then have even more influence on our entire lives.
Regardless of what approach we pursue, do you believe that AI will replace human workers?
Ruskowski: No. Look at Germany, for example: we have used automation for decades and still have full employment in many sectors. Even back in the 1980s, engineers dreamed of factories without people. It has never come true – thankfully. But why? Some think that AI is inexpensive or even free, but that’s just not the case. These systems are so complex that their development entails considerable costs. This is the reason why AI will not readily replace humans in factories. Quite the contrary: by using these technologies, we can even bring industrial production back to Europe. This assumes that we succeed in further developing the automation process: to date, humans have had to adapt to machines. For the first time, AI makes it possible for automation to adapt to humans, so that machinery becomes a useful tool and assistant.
Hochreiter: I also believe that AI will yield more work, not less. Machines are becoming more intelligent and will be able to talk to humans. A lathe will be able to tell you that the lubricant is too hot or the chips are too long. Relieved of tedious tasks, humans will become teachers or instructors, and will only have to deal with exceptions, which are much more interesting.
Ruskowski: But we must not forget to involve employees and works councils from the beginning when such systems are introduced. The transformation has to happen from the bottom up, rather than being dictated from the top down. One important issue is transparency, for example, when it comes to monitoring technology.
Hochreiter: I agree. The user must understand what is happening, so that he or she can actively participate.
Movies often portray artificial intelligence as menacing – is there any truth to that?
Hochreiter: It is true that many people are afraid of losing control, which would be virtually tantamount to domination by machines. But why would AI want to dominate humans when it requires different resources and habitats than humans, who are bound to the biosphere? Another argument against such gloomy scenarios is the fact that humans exercise selective pressure on AI, meaning that only useful and helpful AI will be further developed and reproduced. I am optimistic that this pressure will encourage us to develop productive AI.
Ruskowski: Especially because many of the scenarios that are presented to us at the movies are so absurd, it is important for us to define clearly in reality what AI is and what it isn’t. This includes learning how to use it in a responsible manner and repeatedly explaining how AI works.
Hochreiter: Exactly. Why do we demand 100 percent perfection from AI from the start, but not from humans? Let’s take autonomous driving, for example. Before humans are allowed to drive, they have to obtain a driver’s license. Similarly, a machine must first demonstrate that it can accomplish everything that it is asked to do, and that it can do so with adequate safety and resilience. And unlike a human, it is not prone to certain mistakes: it does not get tired and is not distracted by a ringing cellphone. In the long term, there will be significantly fewer accidents thanks to AI.
How should we, the economy and society, get ready for AI?
Hochreiter: In terms of the legal framework, we need to catch up quickly. At the moment, there are many individual stories that are circulating in the press regarding things that can go wrong. I have had discussions with criminal lawyers. The circle of people involved with AI products has grown. This raises new questions: whose fault is it if something goes wrong when using a machine? The programmer, the sensor manufacturer or the people who have trained the system and fed it the data? We have to come to terms with the fact that we will not be able to pre-program all conceivable eventualities. After all, the very characteristic of AI systems is that they learn from perceiving their environment.
Ruskowski: We also need to make significant strides in education and training. Our current curricula are still very analog-based and humanistic, for example. Not that there is anything wrong with that, but they must be expanded. I have noticed that schools in many cases brush aside the digital world. As a result, young people are insufficiently prepared to responsibly handle digital opportunities such as AI.
Artificial intelligence and robotics
To an increasing extent, robots are no longer being used only in strictly organized production cells, separated by safety fencing, but are collaborating with humans in production operations or even have to find their way around a “chaotic” environment in domestic settings. Artificial intelligence helps robots adapt to such uncertainties and variability. Until now, robots have performed repetitive tasks with consistent precision and high repeatability. The requirements of the future are much more diverse, however. If robots are to handle everyday tasks, for example, they must be able to react flexibly to events in their surroundings. To achieve this, KUKA is using machine learning, a subdomain of AI. This involves learning procedures from examples instead of explicitly programming fixed sequences. But the same still applies: human experience and human creativity are ultimately irreplaceable.
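The idea of learning from examples rather than programming fixed sequences can be illustrated with a minimal sketch: a nearest-neighbor classifier that assigns a label to a new sensor reading by comparing it to labeled examples it was given beforehand. The data here (two-dimensional gripper readings labeled “ok” or “reject”) is invented purely for illustration and is not from KUKA or DFKI.

```python
import math

def nearest_neighbor(examples, query):
    """Return the label of the training example closest to `query`.

    `examples` is a list of (feature_vector, label) pairs; the system
    "learns" simply by storing them, rather than by being programmed
    with explicit rules for each case.
    """
    best = min(examples, key=lambda ex: math.dist(ex[0], query))
    return best[1]

# Labeled examples the system learns from (hypothetical sensor data).
training = [
    ((0.9, 1.1), "ok"),
    ((1.0, 0.9), "ok"),
    ((2.5, 3.0), "reject"),
    ((2.8, 2.6), "reject"),
]

print(nearest_neighbor(training, (1.05, 1.0)))  # near the "ok" cluster
print(nearest_neighbor(training, (2.6, 2.9)))   # near the "reject" cluster
```

Adding a new case means adding a new example, not rewriting the program’s logic – which is what lets such systems react flexibly to variability in their surroundings.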
Source header picture: iStock/Wenjie Dong