
“AI will always remain a karaoke machine of the human mind.”

An interview with Andrian Kreye, editor of the Feuilleton at the Sueddeutsche Zeitung

The rise of intelligent machines across business and society carries implications that demand constructive deliberation. What are the consequences of this rise, and how can it be governed?

The following interview was conducted by Alonge Olusegun and Dr. Laura Bechthold as part of the Future of Leadership Initiative’s (FLI) project “Yes, Mr Robot! Leadership and the Rise of Intelligent Machines”.

WHAT ARE THE MOST DISRUPTIVE DEVELOPMENTS WHEN IT COMES TO THE RISE OF MACHINE INTELLIGENCE?

There were two important milestones in recent years. One is that we can now store and process big data; the other is the increase in processor speed. Today we are at a tipping point where AI is possible in entirely new ways. Until recently, AI was mostly a research topic; now it has become market ready. Furthermore, machines are starting to make their own decisions. They can learn and improve themselves without human input. Machines are no longer just tools; they are becoming companions. That is something new for humans, because we have to learn to deal with this new kind of being. For me, the emancipation of the machine from human input was illustrated by the introduction of AlphaGo Zero. While the algorithm behind the first version (AlphaGo) was fed with data from millions of games played by humans, AlphaGo Zero was only taught the abstract rules of the game. Then the machine played against itself for two or three days and won against the first version, AlphaGo.

ABOUT ANDRIAN KREYE

Since 2007, Andrian Kreye has been editor of the Feuilleton at the Sueddeutsche Zeitung. From 1988 to 2006 he lived in New York, working as a correspondent for Sueddeutsche Zeitung, Frankfurter Allgemeine Magazin and Tempo. During that time he also worked extensively in Latin America, Africa, Asia and the Middle East. Andrian has published several award-winning documentaries and is the author of several books, one of them on artificial intelligence: “Macht Euch die Maschinen untertan: Vom Umgang mit künstlicher Intelligenz.”

[Authors’ note: AlphaGo was developed by Alphabet Inc.’s Google DeepMind in London. AlphaGo Zero quickly developed Go skills beyond human capabilities, winning 100–0 against the previously published, champion-defeating AlphaGo. An article about the learning process of AlphaGo Zero was published in Nature in 2017: https://www.nature.com/articles/nature24270]
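[To make the self-play idea concrete, here is a deliberately tiny, hypothetical sketch – not DeepMind’s actual method, which combines deep neural networks with Monte Carlo tree search. An agent is given nothing but the rules of a simple subtraction game and improves purely by playing against itself; all names, parameters and the game itself are illustrative assumptions.]

```python
# Minimal self-play sketch (illustrative only, not DeepMind's method):
# a tabular agent learns a simple subtraction game purely by playing
# against itself, given nothing but the rules.
import random
from collections import defaultdict

PILE, MOVES = 21, (1, 2, 3)      # rules: take 1-3 stones, taking the last stone wins
Q = defaultdict(float)           # learned value of (pile_remaining, move) for the player to act
EPSILON, ALPHA = 0.1, 0.5        # exploration rate and learning rate

def choose(pile, greedy=False):
    """Pick a legal move, mostly greedily with respect to current value estimates."""
    legal = [m for m in MOVES if m <= pile]
    if not greedy and random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda m: Q[(pile, m)])

def self_play_episode():
    """Both 'players' are the same agent; the winner's moves are reinforced."""
    pile, history = PILE, []     # history holds (state, move) pairs, players alternating
    while pile > 0:
        move = choose(pile)
        history.append((pile, move))
        pile -= move
    # The player who made the last move won; propagate +1/-1 backwards through the game.
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward

for _ in range(20_000):
    self_play_episode()

# After training, the agent should leave a multiple of 4 whenever it can.
print([choose(p, greedy=True) for p in (21, 14, 9, 5)])
```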

IMAGINE 10 YEARS FROM NOW: WHAT IMAGES AND USE CASES COME TO YOUR MIND WHERE OUR SOCIETY COULD BENEFIT MOST FROM ARTIFICIAL INTELLIGENCE? WHAT COULD BE BETTER THAN TODAY?

In every field where humans reach a limit, AI can be beneficial. I think healthcare is the sector where AI will bring the most benefits for humankind, as it allows us to analyze ever larger patterns. This will be extremely valuable for medical research. Industry will also benefit, because quality control can be done at a whole new level. Complex systems, from train logistics to climate science, are all likely to benefit from AI. In the end, we need to identify the areas where AI makes leaps possible that would not be possible without it.

ARE THERE CERTAIN ASPECTS IN YOUR CURRENT WORK AS A JOURNALIST THAT YOU ARE EXCITED ABOUT OR LOOK FORWARD TO WHEN THINKING ABOUT AI?

Here at Süddeutsche Zeitung, we already use it. For instance, in the case of the Panama Papers, we had unbelievable amounts of data to go through. To do so, we programmed our own algorithms, which are basically AI, because they are self-learning and self-improving. Another example is the last regional elections in Bavaria, when we provided online analyses for all 91 voting districts. Those analyses were run and compiled by AI as well, because we would not have had the manpower to do them manually. Today, we have a whole department of data journalists. This opens up new forms of investigative journalism that were not possible before.

WHERE ARE THE LIMITS TO AI IN YOUR AREA?

In journalism and creative work, AI is most useful for all matters that involve metrics, such as business, sports or election reporting. AI can also do technical writing. But AI will never be able to write a good arts review, commentary or essay – at least not in the immediate future. AI will always remain a karaoke machine of the human mind. We will reach interesting levels of perfection, but we will never reach the full spectrum. As soon as you write about a human experience, AI has limits. AI will never suffer. AI will never have the feeling of mortality. AI will never love. In philosophy, there is a concept called qualia, which refers to personal experience that can never be copied – not even from human to human. AI will never be able to generate those qualia. So even a perfect AI would always remain a philosophical zombie.

ABOUT THE PROJECT

Each year, the Future of Leadership Initiative selects a Grand Challenge of Leadership – a pressing, complex global issue that requires an interdisciplinary understanding and a participatory approach to be tackled. An integral part of our activities is the dialogue with global thought leaders to learn from their experiences and insights. Our goal in 2019: to explore how the rise of technologies such as artificial intelligence and robotics will impact the future of our global society and economy. Student researcher Alonge Olusegun and Dr. Laura Bechthold conducted this interview with Andrian Kreye.

LET US MOVE TO MORE CRITICAL ASPECTS OF MACHINE INTELLIGENCE. LET US THINK AHEAD AGAIN. IN WHICH AREAS OF SOCIETY COULD MACHINE INTELLIGENCE HAVE A NEGATIVE IMPACT?

There are many science fiction dystopias in which AI takes over the world, enslaves humans or destroys humankind. I think those are not valid concepts. They are fed by literary and film visions, and they are there to entertain and to tickle your fears. Take, for instance, Nick Bostrom’s thought experiment of the paperclip maximizer. It says that if you program an AI to produce paper clips, the AI will just do so at any cost. It will start extracting all resources and turn everything into paper clips. It will not care about the environment. At some point, after having used up all other resources, it might even go after humans to turn our bodies into paper clips. And as soon as the paperclip maximizer has destroyed this planet, it will move to the next planet and start producing paper clips there. I think this is an interesting thought experiment. But why would anybody construct an AI that could become destructive without an off switch? And if a machine is really intelligent, why would you assume that it wants to conquer anything? Intelligence does not automatically equal conquering, dominating or destroying something – that is just a very human thing. We have been programmed by evolution to dominate and to win competitions, but an AI does not have that. There is no reason why an AI would aspire to this.

Extreme dystopias can distract from the real dangers of AI. For instance, you will find a little bit of the paper clip analogy in social media, which was initially programmed to do very neutral things, like connecting people. But the algorithm was programmed to make people stay on the site as long as possible. Soon, the algorithm figured out that people stayed longer on the system if they were enraged and started optimizing for that. This induced an erosion of the public discourse until actual lynch mobs and killings occurred – triggered by social media. Another real danger of automation is poverty. A lot of jobs, especially many white-collar jobs like accounting or parts of the legal system, will be automated. A long-term result of automation could be widespread poverty.
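[To make that dynamic concrete, here is a hypothetical toy sketch – not any real platform’s code. A feed algorithm told only to maximize time-on-site, modeled as a simple epsilon-greedy bandit over content categories, drifts toward whatever category the simulated audience dwells on longest; the categories and dwell times are invented for illustration.]

```python
# Hypothetical sketch of the dynamic described above: a feed algorithm told
# only to maximize time-on-site (a simple epsilon-greedy bandit over content
# categories) drifts toward whatever the simulated audience dwells on longest,
# even though nobody asked for that outcome. All values below are invented.
import random

CATEGORIES = ["friends_updates", "cat_videos", "outrage"]
# Simulated average dwell time in minutes per category (an assumption for the toy).
TRUE_DWELL = {"friends_updates": 2.0, "cat_videos": 3.0, "outrage": 5.0}

estimates = {c: 0.0 for c in CATEGORIES}   # learned average dwell time per category
counts = {c: 0 for c in CATEGORIES}
shown = {c: 0 for c in CATEGORIES}

def pick_post():
    """Mostly exploit the category with the highest estimated dwell time."""
    if random.random() < 0.1:
        return random.choice(CATEGORIES)
    return max(CATEGORIES, key=lambda c: estimates[c])

for _ in range(10_000):
    category = pick_post()
    shown[category] += 1
    dwell = random.gauss(TRUE_DWELL[category], 1.0)          # noisy engagement signal
    counts[category] += 1
    estimates[category] += (dwell - estimates[category]) / counts[category]  # running mean

print(shown)   # the feed ends up dominated by the "outrage" category
```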

WHAT ARE VITAL QUESTIONS BUSINESS LEADERS HAVE TO ASK THEMSELVES, THEIR TEAMS, AND THEIR ORGANISATIONS FOR THE FUTURE?

Leaders need to prepare themselves to be leaders of humans and machines. You will need to lead machines just as you lead a workforce. With that, you will have to face the ethical – and at some point maybe even the legal – obligation to put your human workers first and your machine workers second. One basic question that you should always ask is: Am I adding value with this AI, or am I just automating for efficiency and savings? In a free-market world, it is going to be difficult to keep people from trying to work more efficiently and more cheaply. But this is an ethical problem that every leader will have to face.

IN YOUR OPINION, WHO IS THE RESPONSIBLE ENTITY TO SOLVE THE ETHICAL QUESTIONS BROUGHT ABOUT BY AI?

I think society is in charge and everybody has to play a part. One of the first steps would probably be to educate developers, engineers, and scientists in ethical questions. I have conducted roundtables on that subject with people from the humanities, engineers and business people. Interestingly, the engineers were often those who talked the least. At the same time, they were the most excited afterwards, because it was the first time that someone had talked to them about the topic. We need ethical education in the engineering world, and when setting up new systems, we need to follow ethical guidelines like those of the IEEE. Maybe we need a security council for the future, where we bring engineers, scientists, business people, philosophers and experts from the humanities (like historians, lawyers and politicians) together to talk about these things. Companies should start to invite philosophers or social scientists to figure out what their role is. What do we need? What do we want?

FROM A LEADERSHIP PERSPECTIVE, WHAT WILL BE THE MOST IMPORTANT SKILLS IN THE FUTURE? WHAT SHOULD BE TAUGHT MORE, OR LESS, THAN TODAY?

I think digital education is important. I don’t think you have to teach coding in school, because that is a mechanical thing which will become less relevant soon. But you have to educate children about concepts: What does AI mean? What can it become? I am from a generation that did not grow up with AI, so I have to be educated in a different way. But the good thing is that people are really interested in AI topics, so we can leverage that interest. People should know what algorithms are and understand the basic principles of self-learning systems, but I don’t think that everybody needs to be able to program them.

WHAT IS YOUR RECOMMENDATION FOR YOUNG TALENTS ON HOW TO PREPARE THEMSELVES FOR A FUTURE WITH AI?

Read a lot. Get a broader horizon than your field of expertise. It is becoming more and more crucial in any business that you look beyond your actual skills and figure out the context. Also, critical thinking and common sense are skills that will become really important to anybody’s work in the future. You can’t stop the future, but you can create your own. That is everybody’s task for the time being. And AI is absolutely exciting. Just don’t become too euphoric.
