Everyone works for Google

Robots that work alongside people or replace them. Software so intelligent that it plays chess, Jeopardy, and Go better than the greatest champions. Computers that recognize images and diagnose diseases. Assistants on our cell phones that book flights. And machines that make and carry out decisions on their own - artificial intelligence is already changing our lives today.

And yet the technology is only just getting started. The features section is devoting a series to the questions raised by the triumphant advance of artificial intelligence (AI): What will a future look like in which machines can do many things better than humans? Who is responsible for their actions? What should be allowed to happen, and what, despite all the utopian visions, will remain science fiction? In this installment, the Internet critic Evgeny Morozov examines the interplay of big data and artificial intelligence.

Anyone who has ever tried to work with Google's platform for researchers - it is called "Google Scholar" - will sooner or later run into a digital wall. "Google Scholar" then requires users to prove that they are not robots, usually by deciphering a combination of wildly distorted letters and numbers and typing them into an empty field.

Such blocks, known as CAPTCHAs, have long been ubiquitous on the Internet. They prevent automated programs that pose as human beings from causing damage, for example by stealing data from websites or buying up coveted concert tickets in large numbers.

In newer versions of these tests, users are asked to tell different street signs apart, or to distinguish waterfalls from lakes and sports cars from small trucks. There are also photos of streets broken down into dozens of tiles, in which users are supposed to recognize whether traffic signs indicate something specific, for example a direction or a prohibition.
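To make this mechanism concrete: many independent, noisy answers for the same image tile can be collapsed into a single training label by simple majority voting. The following Python sketch illustrates the idea; the function, the labels, and the consensus thresholds are my own illustrative assumptions, not a description of Google's actual pipeline.

```python
from collections import Counter

def aggregate_labels(answers, min_votes=3, min_agreement=0.7):
    """Collapse many users' answers for one image tile into a single label.

    answers: labels submitted by different users for the same tile,
             e.g. ["stop_sign", "stop_sign", "speed_limit", "stop_sign"].
    Returns the majority label, or None if there is no clear consensus.
    """
    if len(answers) < min_votes:
        return None  # not enough independent votes yet
    label, count = Counter(answers).most_common(1)[0]
    if count / len(answers) < min_agreement:
        return None  # users disagree too much; discard the tile
    return label

# Example: four users tagged the same tile, three of them as a stop sign.
print(aggregate_labels(["stop_sign", "stop_sign", "speed_limit", "stop_sign"]))
# -> "stop_sign"
```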

Users train the computer - without even knowing it

It is safe to assume, however, that solving such a block has a side effect for the users of Google Scholar: they are probably teaching Google's self-driving cars how to navigate cities and read traffic signs.

This is Google's little secret: while other technology companies try their best to describe a street sign to their artificial intelligence in painstaking mathematical detail, Google can simply get a million users to teach this knowledge to its computer systems.

A famous "deep learning" experiment shows that this is possible: one of Google's systems learned what a cat looks like simply by looking at stills from cat videos on YouTube. The trick with Google Scholar is that Google can have thousands of people classify photos and thereby learn everything worth knowing about the shape and content of a stop sign that much faster.
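As a rough illustration of that last step, and emphatically not a description of Google's actual systems: once enough tiles carry crowd-derived labels, they can feed an entirely ordinary supervised training loop. The tiny PyTorch sketch below stands in for it; the network, the tile size, and the random stand-in data are invented for the example.

```python
import torch
from torch import nn

# A deliberately tiny convolutional classifier: 64x64 RGB tile -> "stop sign or not".
# Architecture and sizes are illustrative assumptions, not Google's actual model.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # two classes: stop sign / no stop sign
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-ins for tiles whose labels came from aggregated user answers.
images = torch.randn(8, 3, 64, 64)   # a batch of 8 random "tiles"
labels = torch.randint(0, 2, (8,))   # crowd-derived 0/1 labels

for step in range(100):              # an ordinary supervised training loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

The point of the sketch is how unremarkable it is: the hard part is not the loop, it is getting a million people to produce the labels.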

Behind every AI there is (at least) one clever head

Three lessons can be learned from this. First, numerous advances in the development of artificial intelligence bear the mark of the "Chess Turk" effect - named after the person hidden inside the famous chess automaton. In other words, while it is tempting to attribute all progress in artificial intelligence solely to the ingenuity of researchers or their vast resources, it is also the result of the ability to collect, classify, and analyze data. And it is precisely these tasks that are often performed by unsuspecting people.

An example: "Google Now", Google's virtual assistant, often knows exactly which articles a user will like. But it does not know this because it has "cracked" the user's personality, as one might assume. No, its recommendations are so good because, first, it knows which articles someone has read in the past and which of them they liked; second, because it knows which other people have read those articles and whether they, in turn, liked them; and third, because it knows which of these articles a particular user has not yet read.

By suggesting precisely those articles that fit but have not yet been read, the assistant creates a kind of wow effect: how clever this recommendation system is! In truth, the systems are rather dumb and mechanical. They are only good because they have access to so much data that users have generated.
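"Rather dumb" can be taken literally. The following sketch is a bare-bones user-based collaborative filter along the three steps just described: score articles that similar readers liked and that this user has not read yet. The data, the names, and the overlap-count similarity are invented for illustration; Google's actual method is certainly more elaborate.

```python
# Which articles has each reader liked? (Invented toy data.)
liked = {
    "anna":  {"a1", "a2", "a3"},
    "ben":   {"a2", "a3", "a4"},
    "clara": {"a1", "a5"},
}

def recommend(user, liked):
    mine = liked[user]
    scores = {}
    for other, theirs in liked.items():
        if other == user:
            continue
        # Step 2: similarity = how many liked articles the two readers share.
        overlap = len(mine & theirs)
        if overlap == 0:
            continue
        # Step 3: candidates = liked by the similar reader, unread by this user.
        for article in theirs - mine:
            scores[article] = scores.get(article, 0) + overlap
    # Best-scoring unread articles first.
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("anna", liked))   # -> ["a4", "a5"]
```

Nothing in this models a personality; the quality of the output hinges entirely on how much reading data is available.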

Google is harnessing scientists to the AI cart

Second, the wholesale collection and analysis of user data in connection with artificial intelligence means that we have to talk about the political dimension of AI technology. It is undoubtedly impressive that so many scientists working in publicly funded fields are supporting Google Scholar - and thereby Google's self-driving cars. But why are they doing this work for free? Is the return adequate? Is the benefit that Google Now offers me as valuable to me as my data is to the company?

Google's logic is evident here: first, the company claims the data it processes as its own and stows it away under lock and key. Then it allows a small amount of free use of this data - but anyone who wants to use it on a large scale, as in extensive research projects with Google Scholar, is immediately hitched to the AI cart. If, for example, scientists want to study citation practices in articles on Roman architecture, they first have to teach Google something about the streets of Rome.

Whoever has the data has the power

Third and finally, anyone who campaigns for shifting power - including power over information - to citizens should realize that it is not enough to secure the rights to data and to ensure that it is available to the citizens who produce it. It may feel good that I, or an institution - my city or my association - can access the data that we otherwise hand to Google for free. But will anyone who is not Google actually benefit from it? Unfortunately not, because meaningful insights also require the right infrastructure, not to mention the sheer volume of data. You cannot build a house with five bricks.

Anyone who advocates shifting power away from Google and other such corporations must therefore take a holistic approach and look at both sides of the coin; concentrating on just one aspect, such as the handling of data, does not do justice to the problem. For one cannot seriously doubt that much of the progress in artificial intelligence is as much a product of political and economic decisions - including decisions about who owns data - as of genuine advances in the theory and practice of AI.

There is no shortage of cheering for Silicon Valley as the supposed sole site of progress. But it is certainly worth asking what progress might have been achieved if the system there did not encourage data hoarding by a handful of corporations. It is precisely this consideration that leads to a delicate question: what would fully democratized access to AI technologies actually look like?

Evgeny Morozov is considered one of the harshest critics of Silicon Valley. His book "The Net Delusion" was an international bestseller. He is currently researching the history of the Internet at Harvard University. Translation: Philipp Bovermann.