Sunday, February 21, 2021

AI made in Europe

How should Europe respond to the lightning-fast AI developments in China and the US? Three AI experts give answers from an academic, a business and a societal perspective.

This article was published in the Dutch monthly technology magazine De Ingenieur in February 2021. The magazine kindly gave permission to publish a freely available English translation of the original article.


What should Europe do from an academic perspective?  

The EU countries may have jumped for joy at the election of Joe Biden as the new president of the US, but the EU is expected to have to rely on itself much more than before, also under Biden’s presidency. This applies to the field of artificial intelligence (AI) as well. The US will strengthen its ties with the EU more than it did under Donald Trump, but mainly because of a common interest in the geopolitical contest with China.

China wants to be the world leader in artificial intelligence by 2030. To that end, it has drawn up a concrete roadmap: first catch up, then become the world leader in a few AI areas within five years, and lead the entire AI field within ten years. The US, the current world leader in many AI fields, has turned more inward in recent years. In AI, too, the rule in the US is: our own interests first.

Holger Hoos is professor of machine learning at Leiden University and chairman of the board of CLAIRE, an organization founded by the European AI community. CLAIRE’s goal: to make Europe a world leader in human-centered, reliable AI. How does he think the EU should respond to AI developments in China and the US?

Hoos: “In the US, industry is leading, in China the government. The best niche for Europe is to focus on human-centered AI: ‘AI for Good, AI for All’. Let’s develop AI that is in line with European values and benefits all citizens. For example, think of AI that contributes to the 17 Sustainable Development Goals set by the UN.”

Hoos emphasizes that the EU can only become a world leader in human-centered AI if it also shows leadership in AI technology development and basic research. Hoos: “It is wrong to think that we can buy technology from the US or China and then roll it out in Europe. You will only get human-centered AI if you build it from the ground up yourself according to the European values that we consider important. And we will also have to do the fundamental research ourselves and not leave it to the big American tech companies.”

In recent years, the EU has shown itself decisive when it comes to regulating digital technology, for example with the GDPR (which came into effect in 2018 to protect the data and privacy of European citizens), and with the imposition of fines on tech companies that do not comply with European rules. Hoos agrees that the EU should also lead the world in AI regulation, but issues a warning: “You cannot be a world leader in regulation without also being a world leader in AI technology and research.”

Although the EU itself does not want to talk about a global AI race between Europe, the US and China, the fact is that the worldwide demand for AI expertise and talent far exceeds the supply. Companies are eagerly buying AI talent away from universities. Hoos sees a great danger in this: “It’s not good when the best people go to work in the private sector and not in the public sector. And it’s also not good when the best European AI developers go to work en masse for American business.”

How can we keep talent in Europe when American companies offer sky-high salaries? Hoos would like to see Europe set up an equivalent of the Human Genome Project. That public project provided a counterweight to the private project to unravel the human genome. The information generated by the unraveling of the human genome was not to become private property, but had to remain in the public domain. Hoos: “We should do something like that in the AI field as well. Yes, it will cost money, but I am convinced that this investment will pay off twice over.”

What is still lacking in Europe, however, according to Hoos, is cooperation and critical mass. “In the Netherlands, the Dutch AI Coalition has been founded. Nice, but how much energy does it put into European cooperation? Very little. The same goes for Germany, where I come from. The result is too much needless competition. The fact is that European countries are too small on their own to compete with China and the US. We have to work together; we have no other choice.”

That is why the European AI research community created CLAIRE: to foster collaboration across Europe and in all areas of AI, from machine reasoning to machine learning. Hoos: “National governments need this kind of initiative as leverage to achieve Europe’s ambitious goals. Look at the success of CERN in Geneva. That’s the kind of impact and success we need to pursue with human-centered AI.”

What should Europe do from a business perspective?

Maarten de Rijke is University Professor of AI at the University of Amsterdam and director of ICAI, the Innovation Center for Artificial Intelligence, which was founded in 2018. He is also vice president of Personalization and Relevance at Ahold Delhaize. With one foot in the university world and the other in the corporate world, what does he think Europe should do in response to the US and China?

“First of all, Europe should keep control of data, algorithms, digital infrastructure and the rules of the game for algorithms. We need to stop sending all our data to a US-based cloud; we need to build our own infrastructure. Second, encourage public-private partnerships. That is exactly what we are aiming for with ICAI. Europe must push much harder on such collaborations, for example by setting up living labs for AI experiments. If we try nothing, we learn nothing. And third, set hard rules, for example for the storage and transmission of data.”

ICAI now consists of 16 labs spread across seven Dutch cities, with more on the way. In these labs, knowledge institutions work together with companies such as Ahold Delhaize, Qualcomm, Elsevier, ING and KPN, but also with government agencies such as the Dutch National Police. Together they work on AI research projects for five years, with PhD students spending one or two days a week in the R&D department of a company.

De Rijke is not afraid that universities will lose their independence through these collaborations with industry. On the contrary. “Universities must address social issues,” he says. “‘Great science with great impact’ is our motto. It is precisely because of this collaboration that university researchers are able to come up with new questions. For example: how do you make a certain application scalable? Or: How do you provide a safety guarantee for algorithms in a self-driving car?”

‘United in diversity’ is the official motto of the EU. Europe is diverse in cultures, languages and values, and also diverse in the flavors of its AI communities: different flavors of machine reasoning and of machine learning, the two main branches of AI. “All this diversity is precisely what Europe can use to its advantage,” concludes De Rijke. “Within the EU, we can make different products with different flavors for different cultures.”

What should Europe do from a societal perspective?

When an AI application is accepted across the EU, there is a good chance that it can be rolled out with confidence elsewhere in the world, says Catelijne Muller, president of ALLAI, an independent organization promoting responsible AI technology. She was a member of the EU High-Level Expert Group on AI that advised the European Commission in recent years. “My message, which is the same as that of the High-Level Expert Group, is that Europe needs to commit to responsible AI,” Muller says. “On the one hand, AI must comply with existing laws. On the other hand, the EU must create new laws where they are currently lacking for certain AI applications. In the EU High-Level Expert Group on AI, we have drawn up seven ethical guidelines that AI must comply with. In doing so, we started from universal human rights.”

Muller is convinced that if Europe takes the time to develop responsible AI, it will eventually be better off. “Then we will get better AI, which is safer, more robust and has minimal negative effects. You see in the US what happens when you don’t. Some US states have banned facial recognition in public spaces or banned surveillance robots on the street.”

Muller finds it short-sighted to claim that responsible AI development comes at the expense of the speed of innovation. “Regulation is not there to make life harder,” she says. “Not for companies either. In fact, regulation also ensures that companies can operate on a level playing field.”

Her own organization ALLAI, together with the Dutch Association of Insurers, has translated the seven European guidelines for ethical AI into seven guidelines for the use of AI by Dutch insurers. Muller: “Insurance companies have a lot of data with which they can estimate the probability that someone will suffer a loss. But how should they deal with that data? You can go further and further: taking driving behavior into account, mapping routes... What do we find acceptable and what not? That is what the guidelines we have jointly drawn up provide an answer to.”
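
A minimal, purely hypothetical Python sketch may help to make concrete the kind of question such guidelines answer. The data, feature names and allow-list below are invented for illustration and are not taken from any real insurer or from the joint ALLAI guidelines: the sketch estimates a claim probability from historical records, but only from features that an assumed ethical allow-list permits.

    # Hypothetical sketch: estimate a claim probability, but refuse features
    # that an (assumed) ethical allow-list does not permit.
    from collections import defaultdict

    # Assumed guideline outcome for this example: vehicle age may be used,
    # tracked GPS route profiles may not.
    ALLOWED_FEATURES = {"vehicle_age_band"}

    # Toy historical records: (features, had_claim)
    records = [
        ({"vehicle_age_band": "old", "gps_route_profile": "urban"}, True),
        ({"vehicle_age_band": "old", "gps_route_profile": "rural"}, False),
        ({"vehicle_age_band": "new", "gps_route_profile": "urban"}, False),
        ({"vehicle_age_band": "new", "gps_route_profile": "rural"}, False),
    ]

    def claim_probability(feature: str, value: str) -> float:
        """Empirical claim frequency for one feature value, guarded by the allow-list."""
        if feature not in ALLOWED_FEATURES:
            raise ValueError(f"feature '{feature}' is not permitted by the guidelines")
        counts, claims = defaultdict(int), defaultdict(int)
        for features, had_claim in records:
            counts[features[feature]] += 1
            claims[features[feature]] += int(had_claim)
        if counts[value] == 0:
            raise ValueError(f"no historical records with {feature} = {value}")
        return claims[value] / counts[value]

    print(claim_probability("vehicle_age_band", "old"))    # 0.5 on the toy data
    # claim_probability("gps_route_profile", "urban")      # would raise ValueError

The statistics here are deliberately trivial; the point is the allow-list. Deciding which data may feed the estimate at all is exactly the question the jointly drawn-up guidelines are meant to answer.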

The EU is currently working on the Digital Services Act, which should create a new legal framework for digital services and also curb the disproportionate power of big tech companies. In doing so, the EU is trying to navigate deftly between the interests of individuals, society and business. But for now, the EU seems more successful at playing referee than at being a world leader in AI research and technology. Europe has a lot of potential in the AI field and world-class research, but the real will for European AI cooperation is still missing.

--------------------------------------------------------------

European AI in figures

AI research:

Europe has 50% more AI researchers than the US, and twice as many as China.

Europe publishes 32% of all AI papers and has led the world on this measure for the past 20 years.


AI hubs:

(1. San Francisco, 2. New York, 3. Boston, 4. Beijing, 5. London, ... 50. Amsterdam)

US: 18 of the top 25

Asia: 4 of the top 25

Europe: 3 of the top 25


Private AI investments:

US: 46%

China: 36%

Europe: 8%

Rest: 10%



AI companies per million employed:

US: 10.5

Europe: 3.1

China: 0.3

--------------------------------------------------------------

AI in European business


The five largest Western tech giants are American, and all are also strong in AI: Amazon, Apple, Facebook, Google (Alphabet) and Microsoft. In two decades their business strategies have shifted from ‘digital first’ to ‘mobile first’ to ‘AI first’. China’s three largest tech companies, Alibaba, Baidu and Tencent, are all also betting heavily on AI. To reach its big goal of being the world leader in AI by 2030, the Chinese government works closely with the business community, quite unlike the situation in the US and Europe.

What are the big tech companies in Europe? Few people can name them. Recently, chip machine maker ASML from Veldhoven in the Netherlands became the most valuable European tech company, ahead of the German software company SAP. ASML uses plenty of AI, mainly to adjust its machines’ settings based on the data those machines generate.

Other European companies that use AI to the fullest include e-commerce companies such as Zalando and Booking, DeepL (a translation engine that works at least as well as Google Translate), the music service Spotify and meal delivery companies such as Deliveroo, Takeaway and HelloFresh. In addition, Europe has traditionally been strong in industrial robotics, in which AI is used extensively, for example for image recognition. This includes companies such as the Swiss-Swedish ABB and the originally German Kuka, which came into Chinese hands in 2016, an acquisition that Germany has since come to regret. More recent is Denmark’s Universal Robots, which has become the world leader in lightweight, flexible robotic arms. The German car industry is also betting heavily on AI, mainly to make cars more autonomous.

Outside of robotics, there is one European AI company that has quickly gained world fame: DeepMind, founded and based in London and made famous by its program AlphaGo, which managed to beat one of the world’s best human Go players in 2016. DeepMind is an AI company with the ultimate goal of mimicking human intelligence in a machine. It was founded in 2010 and sold to Google in 2014 for a sum estimated at more than 600 million euros.

The internet telephony service Skype is also European in origin. It was founded in 2003 by a Swede, a Dane and four Estonians, but sold to the American company Microsoft in 2011. The sale of Kuka, DeepMind and Skype shows how vulnerable the European tech sector is to takeovers from the US and China.

--------------------------------------------------------------

COVID-19 impact on AI


Frenchman Jean Monnet, one of the founding fathers of the European Union, once said, “Europe will be forged in crises, and will be the sum of the solutions adopted for those crises.” This is an oft-cited quote that also hits the nail on the head in the current COVID-19 crisis. Many technology analysts expect the current pandemic to accelerate the implementation of AI in the public and private sectors by five to ten years.

How will Europe deal with this?

What until recently met with a lot of resistance, such as working from home, home schooling, remote meetings, video conferencing and eHealth, suddenly turned out this year to be possible and sometimes even beneficial. The experiences gained, positive and negative, will lead to new and better AI products and services. Smart application of AI makes countless business processes more efficient. Some companies can move from physical stores to online stores and, thanks in part to machine translation, extend their reach from their own country to the entire world.

The COVID-19 crisis also further exposed Europe’s dependence on US tech companies. Within months of the outbreak of the pandemic, tech giant Amazon established itself in Italy, a country that hardly had a good e-commerce infrastructure of its own, unlike the Netherlands, where a company like Bol.com was already big.

But especially in a crisis, when we tend to want to do everything quickly, it is important to ensure responsible AI development. That is why the independent Dutch organization ALLAI has launched a new project, ‘Responsible AI & Corona’. ALLAI is setting up an observatory to monitor which AI applications are being accelerated (for example, AI that monitors compliance with the 1.5-meter distancing rule in public spaces). ALLAI is also developing a QuickScan to help organizations assess whether a particular AI application is technically, legally and ethically sound.

--------------------------------------------------------------

Hyperlinks

EU report ‘AI — A European perspective’: https://publications.jrc.ec.europa.eu/repository/bitstream/JRC113826/ai-flagship-report-online.pdf

European Parliament and AI: https://epthinktank.eu/2020/11/26/stoa-establishes-a-centre-of-dialogue-and-expertise-on-ai/

High-Level Expert Group on Artificial Intelligence: https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence

WRR Working Paper ‘Internationaal AI-beleid’ (International AI Policy, 2019): https://www.wrr.nl/publicaties/working-papers/2019/06/12/internationaal-ai-beleid

McKinsey report (October 2020) ‘How nine digital front-runners can lead on AI in Europe’: https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/how-nine-digital-front-runners-can-lead-on-ai-in-europe