Sunday, February 21, 2021

AI made in Europe

How should Europe respond to the lightning-fast AI developments in China and the US? Three AI experts give answers from the academic, business and societal perspectives. 

This article was published in the Dutch monthly technology magazine De Ingenieur in February 2021. The magazine kindly gave permission to publish a freely available English translation of the original article.


What should Europe do from an academic perspective?  

The EU countries may have jumped for joy at the election of Joe Biden as the new president of the US, but the expectation is that the EU will be much more dependent on itself than before, also under Biden’s presidency. This also applies in the field of artificial intelligence (AI). The US will strengthen its ties with the EU again more than it did under Donald Trump, but mainly because of a shared interest in the geopolitical contest with China.

China wants to be the world leader in the field of artificial intelligence by 2030. To do so, it has drawn up a concrete roadmap: first catch up, become world leader in a few AI areas in five years, and lead in the entire AI field in ten years. The US, the current world leader in many AI fields, has turned more inward in recent years. In the field of AI too, the rule in the US is: its own industry first.

Holger Hoos is professor of machine learning at Leiden University and chairman of the board of CLAIRE, an organization founded by the European AI community. CLAIRE’s goal: to make Europe a world leader in human-centered, reliable AI. How does he think the EU should respond to AI developments in China and the US?

Hoos: “In the US, industry is leading, in China the government. The best niche for Europe is to focus on human-centered AI: ‘AI for Good, AI for All’. Let’s develop AI that is in line with European values and benefits all citizens. For example, think of AI that contributes to the 17 Sustainable Development Goals set by the UN.”

Hoos emphasizes that the EU can only become a world leader in human-centered AI if it also shows leadership in AI technology development and basic research. Hoos: “It is wrong to think that we can buy technology from the US or China and then roll it out in Europe. You will only get human-centered AI if you build it from the ground up yourself according to the European values that we consider important. And we will also have to do the fundamental research ourselves and not leave it to the big American tech companies.”

In recent years, the EU has shown itself decisive when it comes to regulating digital technology, for example with the adoption of the GDPR law in 2018 (for protecting the data and privacy of European citizens), and with the imposition of fines on tech companies that do not comply with European rules. Hoos agrees that the EU should also lead the world in AI regulation, but issues a warning: “You cannot be a world leader in regulation without also being a world leader in AI technology and research.”

Although the EU itself does not want to talk about a global AI race between Europe, the US and China, the fact is that there is far more demand for AI expertise and talent worldwide than supply. Companies are eagerly luring AI talent away from universities. Hoos sees a great danger in this: “It’s not good when the best people go to work in the private sector and not in the public sector. And it’s also not good when the best European AI developers go to work en masse for American business.”

How can we keep talent in Europe when American companies offer sky-high salaries? Hoos would like to see Europe set up an equivalent of the Human Genome Project. That public project provided a counterweight to the private project to unravel the human genome. The information generated by the unraveling of the human genome was not to become private property, but had to remain in the public domain. Hoos: “We should do something like that in the AI field as well. Yes, it will cost money, but I am convinced that this investment will pay off twice over.”

What is still lacking in Europe, however, according to Hoos, is cooperation and critical mass. “In the Netherlands the Dutch AI Coalition has been founded. Nice, but how much energy are they putting into European cooperation? Very little. The same goes for Germany, where I come from. So you get too much annoying competition. The fact is that European countries are too small on their own to compete with China and the US. We have to work together, we have no other choice.”

That’s why the European AI research community created CLAIRE, for collaboration across Europe and in all areas of AI, from machine reasoning to machine learning. Hoos: “National governments need this kind of initiative as leverage to achieve Europe’s ambitious goals. Look at the success of CERN in Geneva. That’s the kind of impact and success we need to pursue with human-centered AI.”

What should Europe do from a business perspective?

Maarten de Rijke is University Professor of AI at the University of Amsterdam and director of ICAI, the Innovation Center for Artificial Intelligence, which was founded in 2018. He is also vice president of Personalization and Relevance at Ahold Delhaize. As someone who has one foot in the university world and the other in the corporate world, what does he think Europe should do in response to the US and China?

“First of all, Europe should keep control of data, algorithms, digital infrastructure and the rules of the game for algorithms. We need to stop sending all our data to a US-based cloud. We need to build our own infrastructure. Second, encourage public-private partnerships. That is exactly what we are aiming for with ICAI. Europe must pull much harder on such collaborations. For example, set up living labs for AI experiments. If we try nothing, we learn nothing. And third, set hard rules, for example for the storage and transmission of data.”

ICAI now consists of 16 labs spread across seven Dutch cities, with more on the way. In these labs, knowledge institutions work together with companies such as Ahold Delhaize, Qualcomm, Elsevier, ING and KPN, but also with a government agency such as the Dutch Police. Together they work for five years on AI research projects. PhD students work one or two days a week in the R&D department of a company.

De Rijke is not afraid that universities will lose their independence through these collaborations with industry. On the contrary. “Universities must address social issues,” he says. “‘Great science with great impact’ is our motto. It is precisely because of this collaboration that university researchers are able to come up with new questions. For example: how do you make a certain application scalable? Or: How do you provide a safety guarantee for algorithms in a self-driving car?”

‘United in diversity’ is the official motto of the EU. Europe is diverse in cultures, languages and values. And also diverse in the flavors of its AI communities: different flavors of machine reasoning and of machine learning, the two main branches of AI. “All this diversity is precisely what Europe can use to its advantage,” concludes De Rijke. “Within the EU, we can make different products with different flavors for different cultures.”

What should Europe do from a societal perspective? 

When an AI application is accepted across the EU, there is a good chance that the application can be rolled out with confidence elsewhere in the world, agrees Catelijne Muller, president of ALLAI, an independent organization promoting responsible AI technology. She was a member of the EU High Level Expert Group on AI that advised the European Commission in recent years. “My message — which is the same as that of the High Level Expert Group — is that Europe needs to commit to responsible AI,” Muller says. “On the one hand, AI must comply with existing laws. On the other hand, the EU must create new laws if they are currently lacking for certain AI applications. In the EU High Level Expert Group on AI, we have drawn up seven ethical guidelines that AI must comply with. In doing so, we started from universal human rights.”

Muller is convinced that if Europe takes the time to develop responsible AI, it will ultimately be better off. “Then we will get better AI, which is safer, more robust and has minimal negative effects. You see in the US what happens when you don’t. Some US states have banned facial recognition in public spaces or banned surveillance robots on the street.”

Muller finds it short-sighted to claim that responsible AI development comes at the expense of the speed of innovation. “Regulation is not there to make life harder,” she says. “Not even for companies. In fact, regulation also ensures that companies can operate on a level playing field.”

Her own organization ALLAI, together with the Dutch Association of Insurers, has translated the seven European guidelines for ethical AI application into seven guidelines for the use of AI by Dutch insurers. Muller: “Insurance companies have a lot of data with which they can estimate the probability that someone will suffer damage. But how should they deal with that data? You can go further and further: taking driving behavior into account, mapping routes...What do we find acceptable and what not? That’s what the guidelines we’ve jointly drawn up provide an answer to.”

The EU is currently working on a Digital Services Act, which should create a new legal framework for digital services and also curb the disproportionate power of big tech companies. In doing so, the EU is trying to deftly navigate between the interests of individuals, society and business. But for now, the EU seems more successful at playing referee than at being a world leader in AI research and technology. Europe has a lot of potential in the AI field, and it has world-class research, but the real will for European AI cooperation is still missing.

--------------------------------------------------------------

European AI in figures

AI research:

Europe has 50% more AI researchers than the US, and twice as many as China.

Europe publishes 32% of all AI papers. For the past 20 years, Europe has led the world.


AI hubs:

(1. San Francisco. 2. New York. 3 Boston. 4. Beijing. 5. London...50. Amsterdam.)

US: 18 of the top 25

Asia: 4 of the top 25

Europe: 3 of the top 25


Private AI investments:

US: 46%

China: 36%

Europe: 8%

Rest: 10%



AI companies per million employed:

US: 10.5

Europe: 3.1

China: 0.3

--------------------------------------------------------------

AI in European business


The five largest Western tech giants are American, and all are also strong in AI: Amazon, Apple, Facebook, Google (Alphabet) and Microsoft. Their business strategies have shifted in two decades from ‘digital first’ to ‘mobile first’ to ‘AI first’. China’s three largest tech companies, Alibaba, Baidu and Tencent, are all also betting heavily on AI. To achieve China’s big goal of being the world leader in AI by 2030, the government is working closely with the business community, quite unlike the approach in the US and Europe.

What are the big tech companies in Europe? Few people can name them. Recently, chip machine manufacturer ASML from Veldhoven in the Netherlands became the most valuable European tech company, ahead of German software company SAP. ASML makes extensive use of AI, mainly to adjust its machines’ settings based on the data the chip machines generate.

Other European companies that are using AI to the fullest include e-commerce companies such as Zalando and Booking, DeepL (a translation engine that works at least as well as Google Translate), music service Spotify and meal delivery companies such as Deliveroo, Takeaway and HelloFresh. In addition, Europe has traditionally been strong in industrial robotics, in which AI is used extensively, for example for image recognition. This includes companies such as Swiss-Swedish ABB, and the originally German Kuka, which came into Chinese hands in 2016 — an acquisition that Germany has since regretted. More recent is Denmark’s Universal Robots, which has become the world leader in lightweight, flexible robotic arms. The German car industry is also betting heavily on AI, mainly to make cars more autonomous.

Outside of robotics, there is one European AI company that has quickly gained world fame: DeepMind, founded and based in London, made famous by its AlphaGo program, which managed to beat one of the world’s best human Go players in 2016. DeepMind is an AI company with the ultimate goal of mimicking human intelligence in a machine. It was founded in 2010 and sold to Google in 2014 for an estimated sum of more than 600 million euros.

Telecom giant Skype is also European in origin. It was founded in 2003 by a Swede, a Dane and four Estonians, but sold to American Microsoft in 2011. The sale of Kuka, DeepMind and Skype shows how vulnerable the European tech sector is to takeovers from the US and China.

--------------------------------------------------------------

COVID-19 impact on AI


Frenchman Jean Monnet, one of the founding fathers of the European Union, once said, “Europe will be forged in crises, and will be the sum of the solutions adopted for those crises.” This is an oft-cited quote that also hits the nail on the head in the current COVID-19 crisis. Many technology analysts expect the current pandemic to accelerate the implementation of AI in the public and private sectors by five to ten years.

How will Europe deal with this?

What was until recently met with a lot of resistance — working from home, home schooling, tele-meeting, tele-conferencing, eHealth — this year suddenly turned out to be possible and sometimes even beneficial. The experiences gained, positive and negative, will lead to new and better AI products and services. Smart application of AI makes countless business processes more efficient. Some companies can move from physical stores to online stores and, thanks in part to machine translation, extend their reach from their own country to the entire world.

The COVID-19 crisis also further exposed Europe’s dependence on U.S. tech companies. Within months of the outbreak of the pandemic, tech giant Amazon established itself in Italy. That country hardly had a good e-commerce infrastructure, unlike the Netherlands where, for example, a company like Bol.com was already big.

But especially in a crisis, when we tend to want to do everything quickly, it is important to ensure responsible AI development. That is why the independent Dutch organization ALLAI has launched a new project: ‘Responsible AI & Corona’. ALLAI is setting up an observatory to monitor which AI applications are being accelerated (e.g. AI monitoring 1.5-meter distancing in public spaces). ALLAI is also developing a QuickScan to help organizations assess whether a particular AI application is technically, legally and ethically sound.

--------------------------------------------------------------

Hyperlinks

EU-report ‘AI — A European perspective’: https://publications.jrc.ec.europa.eu/repository/bitstream/JRC113826/ai-flagship-report-online.pdf

European Parliament and AI: https://epthinktank.eu/2020/11/26/stoa-establishes-a-centre-of-dialogue-and-expertise-on-ai/

High-Level Expert Group on Artificial Intelligence: https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence

WRR Working Paper ‘Internationaal AI-beleid’ (2019): https://www.wrr.nl/publicaties/working-papers/2019/06/12/internationaal-ai-beleid

McKinsey-report (October 2020): How nine digital frontrunners can lead on AI in Europe: https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/how-nine-digital-front-runners-can-lead-on-ai-in-europe





Thursday, February 11, 2021

Ada Lovelace - 19th-century computer and AI pioneer

Today is the International Day of Women and Girls in Science. Especially for this day, I am publishing a short excerpt from my book "Kunstmatige intelligentie" (Artificial Intelligence), a book about what everyone should know about AI, which will be published in November of this year. 

In the chapter on the history of AI, I write, among other things, about Lady Ada Lovelace (1815-1852). Ada Lovelace is regarded as the first person to publish a computer program and one of the first to take seriously the question of whether machines can think. 



Ada Lovelace has a remarkable life story that is worth telling. She was born in 1815 as the daughter of Annabella Milbanke and the famous poet Lord Byron. Her father was a romantic through and through who was fiercely opposed to the emerging automatic looms. Her mother, by contrast, encouraged Ada to train in mathematics, hoping that she would not develop the same turbulent emotional life as her father. Ada got something from both parents. She would become interested precisely in the machines her father abhorred. On the other hand, she certainly had a poetic streak in her scientific work.

The marriage between Ada’s father and mother fell apart in the very year of her birth, and Ada would never see her father again. A year after Ada’s birth, in 1816, Lord Byron spent some time with the poet Percy Shelley and his future wife Mary at Lake Geneva. There Mary Shelley wrote her famous book Frankenstein, about the conflict between man and his artificial creation, between the scientist Victor Frankenstein and the living monster he would create from dead matter. The book also raises a question that, coincidentally, would come to fascinate Ada as well: can humans build a machine that can think?

In 1833 Ada Lovelace met the mathematician and inventor Charles Babbage and became fascinated by his work on machines that could perform cognitive tasks. The two began to collaborate. A year later Babbage designed the Analytical Engine, a device now regarded as the first mechanical computer. Its technical realization proved too difficult, however, and Babbage, partly through lack of money, never managed to build his wonder machine. About ten years later, in 1843, Ada Lovelace described how the Analytical Engine could compute a particular kind of numbers (Bernoulli numbers). This work is now considered the first computer program, and Ada Lovelace the first computer programmer, even though the idea of programming did not exist in the days of Babbage and Lovelace. 

The first published computer algorithm - by Ada Lovelace

Long before the first digital computer saw the light of day in the 20th century, Ada Lovelace was already writing in her notes about the concept of a general information-processing machine that could process not only numbers but all kinds of information that could be converted into symbols, from words to music. Inspired by the Analytical Engine, she pondered the question of whether machines could come up with anything that humans had not put into them beforehand. Ada thought not. She wrote in her notes: “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.”
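To give a feel for the kind of computation Lovelace's program performed, here is a short modern sketch that computes Bernoulli numbers. This is purely illustrative: it uses the standard recurrence over binomial coefficients, not the exact procedure from Lovelace's famous Note G.

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return the Bernoulli numbers B_0 .. B_n as exact fractions,
    using the recurrence sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1."""
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(Fraction(-s, m + 1))
    return B

print(bernoulli(8))
# [1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30]
```

Lovelace's program targeted a mechanical engine that was never built; the recurrence above captures the same arithmetic in a few lines on a machine she could only imagine.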

With her philosophizing about the possibilities of information-processing machines, Ada Lovelace was a century ahead of her time. In the 20th century, the programming language Ada was named after her. 

Incidentally, every year on the second Tuesday of October is International Ada Lovelace Day, a celebration of the achievements of women in science, technology and mathematics.


Monday, February 8, 2021

AI made in Europe

How should Europe respond to the lightning-fast AI developments in China and the US? Three AI experts give answers from the academic, business and societal perspectives: Prof. Holger Hoos, Prof. Maarten de Rijke and Catelijne Muller. 



This article was published in the technology magazine De Ingenieur of February 2021.

The EU countries may have jumped for joy at the election of Joe Biden as the new president of the US, but the expectation is that the EU will be much more dependent on itself than before, also under Biden’s presidency. This also applies in the field of artificial intelligence (AI). The US will strengthen its ties with the EU again more than it did under Donald Trump, but mainly because of a shared interest in the geopolitical contest with China.

China wants to be the world leader in the field of artificial intelligence by 2030. To do so, it has drawn up a concrete roadmap: first catch up, become world leader in a few AI areas in five years, and lead in the entire AI field in ten years. The US, the current world leader in many AI fields, has turned more inward in recent years. In the field of AI too, the rule in the US is: its own industry first.

Holger Hoos is professor of machine learning at Leiden University and chairman of the board of CLAIRE, an organization founded by the European AI community. CLAIRE’s goal: to make Europe a world leader in human-centered, reliable AI. How does he think the EU should respond to the AI developments in China and the US?

Hoos: “In the US, industry is leading, in China the government. The best niche for Europe is to focus on human-centered AI: ‘AI for Good, AI for All’. Let’s develop AI that is in line with European values and benefits all citizens. For example, think of AI that contributes to the 17 Sustainable Development Goals set by the UN.”

Hoos emphasizes that the EU can only become a world leader in human-centered AI if it also shows leadership in AI technology development and fundamental research. Hoos: “It is wrong to think that we can buy technology from the US or China and then roll it out in Europe. You will only get human-centered AI if you build it from the ground up yourself according to the European values that we consider important. And we will also have to do the fundamental research ourselves and not leave it to the big American tech companies.”

Read the whole article in De Ingenieur 

Friday, February 5, 2021

The role of humans in the digital society


Together with professor Virginia Dignum I wrote a chapter on the role of humans in the digital society for the book "Faster than the Future", published by the Digital Future Society in Barcelona.


Here is the introduction of the chapter:

From the 20th-century inventions of the computer and the internet, a whole new set of digital technologies has gradually evolved: algorithms, big data, artificial intelligence, robotics, biometrics, virtual and augmented reality, and mobile networks like 5G, to name a number of important ones.

Whereas humanity created these digital technologies, in turn these technologies shape society, and even what it means to be human. Digital technologies impact core human values like autonomy, control, safety, security, privacy, dignity, justice and power structures. Technological development is like an evolutionary process in which humans and technology evolve in a symbiotic way creating both new opportunities and new risks. First we create technology, then it recreates us.

The central question in this chapter is how to shape digitisation so that it enables the society that its citizens want. In order to answer this question, we first need to think about the ways in which people are involved. An open, inclusive approach where everybody is welcome to participate is needed to design technology so that shared human values are built into the technology. We need to take into account that people have different cultural, social and economic backgrounds, different levels of involvement and different interests. That causes technology to have different effects on different groups. Consequently, different groups have different needs and views about the role of digital technologies in society. Engineers are those who ultimately will implement technology to meet societal principles and human values, but it is policy makers, regulators and society in general who can set and enforce the purpose.

Each individual and socio-cultural environment prioritises different moral and societal values. That is, which society citizens want should be decided in a democratic process with at its core the individual’s right to self-determination. The implementation of digital technologies needs therefore to consider the socio-political environment it is inserted into. However, a digital technology like artificial intelligence (AI) might impact self-determination by taking decisions that people used to take themselves, which in turn will impact the democratic processes and ultimately society itself.

Dealing with these issues requires a human-centred approach to digital technologies. This means that the leading requirements for digital technology should be: empowering humans, protecting humans and facilitating engagement for social transformation. Incentives for ensuring these functions can be either regulatory or market-based.

A human-centred approach also leads to the question of human control in a society in which machines operate more and more autonomously. How much control should humans have over digital systems? In many applications the concept of ‘human plus machine’ is a more fruitful concept than the concept of ‘human versus machine’.

The whole chapter can be read in the book "Faster than the Future", which can be downloaded for free here (DFS Book).
 








Monday, February 1, 2021

The writing machine: GPT-3

GPT-3 is a text generator that writes human-like texts in all kinds of genres. In the Listening To The Future Podcast I talked with Jarno Duursma about all the ins and outs of this language machine, from ethics to technology.



Listen to the podcast here.

Earlier I wrote an extensive article about GPT-3 for NRC Handelsblad.