AI hardware: data was never the new oil
31.08.2020

Artificial Intelligence (AI) technology has long been predicted to have a massive impact on the economy, international relations and society at large. In recent years, these promises have started to be fulfilled, and these changes are only set to accelerate in the upcoming years and decades.

While images of killer robots and other sci-fi-horror stories dominate the fictional landscape, these are not the technologies most worrying for our immediate future. Current AI technology such as facial recognition is already deployed at large scale, especially in China, for the tracking and control of dissidents and the general population. It also seems clear that access to and control over advanced AI technologies will be crucial for economic power in the unfolding century.

A natural question to ask is what forms of leverage might exist to influence the access and use of AI technologies. In this report, I will explain why some axes of control (data and algorithms) are not viable routes to regulation, while others (in particular, the access to computer hardware) are promising candidates for strategic intervention, if so desired.

Anatomy of AI technology

Modern AI technology is built on the principle of “Deep Learning” (DL). DL is a method in which a “neural network” (NN) is “trained” on a collection of data to perform some task (such as sorting images by category, autocompleting text or playing a video game). There are three fundamental ingredients needed for a DL system:

  • Algorithms: The technical knowledge of how to build neural networks and run their training algorithms, along with software implementations of both.
  • Data: DL tends to require rather large and specialized collections of data to train on.
  • Compute: The training process of a DL system requires very large computing resources to run.
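The three ingredients can be seen in miniature in any DL program. The sketch below is an illustrative toy, not production code: a tiny two-layer network (the algorithm) is trained on a four-example dataset (the data) by a training loop (the compute); all sizes and hyperparameters are arbitrary choices for the example.

```python
import numpy as np

# 1) Data: a toy dataset (inputs x and labels y for the XOR function).
rng = np.random.default_rng(0)
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2) Algorithms: a two-layer neural network and its training rule
#    (gradient descent on the cross-entropy loss).
w1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
w2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ w1 + b1)              # hidden layer
    p = 1 / (1 + np.exp(-(h @ w2 + b2)))  # sigmoid output
    return h, p

def loss():
    p = np.clip(forward(x)[1], 1e-9, 1 - 1e-9)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

# 3) Compute: the training loop. At toy scale a CPU handles this in
#    milliseconds; at the scale of modern DL systems, the same kind of
#    loop demands a supercomputer.
loss_before = loss()
lr = 0.5
for step in range(5000):
    h, p = forward(x)
    grad_out = p - y                           # d(loss)/d(output logit)
    grad_h = (grad_out @ w2.T) * (1 - h ** 2)  # backpropagate to hidden layer
    w2 -= lr * h.T @ grad_out / len(x); b2 -= lr * grad_out.mean(0)
    w1 -= lr * x.T @ grad_h / len(x);   b1 -= lr * grad_h.mean(0)
loss_after = loss()
print(f"loss: {loss_before:.3f} -> {loss_after:.3f}")
```

The point of the sketch is the division of labor: the algorithm and the data fit in a screenful of text, while the cost lives entirely in the loop at the bottom, which is exactly the part that specialized hardware accelerates.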

The AI research world has a widespread culture of openness. Algorithms and other research advances are usually available to everyone shortly after their discovery. High quality, well tested software implementations of most state of the art methods are readily and freely available. Patents and corporate secrecy play a much smaller role in AI research than in other technical fields, and the software the public has access to is rarely significantly inferior to what the cutting edge academic and corporate labs have. As such, algorithms can be considered close to “free”.

Data was never the new oil

In recent years, data has been singled out as the bottleneck for most applications of DL. This has led to the common saying that “data is the new oil”, meaning “data is the new valuable economic resource everyone wants to have”.

But this analogy is flawed: not all data is equal. A dataset of images of faces might be valuable for developing a facial recognition AI, but is of much less use for developing a document scanning AI. So, unlike oil, data is not fungible. One can’t exchange one chunk of data for any other; one needs exactly the right kind of data, in sufficient quantities, for the task one is tackling.

One of the most important developments in the AI world over the last few years has been that the reliance on data has become less acute as algorithms become more flexible and larger datasets become more readily available. It is now quite easy to acquire "big enough" datasets for most standard tasks (perhaps ~70% of the most common tasks one might want to solve with AI), and modern algorithms allow more performance to be extracted from the same amount of data, with some extra work. This is in large part thanks to recent breakthrough progress in "Unsupervised Learning" methods, which can learn useful tasks from large amounts of non-specific, unlabeled data. OpenAI's GPT-3 model is a recent famous example of this. GPT-3 performs at (or close to) the state of the art on many tasks it was never explicitly trained on. Instead, it was simply trained on very large collections of text scraped from the internet.
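The core idea of this kind of unsupervised learning is next-token prediction: the model is trained on raw text alone, with no task-specific labels. A toy caricature of the objective, using simple word-pair counts on a made-up corpus (GPT-3 replaces the counting with a vast neural network and the twelve-word corpus with much of the internet):

```python
from collections import Counter, defaultdict

# Raw, unlabeled text is the entire training signal.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count, for each word, which words tend to follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))  # "sat" is always followed by "on" in this corpus
```

Even this trivial model "learns" something about its text without ever being told what task to perform; scaling the same objective up is what lets GPT-3 pick up tasks it was never explicitly trained on.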

These developments point towards a different candidate as the new oil, something valuable and fungible: Computing hardware.

Computing power is the new bottleneck

DL training places very special demands on the hardware it runs on. Any modern computer is driven primarily by a Central Processing Unit (CPU). These chips are extremely flexible in their capabilities, but lack the raw power of more specialized chips. Modern DL training is so computationally heavy that it is no longer feasible to run on anything but very modern, specialized AI chips. The most important class of such AI chips are Graphics Processing Units (GPUs), which were originally designed for graphics applications but turned out to have the exact properties DL applications need. Other, even more specialized chips, from companies such as Google, Graphcore (UK), Cambricon (China) and others, exist as well and are likely to become even more important in the future.

CPUs, and even specialized chips that are a few years old, are in general unsuitable for cutting edge DL work. The most cutting edge DL systems are consistently built on top of the most cutting edge hardware. The supply of such cutting edge chips is limited and expensive; building a high end DL supercomputer costs millions of dollars.
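To give a sense of the scale involved, here is a back-of-the-envelope estimate of the compute needed to train a GPT-3-sized model. It uses the common approximation of roughly 6 floating point operations per parameter per training token; the GPT-3 figures (175 billion parameters, ~300 billion training tokens) and the assumed sustained throughput of 30 TFLOP/s per accelerator are illustrative assumptions, not exact numbers.

```python
# Back-of-the-envelope training cost for a GPT-3-sized model.
params = 175e9   # model parameters (GPT-3 scale)
tokens = 300e9   # training tokens (approximate)
total_flops = 6 * params * tokens  # common approximation: ~6 FLOPs/param/token

# Assume one accelerator sustains 30 TFLOP/s, optimistic for 2020-era GPUs.
sustained_flops_per_gpu = 30e12
gpu_seconds = total_flops / sustained_flops_per_gpu
gpu_years = gpu_seconds / (3600 * 24 * 365)
print(f"~{total_flops:.2e} FLOPs, roughly {gpu_years:.0f} GPU-years")
```

The result lands in the hundreds of GPU-years, which is why such training runs only happen on dedicated clusters of thousands of cutting edge accelerators, and why the hardware, not the code, is the scarce resource.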

The life cycle of a computing chip

Simplifying the many steps involved in producing a cutting edge chip, there are three main industries involved:

  • Chip design
  • Semiconductor Manufacturing Equipment (SME) production
  • Chip fabrication (“fabs”)

A very small number of large companies dominates all phases of high end chip production, with the vast majority of them located in Western countries, along with Japan, South Korea and Taiwan.

Chip design, being a purely software-driven process, can theoretically be performed anywhere, but the field is dominated by the United States. Major players in this space include NVIDIA, Intel, AMD and Google.

SME production is the most concentrated of these steps, with only a single company (the Dutch company ASML) currently capable of providing the most cutting edge equipment required to produce the highest quality chips. Other SME providers, located in the USA and Japan, hold large market shares as well but lack the capacity to produce the highest end equipment.

Chip fabrication is also a very centralized business, with only three corporations having the capacity to manufacture cutting edge chips. By far the most dominant player in this field is the Taiwanese company TSMC, which is currently the most capable of delivering the most powerful chips (using ASML equipment). The two other noteworthy players in this field are Intel (USA) and Samsung (South Korea), though they currently lag behind TSMC in technical capacity.

Strategic Implications

Many aspects of the information technology sector are by their very nature hard or impossible to cohesively assess and regulate. Exactly which CPUs one uses to power an internet startup has, so far, been of very little importance. The computational demands of cutting edge DL systems are changing this dynamic.

More and more, whoever has access to the best AI hardware will be able to build the best AI software. As such, access to cutting edge AI hardware is of interest to policy makers and strategic analysts.

The IP embedded in the design of cutting edge chips will always be difficult to protect from espionage or reverse engineering. Designing high end chips requires great skill and effort, but is in itself hard to track and regulate, and therefore presents an unfavorable target for intervention.

SME production and chip fabrication on the other hand are extremely capital intensive, centralized and almost impossible to hide on a large scale, making them a favorable target for policy intervention.

Summary and recommendations

The future of competitive AI research and development will crucially depend on access to high end AI accelerator chips. Ensuring access to an adequate supply of such chips is of crucial importance to any nation seeking economic relevance in the unfolding AI revolution.

Currently, the entire world’s capacity to construct high end chips is concentrated in a very small number of international corporations located in Europe-friendly nations. Despite what one may think, China and other countries without such corporations have extremely limited capabilities to produce high end chips and, despite intense efforts to develop such capabilities domestically, are unlikely to achieve them soon.

The EU must act now if it wishes to build the capacities needed for chip sovereignty. Currently, all chips used in Europe are designed in the US and built in Taiwan, South Korea or the United States. Whether this is a desirable state of affairs is left to the reader to decide.

If the EU wishes to seriously pursue chip sovereignty and ensure its competitiveness in the developing AI economy, a massive investment and development effort is needed as soon as possible, as developing these capacities could take years or decades. The EU is in a good position to build such capacity: it possesses a highly educated workforce, a large market for its products, and hosts the world’s leading SME manufacturer (ASML).

A heavy investment in designing open hardware for AI accelerator chips, together with an effort to develop the production capabilities needed for them, seems the most promising route for the EU. Developing these designs in the open (similar to the RISC-V Foundation) would allow the EU to leapfrog forward by drawing on the input of many stakeholders, and hopefully to catch up, within a foreseeable timeframe, with private efforts that have decades of head start. As discussed in this report, the openness of such designs would not be a significant asset to hostile actors, as these would be unlikely to have access to the necessary production capacity, provided the EU and its allies work to control access to their leading SME and fabrication providers.

SME and fabrication capacity presents itself as an unusually tractable and controllable target for policy. If one wanted to limit the use of AI for purposes counter to European values and strategic interests, restricting these steps in the hardware production cycle would exert enormous pressure on any hostile nation or actor, effectively cutting them off from high end AI capacity, potentially for decades.
