Posts & Publications

This page lists notable posts and publications in reverse chronological order.

Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O'Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, Diane Coyle

February 2024

Recent AI progress has largely been driven by increases in the amount of computing power used to train new models. Governing compute could be an effective way to achieve AI policy goals, but could also introduce new societal risks. Our paper gives a broad overview of the properties that make compute a promising governance tool and discusses the benefits and risks of various compute governance proposals.

Link to arXiv

Lennart Heim & Konstantin Pilz

February 2024

When discussing compute governance measures for AI regulation, it is crucial to define the scope of any such measures precisely to prevent regulatory overreach and counterproductive side effects. In this post, we estimate what fraction of all chips in 2022 were high-end data center AI chips.

Konstantin Pilz, Lennart Heim, and Nicholas Brown

November 2023

Training advanced AI models requires large investments in computational resources, or compute. Yet, as hardware innovation reduces the price of compute and algorithmic advances make its use more efficient, the cost of training an AI model to a given performance falls over time — a concept we describe as increasing compute efficiency.

We find that while an access effect increases the number of actors who can train models to a given performance over time, a performance effect simultaneously increases the performance available to each actor. This potentially enables large compute investors to pioneer new capabilities, maintaining a performance advantage even as capabilities diffuse.

Since large compute investors tend to develop new capabilities first, it will be particularly important that they share information about their AI models, evaluate them for emerging risks, and, more generally, make responsible development and release decisions.

Further, as compute efficiency increases, governments will need to prepare for a world where dangerous AI capabilities are widely available — for instance, by developing defenses against harmful AI models or by actively intervening in the diffusion of particularly dangerous capabilities.
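
The interplay of the access and performance effects described above can be made concrete with a toy calculation. The sketch below is illustrative only: the doubling time and dollar figures are assumptions chosen for the example, not estimates from the paper.

```python
# Toy illustration of the access and performance effects
# (assumed numbers, not estimates from the paper).

# Assumption: effective compute per dollar doubles roughly every two years,
# combining hardware price-performance and algorithmic efficiency gains.
EFFICIENCY_DOUBLING_YEARS = 2.0


def cost_to_reach_fixed_performance(initial_cost_usd: float, years: float) -> float:
    """Access effect: the cost of training to a fixed performance level falls over time."""
    return initial_cost_usd / 2 ** (years / EFFICIENCY_DOUBLING_YEARS)


def effective_compute_multiplier(years: float) -> float:
    """Performance effect: a fixed budget buys more effective compute over time."""
    return 2 ** (years / EFFICIENCY_DOUBLING_YEARS)


if __name__ == "__main__":
    # A capability that once cost $100M to train becomes accessible to more actors...
    print(f"Cost after 6 years: ${cost_to_reach_fixed_performance(100e6, 6):,.0f}")
    # ...while a large investor spending the same budget reaches new capabilities.
    print(f"Effective compute multiplier after 6 years: {effective_compute_multiplier(6):.1f}x")
```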

Preprint on arXiv

Konstantin Pilz & Lennart Heim

July 2023

Data centers are the engines of today's digital economy and play an increasingly important role in AI model training and large-scale deployment. This research report covers the technical foundations, locations, market dynamics, and future prospects of data centers in the context of AI.