UK public-sector AI compute

The UK wants 420 AI-exaflops by 2030. This map shows what exists, what is planned, and where the gaps still are.

A policy-oriented view of the public compute infrastructure behind UK AI research, inference, and strategic capability. It combines flagship AIRR systems, backbone HPC sites, specialist data platforms, regional systems, and mission-specific infrastructure into one navigable picture.

Why this matters

AI capacity is no longer just a technical resource. It shapes who can train models, who can serve them domestically, and how resilient public services are when cloud access, energy cost, or geopolitics tighten.

Published AI-exaflops
22
21 Isambard-AI + 1 Dawn
Target by 2030
420
Government roadmap ambition
Mapped facilities
22
17 operational · 3 upgrading · 2 planned
Flagship AIRR core
2
Isambard-AI and Dawn

Where the infrastructure lives

Pan and zoom the UK map, then click a dot to open the facility panel. Scroll continues over the map; wheel zoom is disabled to keep the page easy to navigate.

Filter by category: flagship · backbone · specialist · regional · mission

Capacity trajectory

Quantified progress toward 420 AI-exaflops

22 of 420 AI-exaflops · 2 facilities quantified

The line includes only facilities with a published AI-exaflops figure. CPU-first systems and sites without disclosed AI-exaflops remain on the map, but not in this chart, to avoid presenting false comparability.
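The inclusion rule can be sketched in a few lines. This is an illustrative data structure, not the explorer's actual code; the facility names and figures follow the disclosures cited on this page.

```python
# Hedged sketch of the chart's inclusion rule: only facilities with a
# published AI-exaflops figure contribute to the quantified total.
facilities = [
    {"name": "Isambard-AI", "ai_exaflops": 21.0},
    {"name": "Dawn", "ai_exaflops": 1.0},
    {"name": "JASMIN", "ai_exaflops": None},  # no published AI-exaflops figure
]

# CPU-first systems and undisclosed sites stay on the map but not in the chart.
quantified = [f for f in facilities if f["ai_exaflops"] is not None]
total = sum(f["ai_exaflops"] for f in quantified)

print(f"{total:g} of 420 AI-exaflops · {len(quantified)} facilities quantified")
# → 22 of 420 AI-exaflops · 2 facilities quantified
```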

International context

Where the UK sits

These bars use explicit source scopes: the US and China rows are Epoch AI frontier data-centre estimates converted from H100-equivalent hardware, while the UK row is published public AIRR capacity.

  • United States: Epoch-tracked frontier data centres. 4.146m current H100-equivalents × 3,958 TFLOPS FP8 per H100 SXM ÷ 1e6 ≈ 16,410 AI-exaflops. This is a frontier data-centre hardware estimate, dominated by hyperscale and frontier-lab infrastructure, not a public-access research-compute allocation. Not directly comparable with UK public AIRR access; useful as an order-of-magnitude strategic capacity contrast. (Source: Epoch AI Frontier Data Centers dataset; NVIDIA H100 specifications)
  • China: Epoch-tracked frontier data centres. 132,895 current H100-equivalents × 3,958 TFLOPS FP8 per H100 SXM ÷ 1e6 ≈ 526 AI-exaflops. Epoch currently tracks only Alibaba Zhangbei in China in the local dataset, so this should not be read as a comprehensive national total; it is likely incomplete as a national estimate. (Source: Epoch AI Frontier Data Centers dataset; NVIDIA H100 specifications)
  • United Kingdom 2030 target: Government target, not current capacity. The UK Compute Roadmap's 420 AI-exaflops ambition for AI research and innovation compute, included to show the scale of the stated ambition against current quantified public capacity and frontier data-centre estimates elsewhere. A future target, not deployed capacity. (Source: UK Compute Roadmap)
  • United Kingdom: Published public AI research compute. Isambard-AI (~21 AI-exaflops) + Dawn (~1 AI-exaflop), the currently quantified public AI compute core represented in AIRR disclosures. Other UK systems appear on the map but are not assigned AI-exaflops without published figures. Public-access research compute, not total private or hyperscale UK AI infrastructure. (Source: UK Compute Roadmap; UKRI/AIRR facility disclosures)

Methodology caution: the US and China bars are peak-equivalent hardware estimates for frontier data centres, while the UK current bar is published public research-compute capacity. They are shown together to convey order of magnitude, not as like-for-like national public access totals.
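The conversion behind the US and China bars is simple arithmetic, sketched below. The H100-equivalent counts are the Epoch AI figures quoted above; the 3,958 TFLOPS constant is NVIDIA's published peak FP8 figure for the H100 SXM.

```python
# Hedged sketch of the bar-value arithmetic described above.
# NVIDIA's published peak FP8 figure for the H100 SXM, in TFLOPS.
H100_FP8_TFLOPS = 3_958

def h100_eq_to_ai_exaflops(h100_equivalents: float) -> float:
    """Convert an H100-equivalent count to peak FP8 AI-exaflops."""
    return h100_equivalents * H100_FP8_TFLOPS / 1e6  # TFLOPS -> exaFLOPS

us = h100_eq_to_ai_exaflops(4_146_000)   # Epoch US frontier estimate
china = h100_eq_to_ai_exaflops(132_895)  # Epoch-tracked China sites only

print(round(us))     # → 16410
print(round(china))  # → 526
```

The caution above applies to these numbers too: they are peak-equivalent hardware estimates, not deployed public-access capacity.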

Translating the headline number

What 420 AI-exaflops could mean in practice

The roadmap's headline target only matters if it turns into usable outcomes. These examples translate abstract compute into concrete public capability.

Forecasts trained once, then used everywhere

Dawn already supports IceNet, a sea-ice forecasting model that can run on a laptop after training. At 420 AI-exaflops, the UK could train many more national forecasting models for weather, oceans, and biodiversity instead of treating each as a one-off.

Source: UK AI compute mapping PDF, section 6.4

Medical imaging models can move from pilot to service

Cambridge researchers use Dawn to train a kidney-cancer screening model. More sovereign AI compute means more room to iterate, validate, and serve models across multiple NHS pathways rather than only a small number of pilots.

Source: UK AI compute mapping PDF, section 6.4

Petabyte-scale environmental policy becomes routine

JASMIN combines storage, high-memory nodes, and GPU partitions to analyse huge environmental datasets. More national AI capacity would make weekly risk modelling for flood, heat, crop, and ecosystem policy easier to sustain.

Source: UK AI compute mapping PDF, sections 2.6 and 6.4

Public services can keep inference on shore

The AIRR already gives the UK a sovereign inference base for public research and start-ups. Reaching 420 AI-exaflops would make it more realistic to run multiple major departmental and public-service models domestically instead of leaning on foreign cloud capacity.

Source: UK AI compute mapping PDF, introduction and conclusion

Methodology

How the map was assembled


Facilities are included when they are publicly funded, publicly accessible, or strategically important to the UK public compute ecosystem. The primary source is the local research PDF and the operator links cited inside it.

The map uses institution-level coordinates rather than precise machine room locations. Where multiple facilities sit at the same host site, the explorer clusters them into a single point and cycles through them on repeated selection.
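The clustering and cycling behaviour can be sketched as follows. The facility names and coordinates here are illustrative placeholders, not a claim about which real facilities share a host site.

```python
# Hedged sketch of the clustering rule described above: facilities sharing
# institution-level coordinates collapse into a single map point, and
# repeated selection cycles through the cluster's members.
from collections import defaultdict

facilities = [
    {"name": "System A", "coords": (51.4545, -2.5879)},  # shared host site
    {"name": "System B", "coords": (51.4545, -2.5879)},
    {"name": "System C", "coords": (52.2053, 0.1218)},
]

clusters: dict[tuple, list[str]] = defaultdict(list)
for f in facilities:
    clusters[f["coords"]].append(f["name"])  # one map point per coordinate

def select(coords: tuple, click_count: int) -> str:
    """Return the facility shown after `click_count` clicks on a point."""
    members = clusters[coords]
    return members[click_count % len(members)]
```

With this rule, two facilities at the first coordinate render as one dot, and a second click on that dot opens the second facility's panel.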

The trajectory chart only counts facilities with a published AI-exaflops figure. This intentionally excludes CPU-first systems and sites without disclosed AI-exaflops, because combining them into a single quantified curve would imply a comparability the source material does not support.

The country comparison deliberately separates source scope from the bar value. The UK current figure is the published AIRR-style public research-compute capacity that can be quantified today (Isambard-AI plus Dawn). The US and China frontier data-centre figures are derived from Epoch AI's Frontier Data Centers dataset by summing current H100-equivalents and converting to peak FP8 AI-exaflops using NVIDIA's H100 SXM specification of 3,958 teraFLOPS FP8 per H100 SXM. This is not a like-for-like public-access comparison; it is an order-of-magnitude strategic capacity contrast.

Last updated: 2026-04-23