Compute is not the answer to AI sovereignty with Hamish Low

Tech Futures Project • November 11, 2025

Guests

Hamish Low, AI Policy Fellow at IAPS.

Description

In this conversation with Hamish Low we discuss his recent article ‘Compute is not the answer to AI sovereignty’. Hamish is an AI Policy Fellow at IAPS and I think he provides a really nuanced way to think about how the influence of AI on power dynamics might be distributed. Transcript below!

Philip Bell: Hamish, you wrote this article as part of the Centre for the Governance of AI summer fellowship. You’re now working as an AI policy fellow, which is really cool. And I guess it’d be really useful if you could just lay out the main argument of your article, which is called “Compute is Not the Answer to AI Sovereignty.”

Hamish Low: Yeah, definitely. Thank you very much for having me on. It’s great to be here. Yeah, so this is work I did at the Centre for the Governance of AI for three months, really trying to explore these ideas of AI sovereignty, basically trying to get to the bottom of what people actually mean when they’re talking in this space about AI sovereignty. I think there are not that many good answers once you start trying to scratch the surface. So one of the first things I tried to do is just get a better definition of what AI sovereignty actually is, because it’s a bit of a wishy-washy term. I think it’s trending towards becoming a bit of an “everything is in” term, where you just sort of add “sovereign” in front of literally any AI thing you want to do, so it just sounds more serious and strategic. But actually that just isn’t the case—lots of things are not sovereign AI and are still good, and we shouldn’t be putting everything into this bucket. But I think really the way you should think about sovereign AI is that it’s this big-picture strategic dilemma. In the AI value chain, the two dominant players are just by far the US and China. Both of them just have the strongest positions, they have the frontier models, they’re producing the AI accelerators, they’re way ahead of everyone else.
So for every other country in the world, you have to figure out how do we fit into this world where there’s two very dominant players. And I think ultimately this is just a generic strategic dilemma. And the answer will be different for any individual country. So each country has to figure out for itself what sovereign AI means, basically based on what they want to accomplish. For lots of developing countries who are trying to pursue export of digital services, this AI situation could be an existential risk to their development model, in which case it’s all primarily economic. It’s just like we need to salvage, make sure that the US doesn’t just re-onshore all of what we’re trying to do through AI. I think for the UK, it’s slightly different. I think we are in a position where we can sit sort of in between the US, China and the EU. I think we’ve built up really impressive state capacity within the UK government around AI. And I think ultimately we have a much more ambitious goal, which is being able to freely regulate AI to help shape AI’s development on a much more fundamental level. And ultimately it’s about exercising freedom of action in terms of how AI actually affects UK society and economy. And this is a much more ambitious aim and it’s much harder to achieve than if you’re just looking for economic benefits. And importantly it’s quite separate from economic benefits. You could have a lot of influence on the development of AI but not really get any of the benefits and you would have still achieved your AI sovereignty goal. So I think that’s how I see it for the UK. You have this AI sovereignty goal that’s about achieving freedom of action. And I think ultimately that boils down to you want to influence the US. We’re going to be closer to the US than we are going to be to China. The US is just the clear leader and the key hub here. 
And so for the UK, the way I define AI sovereignty is it’s about creating interdependence with the US as opposed to where we are right now, which is this kind of one-way dependence relationship. We’re essentially getting all of these AI products from the US and we have very little stake in how these are being made and how these are being deployed. And what we need to do is develop capabilities in the UK that the US comes to be dependent on, such that the US understands that we’re in this interdependent relationship, that there are important things that they rely on us for in AI. That means that when we reach some kind of crunch point, when we’re debating regulation or how this technology should be developed or safety issues, that they will have us in the room, that we clearly have an important stake on this, and that we have some leverage over them. That if they want to cut us out, this would be bad for them because they’re relying on us for important aspects of how they’re able to deploy AI and use AI within their own economy and military and government. So I think this is ultimately the most powerful frame—trying to get towards this interdependence situation. The challenge is that this is incredibly difficult. The thing is that everybody wants their own ASML, the Dutch chip semiconductor manufacturing equipment maker, who has this insane monopoly over the most advanced lithography machines that are essential to making advanced AI accelerators. The issue is that there’s not lots of these lying around. ASML was the result of this incredible process of decades-long consolidation in the uniquely neoliberal 1990s. In 2025, that’s not the case. Everybody is trying to find their niche in the AI value chain. Everybody wants their ASML. And so it’s very hard to find one. So I think a lot of AI sovereignty discussions in the UK are a bit just hand-wavy in terms of what’s actually valuable for us to build. 
A couple years ago it was like, “BritGPT, we should be training models.” I think people now understand probably that this is not as good of a model and this doesn’t actually really get as much leverage. And then there’s questions around like, okay, so Nvidia chips are the big deal. We want our own AI chips, which is fine, you can make a case for this. There are cool startups in the UK working on this. And I think this is good. But I think it’s just not clear that this is useful from an AI sovereignty perspective. I think you can make the case that this is economically good for the UK. If we have big successful chip makers, this is good for the UK in general. But I think it just isn’t very good for the UK’s AI sovereignty. Because if you’re trying to create this interdependence, you just don’t really get this if you’re trying to sell the US AI chips. Because ultimately, the US has other AI chips to buy. Most of the other key players here, whether it’s Nvidia, AMD, Google, Amazon, they’re all doing this themselves. And even if you carve out the specific bit of the value chain, if you say we’re going to make our own AI chips, this does not give you in any way independence in this value chain. You’re using all of the exact same supply chain aspects that everyone else is using. You’re getting them manufactured at TSMC, you’re getting South Korean high bandwidth memory, you’re having to buy networking switches from this random Taiwanese company. You’re incredibly dependent on the rest of this value chain, which functionally is very dominated by US regulatory regimes and the foreign direct product rule and their ability to put export controls on these capabilities. So even if you’ve built an AI chip, this in itself doesn’t give you any sovereignty. The US could cut you off from this wider supply chain and you’re screwed. There’s nowhere to go. You can’t make any more. You’re still very dependent on the US. 
And you’re not generating something where the US is dependent on you. So I think this just doesn’t have a very clear AI sovereignty advantage. And I think this is true for a lot of aspects of the AI value chain, whether you’re talking about building data centers in the UK, which I think is another useful initiative. And there are certainly lots of merits to having data centers and AI compute in the UK. It lets you build out your public compute resources. It is investment, which is always nice from an economic perspective. You can build up some UK companies like Nscale, who’s building out with OpenAI in the UK. This is good, but again, it doesn’t solve your AI sovereignty question. It’s nice for other reasons, but putting a bunch of Nvidia chips in a very fancy warehouse in the middle of the UK is not making the US dependent on you. You’re just strictly a customer here. And it has other benefits downstream in how you use that compute. You can use that to build new products and do new things with it. But it doesn’t solve this core question. So then I think what you need to think through is, okay, where actually do we get leverage? And where can we build this dependence? I think ultimately the view I’ve come to is you have to look at what are the future nodes of the AI value chain that are going to be important. Because for almost all of the existing areas, there are just very powerful US incumbents that it’s really hard to compete with. Even if you deployed the most incredible industrial policy the UK was capable of, I think you would struggle to really generate US dependence. So yeah, this is where I then get into this idea of AI middleware, which I can walk you through.

Philip Bell: That’s super interesting. I think that’s such a useful phrase—interdependence rather than independence. I think that is a really clear but powerful encapsulation of your argument.
When you were talking about the chip sovereignty idea, it made me think of Graphcore, because I think they were an early competitor with Nvidia and they created IPUs. And it reminds me of the nuclear energy competition, because I think that Britain had its own rival form of nuclear energy to the US in the form of gas-cooled nuclear reactors. The idea’s got interesting historical parallels, but yeah, it’s a really interesting point that actually that isn’t really what we should be focusing on at all, because it doesn’t even create interdependence if we were to create an Nvidia competitor. And it also makes me think of the recent Jeffrey Ding book, about diffusion rather than leading sectors. But yeah, I think it would be really useful if you could lay out that idea of AI middleware, or where you think the UK, or any country I suppose, could become interdependent.

Hamish Low: Yeah, definitely. And the Graphcore case I think is interesting. And yeah, they spent like 700 million pounds and they made like 4 million pounds in revenue at their peak, which to be fair to them is just bad market timing, and you just go down the wrong direction and these startups are hard. I think lots of the team from Graphcore, after it went bankrupt and then got bought out by SoftBank, are now at this new AI chip startup, Fractile. And it’s all very cool and I wish them the best, but yeah, they just don’t solve this strategic question we’re trying to solve. And I think that’s where something like middleware is maybe the better way to be thinking about it. Basically here, I’m thinking through all of the future things we would expect to emerge that sit in between—if you imagine you’re moving down the value chain, from the semiconductor supply chain to making the AI chips, to putting them in data centers, to training the foundation models.
Right now, there’s not very much below that. It’s kind of like OpenAI trains a model, they put it into ChatGPT, you use ChatGPT. This is a very simple, vertically integrated setup. But I think you should expect this to just become much more complicated in the future. And for many more parts of the value chain to sit between the foundation model and the end use. Where the end use here is almost certainly going to be an enterprise user, because this will be the much bigger market over time than the current consumer-dominated chatbot products. And just enterprises are going to have loads and loads of really different needs in terms of how they integrate these systems and build new workflows around them and do all these interesting things where they’re actually automating stuff and diffusing these deeply into their processes in the way that we actually get all the productivity gains that we want out of this technology. So a few of the things I would put in this middleware bucket would be if you’re taking models and you’re altering them in various ways. So this could be fine-tuning models on specific datasets, you’re a downstream developer where you’re taking the OpenAI model, you’re then just using other data to change it to suit it to what specific task you’re trying to achieve. And then you’re either vertically integrating that into your product or you’re selling that on to someone else who’s putting that into a product that they’re then selling to an enterprise. Or you have this new mode of reinforcement learning that’s become very powerful. Lots of the frontier model companies are pursuing building these reinforcement learning environments to make models much more capable at very specific tasks. The most successful one so far being coding tasks. Where you’re able to just get lots and lots of gains by having models do this reinforcement learning process. 
If this process is really effective, which it seems to be, you would expect to want to do this for basically every other sector of the economy, on the way to 100% automation or whatever crazy future endpoint. You’re going to want to find the right data, and you’re going to want to find the task, and you’re going to want to train the model to be just superhumanly good at doing that task. OpenAI and Anthropic and Google simply can’t do this work for every sector of the economy. Eventually, they just become too big, they become too unwieldy, there are regulatory barriers, there’s recruiting the actual people to do this thing. Apparently there was some article that OpenAI has hired 100-odd investment bankers to try to get data to do better financial modeling. But it’s just hard to do this for every industry on the planet.

Philip Bell: I’ve been quite surprised how many different industries OpenAI are trying to target. I saw they’re trying to make an animated feature film, which seems just really surprising. I have no real idea why they decided to do that, but it’s intriguing. But the general point just seems to make a lot of sense: there need to be specific parts of society, specific verticals, that integrate AI into the business process in specific, deep ways. And I guess it tallies with a lot of historical research about how technological change happens. It takes quite a long time to show up in productivity and involves changing business processes, et cetera. So that’s really interesting.
And I guess specifically for the UK, because of this AI middleware idea—I’ve always thought the UK is in a central position in terms of AI, given that DeepMind was started in the UK, and in terms of AI research it’s probably the most successful AI company in terms of the range of different breakthroughs, like AlphaFold, AlphaGo, et cetera. And there are also just a lot of important organizations here, like Stability and ElevenLabs, and Anthropic and OpenAI have offices in London, for example. So I’ve always kind of assumed that the UK was in a quite pivotal position, but I think reading your article actually made me slightly rethink that, in quite a useful way. But I guess on this idea of AI middleware, do you think that the UK specifically is in a better position to be able to come in and be part of that AI middleware, because of the fact that it does have essentially a lot of expertise, I suppose?

Hamish Low: Yeah, definitely. I think the UK’s strength is being really, really central on talent, right? We do really well on talent. We’re probably the best AI talent hub outside of the US and China. Our key weakness is that we don’t have very much compute. We don’t have lots of this infrastructure based here. We haven’t managed to have one of the leading frontier companies based here. DeepMind sort of being the weird exception, where it used to be more UK-weighted and then they merged with Google Brain, and now it’s a bit harder to tell how you should understand them. But I think this means that we have lots of talent here, but we’re not necessarily getting a lot of benefit out of that talent. And we’re not necessarily getting benefit in strategic terms from having that talent here.
You want to turn this into some more uniquely UK capabilities. And this comes back to some very long-lasting issues with British tech in terms of startups struggling to scale, often just selling to US buyers, chasing the US market instead of the UK market because of slower growth in the UK, regulations, et cetera. But yeah, I think middleware is one way that you lean into this talent advantage. What the leading frontier companies of the US are doing is this big bet on huge infrastructure. They’re building out colossal amounts of this AI infrastructure, just gigawatts and gigawatts. There are just constantly more deals. The UK just clearly cannot keep up with this. And I think it’s not worth trying to keep up with this. But we can pursue this different approach, where for lots of these middleware capabilities, you don’t need that much compute ultimately. Often you’re just building software systems on top of these models to make them more useful. So to give one example, maybe—a big evolution this year has been model context protocol and the ability for models to natively use tools as they accomplish a task for you. And right now this is just existing random software tools, or just the ability to use a sort of coding application natively as it works through your answer. But presumably over time you’ll want to build tools that the models can use that are entirely separate from the models, but which are entirely AI-native. It might be that for the telecoms industry, you have a tool for monitoring the health of a network that is functionally just a software tool that does this job, but in a way that is built for the model to be able to use it effectively. But you don’t actually want the model to be zooming through every aspect of BT’s network trying to find some fault. You want it to be able to access an actual system dedicated to doing this and be like, okay, yes, check. This is all good.
I can move on to the next stage, whatever I’m working through. And that’s a business right there. You can build that tool and you can sell that tool. And I think that’s the kind of thing I’m envisioning with middleware here. And so I think this is one where, if you just have lots of talent and you get the right framework around being able to have them create startups, build these capabilities and sell them on, you could create this web of UK software-focused companies. That means we can build products faster, they can scale them faster, and we can actually create this interdependence much faster than if we were trying to build hardware capabilities, where it just takes a decade to create something at the end of a long R&D process and get it to market and get it to scale. You can actually move at the pace of AI development in this sense by being very software-heavy and avoiding these compute-heavy models; instead we’re trying to build on these models and create useful things that just allow us to use them in the real world. And I think, yeah, this leans into the talent-heavy approach that I think is the right one for the UK.

Philip Bell: That’s really interesting. Is there a government policy that would help with that? Or are there other conditions that would maybe stimulate that?

Hamish Low: Yeah, and I think the government has been doing some very useful things here. I think the Sovereign AI Unit is a very interesting project and has been doing some good work. They have 500 million pounds of budget to spend on achieving this aim. I think this is good. I think the AI Security Institute is a hugely helpful thing here, in terms of this is one area where you’re leaning into this talent advantage. You’re just getting a really good cluster of talent within the government, building the government’s state capacity, but then also hopefully building up an agglomeration of different things around this.
So one thing they’ve done is they’re looking into the AI assurance industry as one potential useful area where the UK has an advantage, and DSIT has given some money to this and it’s very much on their radar. My view is basically that you should just be doing all this but doing it much more. There’s all this work happening, but there’s insufficient urgency behind it for how quickly things could change and how powerful the capabilities could get. I really want the UK to be having this leverage not in 10 years, but in two years or five years. I think you could have very important decision points in the near future where I want the UK to have this leverage, which means I think we just need to be much more aggressive in doing industrial policy, in a way that the UK government has not traditionally done. You basically have your key levers, which are capital and talent and compute, where compute is kind of a function of capital. And you just need to go super hard on these. So you have the Sovereign AI Unit. I think basically you should also create a national AI investment fund, and you should create another fund, and then another fund. And you should be very okay with just duplicating all these different government investment vehicles, taking an almost much more Chinese approach to industrial policy of just opening a floodgate of capital and being very willing to say: we’re going to change this market deliberately. This is what we’re trying to do. We’re just going to try to crowd in much more competition into these sectors we think are strategically important. And we’re going to do this in a really big way, which ultimately has a lot of downsides. You know, you will create waste, you will make mistakes, some of these companies will fail, you have no guarantee of success.
But I think if you really understand the urgency I think we should have here, you’re willing to do this, you’re willing to put a lot of money after this goal. Same thing on talent. I think that if the Sovereign AI Unit invests in you and has done their due diligence, they see you as strategically important, you should just be able to hire the talent you want. You could just have a visa where you, the startup, decide you want to hire someone, and as part of that process they get a visa to come to the UK. You skip all the Home Office fees, you skip all the Home Office vetting, pretty much, and you just delegate this power to the firms that you think are really important. And you could have this be just a super limited scheme, but I think this is one way of speeding everything up and doing everything we can to bring talent to the UK. As well as other interesting ideas: the Centre for British Progress has this idea for a kind of Centre for Talent Excellence, of deliberate headhunters in government who are trying to recruit the best AI engineers into the UK. I think these are great ideas as well. I think you basically just want to be maximally aggressive in pulling these levers of capital and talent, to bring as many people to the UK and remove every barrier to them creating the companies that we want to exist.

Philip Bell: Yeah, that’s a really interesting point, because I suppose right now, given some of the restrictions on studying in the US and stuff like that, I imagine there are lots of PhD students who might be interested to come to the UK. It also reminds me of—I mean, I know Joel Mokyr just won the Nobel Prize for economics, and as far as I understand, one of his main arguments about why Britain was central to the Industrial Revolution was this idea of tinkerers and people who...
a kind of dispersed network of different intellectual societies, like people from the Birmingham Lunar Society who went on to tinker with different types of technologies, which became hugely influential. I sometimes wonder about that: is there something around supporting a dispersed experimental culture? And maybe the best example I’ve seen of this is Kaggle. I don’t know if you know Kaggle, but it’s an online data science competition thing. And I feel like that’s one example of a tinkering, experimental culture. So yeah, I think it’s a really interesting idea to lean into the UK’s wealth of expertise and the capacity of its people. But one thing I was going to ask about is what you were saying about the funding. I thought that was a really interesting point, about basically just providing more of it, and that it’s fine if there are overlapping pools of funding. And I also thought your point about the AI assurance industry is really interesting, because it does feel like the UK has a lot of important AI safety and AI governance organizations. But one thing I was going to say is, with that kind of funding policy, some people might say, okay, we shouldn’t be picking winners. I have heard Tim Wu, an American academic slash policymaker who I think was involved in the Biden administration, make the argument that you can pick sectors and ecosystems rather than picking particular companies. What would you say to that? Is that potentially a challenge to funding, even funding the kind of expertise and things like a more thriving safety and assurance industry? Is it hard to pick the right ones?
Hamish Low: Yes, I think this is just always the challenge with industrial policy. I think maybe a counterexample of who I think is doing a bad job on this is France or Canada, where both of them have kind of landed on national champion model development companies. You have Mistral in France, where ASML invested in Mistral, and they’re getting lots of support from the French government, and the French government wants to preferentially use their models and this kind of thing. You have the same in Canada with Cohere, where the government has given quite a lot of money, hundreds of millions of Canadian dollars, towards helping Cohere build out AI infrastructure in Canada, clearly viewing this as a national champion firm. I think this is a really bad idea. Both these companies have demonstrably fallen quite far behind the frontier in terms of the actual models they’re producing. They’ve fallen really far behind in the infrastructure they have access to, and they’re really far behind in how much actual money they’re making: I think like 100 to 150 million euros slash Canadian dollars, when OpenAI and Anthropic are charging ahead to tens of billions of dollars worth of revenue a year. This is just a completely disparate relationship. And the frontier model developers are just going to be on this insane flywheel and just be way too far ahead of you. They make models that are way better. They make their open source models as an afterthought, and the open source model is better than your closed source one. Your business model is kaput. So I think this is a really bad case of them picking winners. I think this sector approach is much more interesting.
Picking another one from the recent Nobel winners, Philippe Aghion has this really interesting paper about Chinese industrial policy where he identifies one of the successful features of Chinese industrial policy is this kind of picking sectors where they pick a sector like electric vehicles, where they decide that this is productive and strategically important. And essentially by just funneling loads and loads of subsidies into the sector, they essentially just create much more competition because it encourages so many new entrants into this market that suddenly you just have this incredibly competitive market, which can go wrong. You sometimes end up where they leave the subsidies in too long and get this kind of involution—insane price competition where nobody makes any profit and everybody’s very sad. But the core principle I think is good, that you can pick out sectors that are strategically important or that you think are economically important. And you can do this process of just making lots of capital clearly available. And this just crowds in lots of new entrants and makes this market more competitive. I think this is the vision for how this goes well in the UK, is that you identify some of these sectors you think there’s a lot of opportunity here. We want to put a lot of capital on the table that we’re willing to basically subsidize the market here in a sort of anti-market way. You’re trying to shape the market to not develop how it otherwise would. 
With the goal of crowding in loads and loads of new players who are then going to be competing with each other very intensively, you’re going to get these talent agglomeration effects, which I think is one way you get these useful networks of talent: you just have a shared sector and you have a lot of capital available, so that people are like, I’ve just done my PhD, I don’t want to move to the US, I’ll go to the UK, there’s loads of money around, I can get a job working in this cool AI sector. So you get these networks of talent, you get these new firms, you get a very competitive market, and then the hope is that out of this very competitive market, you then just get very innovative, strong firms who are able to scale up and then become globally competitive players. And ultimately then, this means the US will start increasingly buying their stuff. And then hopefully, you have enough Fortune 500 companies that are buying their software solution for whatever this AI issue is they’re trying to solve. And this is what’s giving you this kind of interdependence.

Philip Bell: That’s really interesting. Yeah, I’d like to look at that article actually. I mean, I kind of follow the China watchers community, and I think, as far as I understand from listening to some podcasts like ChinaTalk, one of the insights that they have is that policymakers in the UK and the US and other places don’t really pay enough attention to what’s actually going on in China, and don’t really understand China enough given its importance. And I do always think, whenever I hear discussions about policy, it seems that all of the time we take ideas from other European countries, which I guess makes sense because we’re a similar size, maybe.
But yeah, I always think it’s interesting that people don’t seem to, or at least the people that I hear talking don’t seem to, take as many ideas from Southeast Asia or China. One thing I was going to ask, though, is: are there any countries that you think are doing this well? Like, they’re approaching AI sovereignty well and maybe creating some sort of interdependence. Because I thought the France and Canada examples were really interesting—I hadn’t really thought about that, I didn’t actually even realize that Canada was supporting Cohere, and then France supporting Mistral, that was a really interesting point. But yeah, are there any countries that you think are doing well?

Hamish Low: Yeah, I think it’s basically just countries that have been able to figure out what their advantage is and have been able to capitalize on this in some way. So I think Singapore has done very well here, which is partly a result of the fact that Singapore, I think, just has really good state capacity, and had already done lots of work around building up data centers and their industry, building themselves up as this key hub between the US and China. I think they’ve basically just been doubling down on this strategy that makes a lot of sense for Singapore. Your entire gambit is that you’re sitting between the US and China, and you’ll be a really good talent hub. They have their own National AI Strategy 2.0. They’re trying to triple their AI practitioners in Singapore to 15,000 people. And I think this is just robustly a smart strategy. If you’re Singapore, this is the strategy you want to do. And they’ve just been very successful—Singapore, and now Johor in Malaysia just across the border, is the key AI data center hub in all of Southeast Asia. There are huge investments by ByteDance working with Oracle in Malaysia. Singapore has become a really key hub here. They have the Singapore AI Safety Hub.
This is just, I think, one example where you have a middle power that has a really clear sense of what it’s trying to achieve and a state that’s very focused and willing to achieve it. And I think this is going well. The UAE is another interesting one where, again, they understand what they have to offer, which is loads of capital and loads of energy. They realized that these were two things everybody else wanted and capitalized on that quite well. You know, they signed all these various deals with the US, with all the major US AI players coming and building out data centers in the UAE. Once you get below the surface of it, it’s slightly less clear how good this is for the UAE. They were very smart in that they timed their announcements really well: when they announced all these different data centers, they were the largest ones in the world that had been announced. Subsequently, much bigger ones have been announced in the US that will all come online before the ones in the UAE. But still, everyone has the impression of, wow, the UAE’s at the frontier, which they probably aren’t. And the business model of basically just being a landlord for a giant warehouse in the desert is slightly questionable. But I think it’s still a good strategy and clearly gives them leverage. They have some interesting ideas around exporting AI to the global south. It’s a strategy and they’re making lots of bets on it, and they’re making the right bets, I think, if you were trying to carry this strategy off. They clearly have a sense of what they’re trying to achieve and they’re working to achieve it. Otherwise, it’s hard to find countries that have a sense of what their advantage is, partly because most countries don’t have much of an advantage. This is an incredibly sophisticated value chain that just doesn’t flow through that many countries.
You have a lot of challenges as well, in that a lot of what you really care about is talent, but talent is very mobile. So it’s a big issue for Latin American countries, where you might have good AI researchers but they all just move to the US or Europe. It’s hard to build up a domestic base and work from there, which actually makes these AI sovereignty discussions really difficult. You’re really struggling to build any advantage, let alone build a strategy to capitalize on that advantage. Philip Bell: That’s really interesting. Yeah, to your point earlier about every country wanting to have an ASML like the Dutch do, Germany and France do, I believe, actually have quite important companies in the AI supply chain. I believe Germany has Carl Zeiss SMT, the optics and laser company; Germany has always been quite strong at making glass. They’ve got a long tradition of it. I actually have a friend, an artist, who goes to a glassmaking conference in the Black Forest every summer. So there’s probably a really old tradition of German expertise in glass, which is interesting given that they make lenses for ASML and so on. And I think they have chemicals too, which again is a really old German industry. But yeah, as you said, there are only a handful of companies. It’s basically France, Germany, the Netherlands, and then maybe South Korea and Japan. Other than that, is that right? Those are the countries that actually have companies in the supply chain. Hamish Low: Yeah, because I think right now the question of leverage is very tied to the semiconductor supply chain, and most countries don’t have much presence there. South Korea, Japan, the Netherlands, Germany, they really do.
And that just gives you a leg up. Philip Bell: Yeah. And it’s interesting, because going back to your point about the UAE having worked out that part of their advantage is that they have loads of energy, loads of fuel-based energy: do you think the UK should accept that, relative to other countries, it doesn’t have an energy advantage? Or, and I don’t know if this is very naive, I have heard some people arguing that AI is an opportunity to build really comprehensive renewable energy infrastructure. Do you think that’s a realistic possibility that a country like the UK should be considering? Hamish Low: Yeah, the energy situation in the UK brings me great pain. I wish it were not how it is. And I do think that AI is an opportunity to try to change this. But right now we’re just in a very bad position on energy. This is very bad for AI, and it’s also just very bad for loads of other industries and for lots of other reasons. And I think there is a good case that AI helps you encourage lots of these more innovative energy solutions. It’s a bit of a forcing mechanism to help you confront some of these long-running issues: the grid is super slow, and we need to spend lots of money on expanding it and on building interconnections. We need to build out lots more renewable generation capacity. I’m very supportive of trialing new things like small modular nuclear reactors and of building out more traditional nuclear power stations. I think all of this is really good. The issue is that even if you speed it up to my dreams of how fast it could be, it’s still quite slow. You’re trying to compress a real decades-long project into maybe a decade if you’re being super ambitious. But we’re just coming from a really weak position here.
UK industrial electricity prices are over 300 US dollars per megawatt-hour, compared to 82 in the US. This is just really tough. You’re dealing with way higher prices and with this long accumulation of issues and challenges. And yeah, I think AI is a good forcing mechanism to help us try to solve some of these, but I think the UK cannot rely on fixing these problems in the medium term. We keep working at them, but if we’re trying to build our strategy for achieving things in the world, we can’t rely on solving these problems, because they’re just so difficult. Philip Bell: Yeah, partly I am excited about the idea that it could be an opportunity to build renewable energy infrastructure, because I think I’m right in saying that solar is one of the quickest forms of energy to build capacity in. And wind energy is quite quick as well in terms of getting everything approved and built. But yeah, as you said, I guess the UK is coming from a relative position of disadvantage. My girlfriend’s dad lives in Dubai, and something he quite often asks me is: what’s the energy price per kilowatt-hour in the UK? And it’s like 20 times more than it is in Dubai. I’d hope that it could be an opportunity, but it’s a good point about how long it takes to build the infrastructure; that’s one of the considerations. I actually read about your article on Jeffrey Ding’s substack, ChinAI, which is a really interesting substack about Chinese articles on AI. And China seems to be the country that has leaned in most to thinking about its energy system in relation to AI. This is a relatively speculative question, but do you think there’s anything the UK can learn from that?
Because I saw recently that they have, in their China AI Plus policy, an AI Plus policy for every different industry. Or maybe not every industry, but they definitely have an energy one. But yeah, do you think there’s anything the UK could learn from what they’re trying to do? Hamish Low: Yeah, I think just making clear plans, having them be ambitious, and then really ruthlessly executing on them is really strong. It’s back to these industrial policy questions. I think they do this well, and it has clearly paid off in energy. China has an insane energy advantage on AI now relative to everyone else. I was looking at some of the stats recently: they built 278 gigawatts of solar in 2024, which is 3.7 times the UK’s entire capacity. That’s a lot. They’ve built the equivalent of 3.7 UK grids in a year, just in solar. And they’re growing their grid at such a rapid rate that it’s giving them a huge advantage here. Right now the US is doing really well on AI infrastructure and energy, largely because there’s lots of stranded energy from US deindustrialization. So you can get a lot out of all these random plants in the Midwest that no longer make cars but still have some legacy transmission infrastructure you can capitalize on, and very quickly build up new data centers. The issue for the US is that in a couple of years they start to run into actual bottlenecks: you’ve used up all this easier spare capacity lying around, and you have to do much more difficult things to actually grow your grid and connect all these new energy sources to the data centers. There are lots of interesting solutions people are talking about, like building behind-the-meter energy, or ways to reduce peak energy demand, such that you just shut the data center down for a couple of days a year when it’s super cold or super hot and the grid is under peak strain. So there’s stuff there.
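The figures quoted in this exchange can be sanity-checked with a few lines of arithmetic. The input numbers are the ones from the conversation (the $300 vs. $82 per megawatt-hour prices, and 278 GW of solar described as 3.7 UK grids); the derived values are approximate, illustrative back-of-the-envelope results only:

```python
# Sanity-checking the figures quoted in the conversation above.
# Input numbers come from the transcript; derived values are approximate.

uk_price = 300.0  # UK industrial electricity price, USD per MWh ("over 300")
us_price = 82.0   # US industrial electricity price, USD per MWh

price_ratio = uk_price / us_price
print(f"UK industrial power is ~{price_ratio:.1f}x the US price")  # ~3.7x

china_solar_2024 = 278.0  # GW of solar capacity China added in 2024
uk_grid_multiple = 3.7    # quoted as "3.7 UK grids"

implied_uk_capacity = china_solar_2024 / uk_grid_multiple
print(f"Implied total UK generating capacity: ~{implied_uk_capacity:.0f} GW")  # ~75 GW
```

The implied figure of roughly 75 GW is broadly in line with published estimates of total UK installed generating capacity, so the quoted multiple is internally consistent.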
But then the advantage for China is that they’re scaling their grid by a huge amount every year. So when you’re like, oh, we need to add another 10% for AI, they’re like, ah, sure, why not? We can build more solar panels in the Gobi Desert. I think they just have such a big advantage here. The issue for China, if we’re diving into Chinese AI, is that they don’t necessarily have the compute and some of the other pieces to bring this strategy together. Their AI Plus strategy, I think, is very interesting, and the adoption-focused mindset is very smart. Jeffrey Ding was my supervisor for lots of this work, and his book is great on this: for general purpose technologies, the really important process for national power is that you need to diffuse them very widely through your economy, and do this in a very deep and meaningful way, to unlock the productivity that accelerates all the other aspects of national power. The challenge is that this is just hard to do. The targets in their AI Plus strategy are really ambitious: 70% penetration of AI by 2027, 90% by 2030. One, it’s unclear what this means. Two, it’s unclear if you can actually do it. There was this whole DeepSeek craze earlier this year where suddenly the central government said everyone needs to adopt DeepSeek. And there was a very interesting article Jeffrey translated telling the story of how this actually went, which was that people bought these DeepSeek-in-a-box products, a big AI chip that just has DeepSeek already running on it. All these state-owned enterprises buy this thing because they’ve been told they need DeepSeek. They use it for a week, and they realize that DeepSeek V3 is a fun model, but it’s not that useful. You know, there’s a reason we haven’t seen all the crazy productivity gains yet. Just having this model does not unlock incredible AI gains.
And I think basically the vibe of the article was that all of these now just sit in a cupboard somewhere. So you’ve ticked the box of adoption. But the challenge is that adoption is really hard, and you need to bring together really capable models with large-scale compute infrastructure and a very sophisticated cloud business and digital economy, which is one area where China actually is still quite far behind, in that their cloud market is generally much lower quality than Western cloud offerings, and adoption of cloud services is just a lot lower. You have this situation where there’s a very advanced tech industry in China: Alibaba, Tencent, Huawei are leading, incredible tech companies. But China is huge, and most Chinese companies are not like Alibaba, Tencent, and Huawei; actually, the median US Fortune 500 company is probably much more technologically sophisticated than the median company in the Shanghai Composite Index. And this is a big issue for China. So I think there are definitely things to learn in terms of having this very adoption-focused mindset. But we can also learn from asking what we think is actually going to hold them back, which is really having these sophisticated firms, having the compute availability, having access to the best models: the areas where we actually can do all these things. Philip Bell: That’s really interesting. I did not know that about the DeepSeek-in-a-box, and it makes me think more generally about this space. One of the differences I’ve heard people talk about between building AI infrastructure and building other technological infrastructure in the past, like fiber optic cables for the internet, or train lines, or electricity infrastructure, is that the GPUs go out of date really quickly.
Kind of like how DeepSeek-in-a-box potentially goes out of date quite quickly if there are new models. I feel like it also adds weight to your argument that AI middleware can be lighter weight and more adaptable to a relatively uncertain situation where things are changing quickly and hardware can go out of date fast. So yeah, I think that’s a really interesting anecdote; I didn’t know about that. I also think, on what you were just saying about the US, the point about using spare infrastructure from deindustrialization is another interesting one, because it would be amazing if AI could support a reindustrialization program in the US or elsewhere. But the problem, I suppose, is that data centers don’t really have many jobs associated with them, so sadly I don’t think it will. But that’s a really interesting point; I didn’t know that a lot of the data centers are using up that spare capacity. It’s interesting, because we’ve talked in this conversation about historical comparisons and historical analogies that might be used and needed. I was interested to get your take on the argument of David Edgerton. He wrote a book called The Shock of the Old, where he tried to think about use, rather than invention, as the most important process in technological change. He also argued that lots of new technologies are wrapped up in older ones. I wondered whether you think that way of thinking is useful at all in considering AI and AI sovereignty, like the idea that maybe older technologies are going to be as important as newer ones in potential AI advancement. Hamish Low: Yeah, I think it is very helpful.
I think you already see this in practice with some of the infrastructure build-out in the US. You have OpenAI in Texas, and the things they desperately want more of are natural gas turbines, whose production no one has cared about for two decades because everyone thought the production forecasts were basically going to die off completely with renewables build-outs, and heating and ventilation technicians. Those are two things you would not immediately identify. Philip Bell: Yeah, and SMRs, because you were talking about small modular reactors as well. I think small modular nuclear reactors were invented by the British Navy in the 1950s or something like that, but they’re kind of coming back in. That’s really interesting. Hamish Low: Yeah, and I think there are probably just lots of ways that AI interacts with older technology. I think it’s very interesting, and very hard to reason through. You end up in this weird world where right now AI capabilities are very jagged, in the sense that there’s this jagged frontier: AIs are really good at coding, but they’re still really bad at lots of other things we might care about. I think there are good reasons to expect this to continue being true. Probably you see lots more progress in domains of traditional white-collar work; they get way, way better at coding and making PowerPoint documents and doing incredible mathematical reasoning, but maybe struggle on things you’d think wouldn’t trip them up, and struggle with transitioning into functioning in the real world and in real-world industries. Which does lead you into this very interesting fusion of old and new technologies and how they might interact.
You have cases like cybersecurity, where there are interesting ways the two interact. You have potentially really rapid progress in the ability to do automated hacking attacks: basically, you can spin up the most sophisticated team of hackers you could have had in 2015 and deploy them to attack any given target around the world. And this is really scary, because lots of companies and governments around the world rely on incredibly old, outdated legacy tech stacks. There’s a bunch of US government tech that runs on programming languages that were popular in the 1960s, and there’s one retired guy they have to bring in every time it breaks. So it’s very concerning. But then maybe you could use AI to retranslate all of these languages into much more modern ones and refactor all this security and that kind of thing. So you end up with these weird ways the two things interact, very old technologies and very new technologies. Probably you end up with lots of parts of the economy where there’s some really old process or technology, some literal fax machine or incredibly outdated piece of hardware, and then there’s some AI software system running on top of it that’s trying to use and interact with this old system. We’re just going to live in this really weird world where both of these things are true. You have the superintelligence in your coffee maker or whatever: some very established technology that is getting some benefit out of having all this cognitive ability plugged into it. Philip Bell: Yeah, that’s really interesting. Just as you were speaking there, I was thinking that’s a really interesting point about old programming languages and whether AI could help retranslate those in some way.
And it made me think about the ratio between building versus maintaining technology, because I think that’s something else David Edgerton argues. He argues that we think about building new technologies much more than about maintaining and repairing old ones, whereas maintaining and repairing old ones is actually more important, according to his argument. I think that’s an interesting question for an AI future: could AI actually mean there’s more maintenance and repair needed, if there are, let’s say, more complicated and larger-scale systems at play? But yeah, the use of historical analogies is really ubiquitous in talking about AI, maybe too ubiquitous. I’ve heard Sam Altman compare AI to fire. If you listen to Demis Hassabis, or really any AI leader, they often compare AI to the industrial revolution or other things. And I’ve heard some people say that China, according to its policy, seems to consider AI more akin to electricity or the internet, whereas the US, according to its policy, and this might be a simplification, seems to be thinking about AI as more akin to developing the atom bomb. That informs their policy, in that US policymakers are very worried about takeoff scenarios, maybe more so than Chinese policymakers. But in general, do you think using historical analogies is more useful or more distracting when thinking about AI and AI sovereignty? Hamish Low: I think generally they’re useful. I mean, I studied history, so I’m contractually obliged to give this answer. But I do think they’re very helpful. In forecasting, you have the outside view and the inside view, where the outside view is ignoring the specific thing you’re trying to study and just figuring out: okay, what’s the right reference class?
What’s the way we’ve understood previous incarnations of whatever this thing is? I think that’s where historical analogies are really, really helpful: you just want a reference class you can base things on. This is where previous general purpose technologies are super useful. Electricity and steam are two really good examples of previous technologies that apply across every sector of the economy, and we can draw a lot of interesting conclusions from them. This is what Jeffrey Ding does in his work, studying in quantitative detail how these diffuse across an economy and when the productivity effects kick in. So we can learn that for general purpose technologies, the productivity effects probably come two to three decades after their widespread diffusion. This is really good to know. But then you can take the inside view, which is where you really interrogate the specific thing you’re interested in. And I think this is where you get a bit more of the AI-is-the-atom-bomb view: there are things about AI that are kind of crazy and actually very different. If AI can function as a substitute for human labor, that has lots of really crazy consequences that no other technology does. And it’s really useful to actually interrogate these differences. Maybe previous general purpose technologies took two or three decades, but maybe one of the reasons they took so long is that there just weren’t that many good metallurgists and iron engineers and people able to make steam engines: the kind of skilled engineer who actually learned how to make a steam engine and then went and worked in a completely unrelated industry and said, you should plug in a steam engine here. There just weren’t that many of those people.
So then you can more easily debate: okay, if previous general purpose technologies took three decades, how much do we think AI can speed up this diffusion process? Maybe it pushes it down to a decade, maybe it’s 15 years, maybe it’s five. I think this is where historical analogies can be very helpful, because they give you something to base these discussions on. Otherwise, if you go too far into the inside view, into AI-will-change-absolutely-everything, then you get lots of these economic models of explosive growth from AI, where you have a model of what growth is and you say, okay, AI is a perfect substitute now, what happens? And the answer is that you get stratospherically crazy economic growth. I think this is helpful in some ways, but you’ve also lost some of the reference class and the understanding of history that grounds your answer a bit more and brings you closer to what I think is a better forecast of the future. Philip Bell: Do you think that as policymakers, people should think about a number of different scenarios and hedge their policy bets, I suppose? Or is it more useful to basically decide which scenario is most likely, let’s say a 2028 takeoff or something, and develop policy accordingly? Hamish Low: Yeah, I think you basically have to go in with different scenarios and understand the differences between them. Ultimately states are big, they have lots of money, and they can do multiple things. I think you should be worried about the 2028 takeoff; we just have lots of uncertainty about whether it’s possible or whether it could happen. Probably you want the people in the AI Security Institute really thinking this through in detail and trying to figure out what the mitigations are, and we should put some work into preparing for that world.
But then equally, I think the AI-as-electricity, AI-as-normal-technology view is also true: there are just going to be lots of, in practice, very boring, mundane ways that AI is diffused across the economy. There are going to be so many really dull pitches to venture capital investors about AI for X random thing. Lots of them will be very useful and will give us good economic growth, but they’re just not very interesting or flashy, and you should have people working on thinking about both of these worlds. From a policy perspective, it’s tricky, because sometimes you’ll end up with trade-offs between the two. There are some things you might do to prepare for the crazy takeoff world that are unhelpful for the general slow-diffusion, normal-technology world. I think those trade-offs are probably worth taking; I’m quite scared of the takeoff world. But I guess we just need to go through the process of sketching out these possible futures, having people debate between them, and figuring out what we think is important and what the best, most robust ways to navigate this are. Philip Bell: I agree. This is the last question I was going to ask: what is your best-case scenario for how AI plays out? Hamish Low: You’re leaving me on the hardest possible question, which is slightly mean, but I’ll give it a go. We have the first easy box, which is that it doesn’t kill us all: yay, this is excellent, and it’s a pretty necessary condition. We have the second tick box, which is that it doesn’t lead to some crazy concentration of power that destroys all our democratic institutions, which I think is also a real worry. So we want safe AI that’s developed in a diffuse and democratic way. But then, if you’re able to start satisfying these conditions, I think it can just be really, really good.
I think there are big issues to navigate, in that a lot of people in this space are not very honest about how far AI is just an automating technology. That’s all it does: it automates people. Broadly, this is its main economic effect, and for any vision of the future where you say it will create new jobs, I mean, yes, but those will be a fraction of the jobs it has taken away. But even still, I think we can envision very positive scenarios, because we just have a much richer world here. We have many more resources from all the things that AI can do for us. We get incredible medical goods. You get to have your own personal AI advisor. There’s been lots of interesting work on ways that AI can augment democracy, on actually making some really interesting direct democracy systems possible, because I can tell my AI in much richer detail what my political preferences are, and then AIs themselves can aggregate these much more easily, in a way that electoral systems, which are just not very good aggregating functions for people’s democratic preferences, cannot. So I think ultimately the flourishing, positive world is one where everyone is richer, everyone has much better healthcare, everyone has better governance, and people can live freer, happier, richer lives. And that is hopefully the goal we can work towards.
