By Ted Nordhaus and Adam Stein
Earlier this month, Alex Trembath and I wrote about why the Three Mile Island restart tells us a lot about the significant role that nuclear energy will likely need to play in a fully decarbonized electricity system. In this follow-up post, Adam Stein, Director of Nuclear Energy Innovation at the Breakthrough Institute, and I look at what the recent announcements by Google and Amazon that they plan to procure small advanced reactors to power AI data centers tell us about what the future of nuclear energy in the United States is likely to look like.
If last month’s announcement by Microsoft and Constellation Energy that they planned to restart Three Mile Island was a potent symbol of nuclear energy’s changing fortunes and importance to efforts to decarbonize the US electricity system, this month’s announcements by Google and Amazon likely tell us a lot more about where the US nuclear sector is heading. It is one thing to reopen a recently shuttered nuclear plant like Three Mile Island, quite another to build new reactors. Revitalizing the nuclear sector, such that it might play a major role in meeting US climate ambitions, will require building several hundred new reactors, 200GW worth by 2050, according to Energy Secretary Jennifer Granholm and other Biden administration officials.
Over the last decade, there has been a sometimes quiet, sometimes open debate within the nuclear sector about whether the future of the technology would look much like its past. Would we see predominantly large conventional light water reactors built and operated by regulated monopoly utilities, or would successfully rebooting the sector require different technologies and business models, better suited to the range of use cases where nuclear might play a significant role and to the changing realities of the US utility sector? Most nuclear advocates have hedged their bets a bit, suggesting that there is room for both large and small reactors in a revitalized nuclear future. But many have also made fairly strong claims about which path was most promising and deserving of prioritization and resources.
Over the last year, the collapse of NuScale’s first-of-a-kind project to deploy small modular reactors at a site in Idaho and the completion of the two large AP1000 reactors at the Vogtle plant in Georgia have swung opinion significantly back toward the old model. Economies of scale and proven, licensed designs, in this view, are the coin of the realm. The problem that bedeviled the sector in the past was the failure to standardize a single design and build in multiples, like France and South Korea did. We have a brand new, state-of-the-art light water reactor that’s licensed and ready to go, so let’s get started.
But the Google and Amazon deals suggest that conclusion is, at the very least, premature. Both tech giants have announced deals to be early customers for small advanced reactors to help meet their commitments to power their data centers with 24/7 clean electricity. Google has committed to purchase 500 MW of capacity from Kairos Power’s salt-cooled high-temperature reactors, with the first slated to come online in 2030 and the balance by 2035. Amazon and other investors committed to purchase electricity from four gas-cooled reactors from X-Energy in Washington state, as well as investing $500M into X-Energy itself to support its commercialization efforts. Both firms could, in theory, have cut deals to buy electricity from new AP1000s. But both instead bet that they can get these small advanced reactors to market and scale them faster, at a cost that they are willing to bear. X-Energy also has a project with Dow to provide electricity and high-temperature steam at a chemical plant, an application that large conventional reactors are not suited for.
Although Kairos now has a construction permit for its Hermes demonstration reactor and will soon have one for the follow-up Hermes 2 demonstration, both Kairos and X-Energy have yet to deploy their first reactors, as is true of every other advanced reactor developer. But these deals are also tangibly different from the MOUs and similar announcements with potential customers that advanced reactor developers have regularly touted for years. Google and Amazon have made contractual commitments to buy electricity at a fixed price, and Kairos and X-Energy have made commitments on the other side of those contracts to deliver electricity at that price. There are various milestones and offramps available to Google and Amazon if Kairos and X-Energy fail to hit the marks they have promised along the way. But Google and Amazon have now both made pretty significant bets that Kairos and X-Energy will be able to deliver advanced reactors at the promised price point and timeframe, and those bets, in turn, are likely to increase confidence among both investors and other customers that they will do so.
A number of energy analysts have questioned why firms like Google and Amazon have chosen to go the small reactor route, pointing to analyses that suggest that large conventional plants like the AP1000 are cheaper and raising concerns that the sector is repeating mistakes from the past by failing to commit to a single design to deploy in multiples. But the determination by Google and Amazon to bet on small advanced reactors shows two things. First, price isn’t everything in electricity markets, especially when the firms on the other side of those transactions are technology companies seeking clean generation for power-hungry data centers. Second, the notion that large reactors will necessarily be cheaper than small reactors depends upon assumptions about the cost of first-of-a-kind reactors, technological learning rates, and the regulatory paradigm that are no more proven than the advanced reactors that Google and Amazon are counting on.
As we’ll see, there are good reasons that these firms have chosen to pursue advanced reactors to meet their net-zero commitments, including the availability of sites to deploy them, the realities of financing gigawatt-scale reactors, the economics of data centers, and the enormous obstacles to deploying large reactors in fully or partially liberalized electricity markets. This is true even if small advanced reactors do prove to be more costly than large reactors, which, as we’ll also see, is not necessarily true.
Where Can We Build Them?
Siting a large light water reactor is no small task. Because they have large amounts of radioactive fuel and operate at high pressure, they require large exclusion zones around them, to provide a buffer between reactors and the surrounding population. They need water, lots of it, to cool the reactors. And they require substantial transmission infrastructure, capable of connecting multiple gigawatts of electricity to the grid.
In theory, there are lots of places with ample land and water to do so, and building transmission to connect new reactors to the grid is not rocket science. But in practice, greenfield siting of new large reactors is something that almost no one has any enthusiasm for. Doing so requires navigating a myriad of local, state, and federal permitting challenges, securing NRC approval, acquiring rights-of-way, and building sufficient new transmission to connect to the grid.
For these reasons, virtually all efforts to site new reactors, large and small, have focused on brownfield sites where there is already existing energy infrastructure, including transmission. These have primarily been either existing or retired nuclear or coal sites, as well as national laboratories, such as the Idaho and Oak Ridge National Laboratories, that already have substantial nuclear infrastructure. Thankfully, there are a lot of coal and nuclear sites around the United States that could, in theory, host new reactors. The Department of Energy’s Office of Nuclear Energy recently released a report purporting to show that there are 54 existing or retired nuclear sites around the country that could host 60 GW of new AP1000 reactors and an additional 115 current or retired coal plants that could host as much as an additional 128 GW of new AP1000 reactors. The Idaho National Laboratory similarly has produced a more stringent analysis suggesting 18 sites around the country that might be suitable for large reactors in the near term. But three further constraints are likely to limit how many new large reactors can feasibly be deployed in the coming decades.
First, unless there are already AP1000 reactors on the site, as is the case at the Vogtle site in Georgia, no one is going to build a single large reactor. Single-unit sites simply cost more than multi-unit sites and won’t reach the low-cost estimates that have created the recent wave of enthusiasm for large LWRs. Building at multi-unit sites allows reactors to share construction infrastructure, skilled labor, and project management teams during construction, and to share other infrastructure, such as backup equipment and security, over the course of their operating lifetimes, reducing operating costs. DOE’s recent liftoff report acknowledges that multi-unit sites are a third cheaper than single-unit sites. Research also shows that single-unit sites carry higher market risk. Building in multiples, not just standardizing designs, is one of the key findings from the literature on how nations such as France, China, and South Korea have succeeded in keeping nuclear construction costs under control. Westinghouse and utilities made similar observations: the planned initial wave of AP1000s in the early 2000s consisted entirely of dual-reactor sites.
Once you account for existing coal and nuclear sites where it is feasible to build at least two large reactors, the number of sites where new large reactors might be deployed is substantially smaller. Of the 18 sites that INL identifies as suitable for near-term deployment of AP1000 reactors, only 10 could take multiple reactors. DOE’s list is harder to parse and does not use the same extensive criteria to evaluate site readiness as INL. But it is clear from the basic math in DOE’s summary that many of the existing and retired sites identified by DOE could not accommodate more than one new AP1000: the 54 sites it identifies as capable of hosting a new AP1000 could host a total of 60 GW of new capacity, or roughly one AP1000 per site. The same holds true of DOE’s coal site analysis: 115 current or retired coal plant sites capable of hosting 128 GW of new capacity amounts to an average of almost exactly one AP1000 per site. (DOE’s analysis is so obtuse and back-of-the-envelope that any estimate of feasible sites based upon it, particularly for coal sites, should be taken with a grain of salt. There are clearly at least a few opportunities for multi-unit large LWR deployment at current or retired coal sites.)
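That back-of-the-envelope math is easy to check directly. A quick sketch (in Python, using only the capacity figures from DOE’s summary and the AP1000’s nominal 1,117 MW net rating):

```python
# Reactors-per-site implied by DOE's own summary figures.
AP1000_MW = 1117  # nominal net rating of a single AP1000

nuclear_sites, nuclear_gw = 54, 60  # existing/retired nuclear sites
coal_sites, coal_gw = 115, 128     # current/retired coal plant sites

print(nuclear_gw * 1000 / nuclear_sites / AP1000_MW)  # ~1.0 reactors per site
print(coal_gw * 1000 / coal_sites / AP1000_MW)        # ~1.0 reactors per site
```

On DOE’s own figures, in other words, the average site gets one reactor, which is precisely the single-unit configuration that the economics disfavor.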
Beyond the physical capacity of a site to handle two large reactors, there is the question of whether the grid can handle over two gigawatts of additional generation. A number of sites listed by DOE, particularly coal power plant sites, are in the service areas of relatively small utilities. Rapid projected growth in electricity demand is not geographically homogeneous. You can’t simply plop two gigawatts of generation down in a place that has neither the local demand to use that electricity nor the transmission infrastructure to send it to places that do.
Finally, there is the reality that a lot of the sites identified by DOE and INL as suitable for large reactors are located in electricity markets that have been partially or fully liberalized, meaning that there is neither a vertically integrated monopoly utility to deploy them nor a cost-of-service regulatory framework that would allow a utility to build the $20 to $30 billion those reactors would cost into the rate base over the course of 30 years or longer. There is simply no mechanism in liberalized electricity markets to commit capital over the timescales necessary to pay off a large reactor. Indeed, the next new nuclear reactor deployed in a liberalized electricity market anywhere in the world will be the first nuclear reactor deployed in a liberalized electricity market. It has literally never happened.
Depending upon one’s definition, somewhere in the neighborhood of two-thirds of the US population and electricity demand is located in electricity markets where monopoly utilities either do not build new generation or cannot rate-base it. And there is no evidence to date, even with the coming boom in data center demand for firm low-carbon generation, that anyone is seriously considering building AP1000s as a merchant power generator. So once you account for sites that can handle multiple large reactors, utilities with enough demand or interconnection to take them on, and markets where utilities can rate-base the capital costs necessary to build large reactors, the number of sites where conditions are favorable to deploy a large reactor between now and 2050 is down to a handful.
Of the 18 sites identified by INL as suitable for near-term deployment of AP1000s, only 4 can both accommodate more than one AP1000 and are located in a cost-of-service market, meaning that the near-term potential for deployment of AP1000s at existing or retired nuclear sites over the next decade or two is, at most, perhaps 10 GW. The real potential of the sites evaluated by DOE is likely around the same. The bar is likely higher still for deployment of AP1000s at coal sites: these sites have never been evaluated, much less permitted, for nuclear of any sort, many do not have sufficient land, water, or transmission infrastructure to accommodate two large reactors, and many are situated in liberalized electricity markets.
Absent some very substantial change in the structure of electricity markets and the rules around siting both nuclear reactors and transmission infrastructure, the lion’s share of the new generation in any successful effort to deploy anything close to the Administration’s target of 200 GW of new nuclear capacity between now and 2050 will almost certainly come from small reactors.
Why Google and Amazon Are Betting on More Expensive Small Reactors
Many energy experts over the last decade have concluded that LCOE, the levelized cost of electricity, is a flawed, and often misleading, metric for comparing the cost of variable sources of renewable energy with a firm source of electricity generation such as nuclear energy. LCOE estimates the cost of producing electricity from a single plant: basically, the amount it costs to build and operate a nuclear reactor or a solar farm or a natural gas plant, divided by the total amount of electricity it will produce over the course of an assumed lifetime. LCOE is a rough estimate based on uncertain assumptions. It ignores changing market prices, the ability of a source to take advantage of those fluctuations, and much more. Perhaps most importantly, it doesn’t accurately reflect the true cost of electricity from a source to the owner of a portfolio of generators or the impact on overall system costs.
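Since LCOE comes up repeatedly below, a minimal sketch of the calculation may be useful. All inputs here are illustrative assumptions of ours, not figures for any actual plant; the point is how sensitive the output is to the assumed discount rate, capacity factor, and lifetime:

```python
# Minimal LCOE sketch: discounted lifetime costs divided by discounted
# lifetime generation. All inputs are illustrative assumptions.

def lcoe(capex, fixed_om_per_year, fuel_per_mwh,
         capacity_mw, capacity_factor, years, discount_rate):
    """Levelized cost of electricity in $/MWh."""
    annual_mwh = capacity_mw * capacity_factor * 8760
    disc_costs = capex  # treat overnight capital as spent at t = 0
    disc_mwh = 0.0
    for t in range(1, years + 1):
        df = (1 + discount_rate) ** -t
        disc_costs += (fixed_om_per_year + fuel_per_mwh * annual_mwh) * df
        disc_mwh += annual_mwh * df
    return disc_costs / disc_mwh

# Hypothetical 1,000 MW plant at an assumed $8,000/kW overnight cost.
base = lcoe(8e9, 120e6, 7.0, 1000, 0.92, 60, 0.07)
high_rate = lcoe(8e9, 120e6, 7.0, 1000, 0.92, 60, 0.10)
print(round(base), round(high_rate))  # ~93 vs ~121 $/MWh
```

A three-point swing in the assumed cost of capital moves the answer by roughly 30%, which is one reason LCOE comparisons involving capital-intensive nuclear are so assumption-driven.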
For these reasons, the absolute price of a given source of electricity generation is not an accurate representation of its value to the grid and end users. The ability of nuclear to provide on-demand, 24/7 electricity year-round reduces the total system costs of grids with substantial shares of nuclear, and total system cost is what end users actually pay for. Hence, paying a premium for a more costly generation source such as a nuclear power plant can often result in lower total electricity prices for end users. This shouldn’t be surprising to anyone familiar with how electricity grids have operated for decades. Utilities have generally sought to build and operate a portfolio of energy sources providing a variety of operational characteristics at a range of prices.
The difference between the cost of electricity and its value is worth keeping in mind when considering the choices that Google and Amazon have made. The economics of AI data centers mean that demand from tech firms and hyperscalers for firm, always-available electricity is highly price inelastic. Even costly electricity is a very small input cost for data centers that can cost as much as $35 billion to build, half of which is the cost of microchips that must be replaced every 5 to 7 years. Having a dedicated electricity supply to keep that incredibly expensive capital investment operating at maximum capacity all the time is far more important to these firms than how much the electricity costs. So important, in fact, that most data centers invest in onsite backup generation, typically from a series of large diesel generators.
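A rough, illustrative calculation shows why. The figures below are our assumptions for a hypothetical gigawatt-scale facility, anchored only to the $35 billion build cost and chip replacement cycle cited above:

```python
# Illustrative data center cost comparison. Assumed values, not company figures.
capex = 35e9          # total build cost cited for the largest data centers
chip_share = 0.5      # roughly half of capex is microchips
chip_life_years = 6   # chips replaced every 5-7 years; assume 6

power_mw = 1000       # assume a 1 GW facility for round numbers
price_per_mwh = 150   # premium clean-firm power price (see below)

annual_chip_refresh = capex * chip_share / chip_life_years
annual_electricity = power_mw * 8760 * price_per_mwh

print(annual_chip_refresh / 1e9)  # ~2.9 ($B per year)
print(annual_electricity / 1e9)   # ~1.3 ($B per year)
```

Even at a steep $150/MWh, around-the-clock electricity costs less than half of what the chip refresh cycle alone costs each year, so paying a premium to keep that hardware fully utilized is an easy trade.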
For high-profile firms like Google, Microsoft, and Amazon that have made public net-zero commitments, the same applies to firm, low-carbon generation like nuclear. These firms are willing to pay upwards of $150/MWh for nuclear and other low-carbon sources that can provide large volumes of always-on electricity for data centers. That’s well above the LCOE of wind or solar generation. But if you are determined to power your data centers entirely with hourly-matched low-carbon electricity, wind and solar alone are not an option. So the choice for these firms is wind and solar with enormous amounts of additional, costly battery storage, gas with carbon capture, geothermal, or nuclear. Tech firms committed to low-carbon generation are pursuing all of those things, typically betting on a portfolio of low-carbon options that are all, at present, costly, unproven, or both. Google, in particular, has made it a priority in its decision-making to accelerate the next generation of advanced clean energy technologies instead of just building already commercialized ones.
For the nuclear part of that portfolio, the key considerations driving Amazon and Google are not the comparative LCOE of large conventional reactors versus small advanced reactors. Even accepting analyses suggesting that small reactors are more expensive (depending on which analysis you look at, the LCOE for small advanced reactors can be higher than for a large conventional reactor), LCOE is not the primary concern for tech firms and data centers.
For Google and Amazon, the key questions revolve around the cost of entry to the nuclear procurement game, how quickly they can get new reactors deployed, and whether the technologies they are betting on can scale rapidly to meet the burgeoning electricity needs of AI data centers if those bets pay off. By all of these measures, small advanced reactors make far more sense. Both firms have made initial commitments to purchase roughly 500MW of advanced nuclear capacity. Doing so allows them to commit not just to a single reactor but to multiple reactors from each developer, with the expectation that costs will decline as those reactors are deployed. Both firms have committed to getting initial builds online by the early 2030s. If they can do so at the promised price point, there are hundreds of nuclear and coal sites around the country where these reactors can be deployed in the coming decades. The price of those new reactor builds may not be competitive with the cost of wind, solar, and natural gas on wholesale electricity markets. But that is not a major problem if you anticipate building tens or hundreds of gigawatts of AI data centers over the next two or three decades and want to power them with low-carbon electricity.
By contrast, building a new AP1000 would, practically speaking, require building two, at a cost of somewhere in the neighborhood of $9,000-12,000/kW, or $20 to $27 billion just in capital expense. It would require either approval by a state public utility commission for a major monopoly utility in a regulated, cost-of-service electricity market to rate-base the cost of building it, or a consortium of firms like Google and Amazon committing to share the cost and risk associated with a project in a liberalized electricity market. A project to build new AP1000s announced today might be ready to generate electricity by 2035, and, after it was done, firms like Google and Amazon would have a limited number of sites around the country where they might do it again. Google, Microsoft, and Nucor announced plans for a consortium to pool demand and share risk earlier this year. But so far, they have announced separate plans to move forward.
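The capital math here is straightforward (a quick check, using the AP1000’s nominal 1,117 MW net rating and the per-kilowatt range above):

```python
# Capital expense for a two-unit AP1000 project at the cited $/kW range.
AP1000_KW = 1_117_000  # nominal net rating of one AP1000, in kW
units = 2
for cost_per_kw in (9_000, 12_000):
    total = units * AP1000_KW * cost_per_kw
    print(f"${total / 1e9:.0f}B at ${cost_per_kw:,}/kW")
# -> $20B at $9,000/kW, $27B at $12,000/kW
```

And that is overnight capital only; financing costs over a decade of construction push the all-in figure higher still, as Vogtle demonstrated.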
As in the case of small reactors, the very significant premium that tech firms are willing to pay for firm low-carbon generation makes the possibility of building new large reactors in liberalized electricity markets a lot more feasible than it was a few years ago. But without a regulated utility to rate-base the risk, a substantial commitment from the federal government to share in the cost and risk with developers would likely be necessary. Indeed, even in regulated cost-of-service markets, there does not appear to be a willingness from either utilities or state regulators to take on the risk of a large reactor without a substantial federal backstop. So in the absence of a workable consortium, a utility or other builder, or a federal backstop, firms like Google and Amazon have quite sensibly gone the small reactor route.
Are Small Advanced Reactors Really More Expensive?
There is a substantial cohort within the nuclear sector that believes that the economies of scale associated with large nuclear reactors cannot be beaten by the cost benefits of modularity, manufacturing, faster learning rates, simplified designs, and inherent safety features that smaller advanced reactors promise. But these arguments are largely based upon the economics of simply building smaller versions of large light water reactors, and they take the current regulatory paradigm for granted. They assume that smaller, non-light water reactors, which require far less radioactive material, operate at ambient atmospheric pressure, and often feature fuels that either can’t melt down or are far more resistant to doing so, will be subject to the same deterministic safety requirements that currently apply to large light water reactors, and hence will not be allowed to take full advantage of their safety characteristics.
The most recent experience that we have of the cost difference between large and small reactors comes from the two AP1000 reactors deployed in Georgia and the canceled UAMPS NuScale SMR project. The UAMPS project was canceled after re-baselined costs, produced as part of NuScale’s cost-sharing agreement with the Department of Energy, found that the estimated cost of the project had nearly doubled, from roughly $5 billion to $9 billion, for the 462MW project. At that cost, the LCOE of the project would have been about 25% higher than the cost of power from the Vogtle project. Notably, the LCOE for both projects, while well above the cost of either variable generation from wind and solar or unmitigated generation from natural gas plants, is well below the price point that firms like Google and Amazon are willing to pay for clean firm generation.
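Because overnight capital dominates nuclear LCOE, comparing cost per kilowatt is a crude but useful proxy for that 25% gap. A sketch, using the re-baselined UAMPS estimate and Vogtle’s widely reported final cost of roughly $35 billion (our figures, for illustration only):

```python
# Cost per kW as a rough proxy for the LCOE gap between the two projects.
# Assumed figures for illustration; capital cost dominates nuclear LCOE.
uamps_cost, uamps_mw = 9.0e9, 462      # re-baselined estimate, 6 x 77 MW modules
vogtle_cost, vogtle_mw = 35.0e9, 2234  # widely reported total, 2 x 1,117 MW units

uamps_per_kw = uamps_cost / (uamps_mw * 1000)     # ~$19,500/kW
vogtle_per_kw = vogtle_cost / (vogtle_mw * 1000)  # ~$15,700/kW
print(uamps_per_kw / vogtle_per_kw)               # ~1.24, i.e. ~25% higher
```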
The cost comparison between the Vogtle and NuScale projects is also not necessarily an accurate representation of the difference in cost between a large light water reactor and a small one. Vogtle is not a true first-of-a-kind AP1000 plant. The first AP1000 was actually deployed in China in 2018. Moreover, the primary reason that the construction cost of the NuScale project escalated so much was the huge run-up of commodity prices in the post-pandemic period. Had construction of the first-of-a-kind Vogtle plants been planned for the same time period as the UAMPS project, the Vogtle plants would have cost substantially more as well.
NuScale is also not a very good reference for the technologies that Google and Amazon are buying. It is, in most respects, a shrunk-down version of a large light water reactor. Like the AP1000, it incorporates a range of passive safety features, such as convective cooling, that make it safer and less dependent on mechanical systems than older conventional light water designs. But the result is that it is actually more material intensive than large light water reactors. Even so, it’s not clear that an apples-to-apples comparison would yield a significant cost difference once things like commodity prices are normalized. Insofar as economies of scale benefit large light water reactors like the AP1000, technological learning for SMRs like NuScale might easily achieve lower costs over the long term.
As noted above, we have a reasonable starting point for comparing the cost of the NuScale project with the Vogtle project because, although NuScale hadn’t begun construction at the time of its cancellation, it did have to make public its updated cost estimates, audited by the Department of Energy, because of the public cost share in the project. By contrast, efforts to compare the cost of large reactors like the AP1000 to small non-light water reactors are far more speculative, because nobody has built the latter and advanced nuclear developers have generally been unwilling to share much detail about projected costs.
One interesting recent study out of Colorado State University suggests that first-of-a-kind small non-light-water reactors are likely to cost less than large light-water reactors. The authors conduct a detailed bottom-up engineering analysis, using published design specifications for a General Atomics high-temperature gas reactor and an Integral Molten Salt Reactor, and find that these two small non-light-water designs have LCOEs roughly 7% lower than either large or small light water reactors. Of note, they compare the cost of first-of-a-kind small modular reactors to the average cost of large pressurized water reactors built in the 1980s that had no significant cost overruns. So while this approach fails to account for the likelihood that first-of-a-kind non-light-water builds will cost substantially more than detailed engineering estimates suggest, it also compares those builds against mature, nth-of-a-kind reactor builds that experienced no significant cost overruns, and still concludes that the small designs are likely to cost less.
A recent analysis of all available advanced nuclear cost estimates shows that the cost ranges for large and small modular reactors are essentially the same, with large reactors 1% more expensive at the top of the distribution and 8% less expensive at the low end. There was no statistically significant difference in the dataset between large reactors and non-light-water advanced reactors. Operating costs for small reactors are expected to be lower even though their fuel costs are higher. The report, which also provides the basis for the widely used ATB database of energy costs, cautions against inferring that SMRs are less competitive than large reactors.
The other concern that some energy experts and nuclear advocates have raised is that Google and Amazon may be unwittingly repeating the mistakes that the US nuclear industry made in the 1970s: failing to settle on a single standardized design, and hence losing out on the ability to bring costs down by learning to build the same design multiple times. But this concern is misplaced when applied to the prospect that multiple firms might commercialize different small reactor designs. The problem with the failure to standardize a single large reactor design is inherent to the economics of large reactors. At one gigawatt per reactor, you don’t get to build that many. Once you move to multiple commercialized large reactor designs, the opportunity for standardization and technological learning becomes very limited.
The same constraints do not apply to small reactors. Smaller advanced reactors are expected to have higher learning rates, and therefore faster cost reductions, than large LWRs. Google alone, with a 500MW initial commitment, one quarter of what would be necessary to support a new multi-unit AP1000 project, will buy power from at least four Kairos reactors. Amazon and its partners have committed to purchase roughly 320MW from four X-Energy reactors, with a follow-on commitment of 5GW that would ensure an order book for dozens of additional reactors. Notably, both designs utilize TRISO fuel, simplifying what is arguably the most complicated supply chain issue.
So while Google and Amazon have not settled on a single design to bring to market, they don’t need to in order to ensure that the advanced reactors they are betting on will have standardized designs that their developers build in multiples. And while small advanced reactors won’t benefit from the same economies of scale as large reactors, there are a range of other pathways they can take advantage of to lower costs and accelerate technological learning, including simpler designs and much faster economies of multiples.
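One way to see the force of the economies-of-multiples argument is with a standard experience-curve model, in which each doubling of cumulative units built cuts unit cost by a fixed learning rate. The learning rates and unit sizes below are illustrative assumptions, not estimates for any particular design:

```python
# Experience curve: cost of the n-th unit = first-unit cost * n^log2(1 - LR).
# Learning rates here are illustrative assumptions, not design-specific data.
from math import log2

def unit_cost(first_unit_cost, n, learning_rate):
    return first_unit_cost * n ** log2(1 - learning_rate)

gw = 10  # compare designs at the same total deployed capacity
large_units = round(gw / 1.117)  # ~1.1 GW large reactors -> 9 units
small_units = round(gw / 0.080)  # ~80 MW small reactors  -> 125 units

# Relative cost of the last unit built versus the first, per design.
print(unit_cost(1.0, large_units, 0.05))  # ~0.85 at an assumed 5% learning rate
print(unit_cost(1.0, small_units, 0.10))  # ~0.48 at an assumed 10% learning rate
```

At the same deployed capacity, the smaller design gets more than an order of magnitude more repetitions to learn from, which is the crux of the argument, even before assuming any difference in learning rates.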
In all of this, the future, of course, remains unwritten. A lot can go wrong well after detailed engineering-level estimates are completed, and we won’t know what any of these reactors cost until we’ve built a bunch of them. Moreover, the economic viability and cost trajectory of advanced non-light water reactors face substantially greater regulatory uncertainty. It remains entirely unclear, for instance, to what degree the Nuclear Regulatory Commission will allow advanced reactor developers to license designs that take full advantage of their safety characteristics. It doesn’t much matter that your reactor can’t melt down and operates at ambient atmospheric pressure if the NRC still requires additional safety measures that are not applicable to the technology.
Nor is there any clear picture of how the NRC will regulate the manufacturing of small reactors or whether it will reform certification requirements for reactor components, even those that are not critical to reactor safety. The cost-reducing benefits of manufacturing and modularity will be difficult to realize if manufacturers have limited ability to innovate on process or manage supply chains without undertaking license amendments that can drag on for months or years. It is for this reason that we have been far more aggressive on questions of regulatory reform and leadership at the NRC than most other nuclear advocates: the imperative for regulatory reform is far greater if there is going to be a future for small non-light water reactors than if the future of the industry is going to look much like its past.
With a reformed nuclear regulator, a diverse portfolio of developers and designs in the early stages of commercializing advanced reactors is as likely to be a strength as a weakness. And given the fragmented nature of the US electricity system, the predominance of liberalized electricity markets, and the limited number of available sites that could handle multiple new large reactors, that path seems no less plausible than more government-directed schemes to deploy a single standardized class of large reactors across the United States. The unique requirements and economics of AI data centers are a game-changing gift to advanced nuclear energy. Nuclear advocates would be unwise to look this gift horse too closely in the mouth.