The NRC’s Problem Isn’t Caution. It’s Architecture.
What Will It Take for the NRC to Meet Its Statutory Obligations?
By Adam Stein

Nuclear regulation is often discussed as a problem of temperament. Critics argue that the Nuclear Regulatory Commission is excessively cautious. Defenders reply that caution is precisely the point. Both sides miss what actually matters.
The NRC’s difficulties are not about how much risk the agency tolerates. They are about how the commission is structured and how decisions are made. The Commission is operating with an institutional architecture designed to optimize a single objective—safety—while being asked to govern a system where multiple objectives now matter at the same time.
This is why debates over whether the NRC is “too strict” or “too slow” rarely go anywhere. From the outside, the agency can appear either prudent or obstructive, depending on one’s pre-existing beliefs. From the inside, however, a different picture emerges: a regulator that has become increasingly effective at refining its tools, while steadily losing the ability to integrate them into decisions that serve its full statutory mission: to regulate nuclear power safely, and in a way that promotes the fullest benefit to society.
What makes this hard to see is that the failure mode is not obvious. There is no single bad rule, no dramatic lapse in safety, no clear villain. The problems only become visible when you trace how decisions have been made over time, how metrics have hardened into anchors, and how disagreement accumulates into delay.
Those patterns are not unique to nuclear regulation. They recur across modern governance wherever high stakes, deep uncertainty, and asymmetric accountability collide. Nuclear power simply provides an unusually clear case study.
A System Optimized to Deadlock
The NRC’s internal dysfunction is measurable. Since roughly 2016, Commission voting timelines have lengthened dramatically, variance has increased, and procedural timeliness goals are now met in only a small minority of cases. These delays are driven primarily by prolonged vote completion and incompatible decision criteria at the Commission level.
This is not a failure of diligence or expertise. It is a coordination failure—a situation where reasonable individual decisions cannot be aligned into a coherent outcome. When institutions face multiple legitimate goals but lack a way to weigh them explicitly, delay becomes the default. Once safety goals hardened into numerical anchors and benefits fell outside the formal decision frame, disagreement no longer took the form of substantive debate—it took the form of delay.
The Commission’s structure gives each member substantial power to slow or block outcomes, even when staff analysis is complete and other commissioners are prepared to move forward.
Empirical analysis of Commission voting timelines shows that smaller commissions can be less efficient, not more efficient. The Commission is composed of five members, with a minimum of three required for a quorum. In a three‑member configuration, a single commissioner can withhold a vote indefinitely, stopping the agency’s decision-making process without ever casting a dissent. Delay, in these cases, is not the product of collective disagreement but of unilateral veto power embedded in process design.
Institutional architecture shapes incentives. When withholding a vote carries little reputational or procedural cost, delay becomes a low‑risk way to signal caution, express disagreement, or advance external priorities. While the NRC is formally independent, commissioners are political appointees, and partisan behavior does occasionally surface. The existing framework does little to discipline or channel these dynamics productively.
The point is not to assign blame to individuals, but to recognize that the Commission’s design amplifies individual behavior rather than constraining it. Commissioners are part of the problem—but they are not the problem.
The Metric Trap, Applied to Safety
For most of its history, the NRC avoided coordination problems by implicitly treating safety as the only admissible objective. That choice was understandable. Safety is measurable. It is defensible. It produces clear proxies: dose limits, design‑basis accidents, probabilistic risk thresholds.
But proxies have a dark side. When a metric becomes a target, it ceases to be a good measure of what we actually care about. By optimizing narrowly for safety and estimated risk reduction, the NRC disconnected its regulatory oversight from broader welfare outcomes. In exchange for exquisitely safe nuclear systems, the NRC produced an economically brittle, legally unpredictable, and socially contested regulatory environment for American nuclear power.
This is not unique to nuclear regulation. It is the same pathology that produces hospitals optimized for bed utilization with no surge capacity, supply chains optimized for efficiency with no resilience, and infrastructure projects optimized for compliance rather than completion.
For decades, the NRC became extraordinarily good at answering one question—“Is this safe enough?”—and institutionally incapable of answering another—“Does this regulatory choice serve the general welfare better than the alternatives?”
Faced with this dysfunction, the instinct is to reach for better tools: more granular risk models, more sophisticated probabilistic analysis, more precise metrics. That instinct is wrong.
The problem is not that the NRC’s models are insufficiently detailed. It is that neither the process of regulating nuclear power nor the Commission’s decisions can be reduced to a single dimension without smuggling in unacknowledged value judgments. No model can tell you how to weigh marginal risk reductions for already-safe systems against predictability, cost, climate benefits, or institutional credibility. Those are normative trade‑offs. Pretending they are purely technical only drives them underground, where they re‑emerge as delay, conflict, and procedural breakdown.
This is why nuclear waste policy has remained stuck for decades. It is not an unsolved engineering problem; it is a wicked problem—one where values, institutions, and time horizons are inseparable, and where attempts at technical finality reliably provoke resistance rather than resolution.
The same structure applies to advanced reactor licensing, environmental review, and regulatory modernization.
Why Expertise Didn’t Self-Correct
At this point, a natural question arises: if these distortions were as real and consequential as they appear, why didn’t the system correct itself?
The answer is not ignorance or bad faith. It is how expertise functions inside highly constrained institutions.
The NRC is staffed by some of the most technically sophisticated safety analysts in the world. Their models are generally rigorous and internally consistent. Over time, however, those models became substitutes for judgment rather than inputs to it. Confidence in analytical outputs grew faster than the evidentiary basis for applying them at the margins.
This is a familiar failure mode in complex systems. Experts excel at explaining how systems work, but are less reliable at judging how much confidence their models deserve when feedback is slow and counterfactuals are invisible. In nuclear regulation, the professional cost of excessive caution is low and easily defended, while the costs of that conservatism fall elsewhere: they are diffuse and largely externalized onto applicants, ratepayers, and the climate.
The result is disciplined overconfidence: increasingly precise answers to increasingly ill‑posed questions. Expertise did not fail the NRC; it did exactly what the institution rewarded it for doing.
How the Pieces Fit Together
Seen in isolation, these failures look manageable. Taken together, they explain why the NRC has become stuck.
The commissioners were deadlocking because they were optimizing for different objectives using incompatible frameworks. But there was a deeper problem.
For seventy years, the NRC read its statutory mandate narrowly. The Atomic Energy Act of 1954 directed the agency to ensure that atomic energy would make the maximum contribution to the general welfare. In practice, however, the Commission interpreted “general welfare” almost exclusively as preventing accidents. Because the NRC could not “promote” nuclear energy, it reasoned, it could not systematically consider the benefits of the technologies it regulated.
This was not irrational. It was path‑dependent institutional behavior. The agency focused on preventing the last disaster (the survivor’s trap). It anchored on early framings developed under severe informational constraints. It optimized subsystems for measurable safety proxies while ignoring system‑level outcomes. Over time, those choices reinforced one another.
The result was an agency that optimized well for a single objective, while the broader mission quietly atrophied. After decades of refinement, the NRC had perfected a framework that could answer whether a nuclear reactor was safer than before, but not whether its regulation advanced or hindered that reactor’s benefits to society.
Correcting the Mandate
The Atomic Energy Act of 1954 directs the Commission to ensure that civilian nuclear activities make the maximum contribution to the general welfare. For decades, however, the NRC treated this language as effectively synonymous with accident prevention. Benefits to society were not rejected explicitly at first; they were excluded implicitly, on the theory that considering benefits would constitute impermissible promotion.
The consequences of this narrow reading are measurable and material. Commission voting timelines revealed growing regulatory entropy. Delay and unpredictability translated into foregone clean‑energy deployment, higher system costs, greater exposure to fuel‑price risk, and lost climate and reliability benefits.
The ADVANCE Act of 2024 resolved this problem directly. Congress made explicit that the NRC must consider benefits to society from nuclear energy and regulatory efficiency in its major decisions.
But after the ADVANCE Act, the shift was not automatic. The NRC’s Office of General Counsel argued that the Commission still lacked authority to consider benefits in its regulatory decisions. That position reflected how deeply the exclusion of benefits to society had become ingrained. It ultimately required an explicit Commission vote to revise the agency’s mission statement to include benefits to society and regulatory efficiency—underscoring that even clear statutory direction was not, by itself, sufficient to dislodge decades of institutional anchoring.
Updating the NRC’s mission to explicitly include benefits and efficiency was a critical step toward reforming the Commission, and the single most important reform available to policymakers. The updated mission has not immediately sped licensing or settled individual disputes, but it has begun a reckoning with the agency’s underlying decision architecture.
That reckoning is the pivot point for what follows. Making benefits to society and regulatory efficiency part of the mission does not resolve trade-offs; it exposes them. It shifts the problem from one of legal interpretation to one of institutional capability—raising the harder question of how an agency built to optimize a single objective can actually govern multiple legitimate objectives at once.
Closing the Mandate–Capacity Gap
Legislative correction, however, does not automatically produce operational capacity. Institutions can be directed to do new things long before they are equipped to do them.
For most of its history, the NRC built analytical processes, training, and decision norms around a single objective: safety as defined by compliance with measurable proxies. Introducing an explicit requirement to consider benefits to society changed the objective structure of decisions without changing the machinery used to make them.
This created a mandate–capacity gap. The instruction to consider benefits is clear, but the agency lacks a systematic process for weighing benefits against risks, efficiency, predictability, and other legitimate objectives in real regulatory decisions. In such conditions, benefits tend to be acknowledged rhetorically but excluded operationally.
The difficulty is structural. Benefits to society are intangible, value‑laden, and context‑dependent; weighing them demands a range of views, which is one reason a commission structure exists in the first place. They cannot be treated as just another quantitative input alongside dose or probability without distorting their meaning. Without new decision architecture, adding benefits increases analytical burden without improving decisions.
Addressing the mandate–capacity gap requires a change in decision architecture, not a change in standards or personnel.
The core structural insight is to separate non‑negotiable thresholds from compensatory trade‑offs. Safety, security, and legal compliance function as gates: options that fail to meet them do not proceed. Once those thresholds are satisfied, remaining options can be evaluated across multiple objectives, including benefits to society, efficiency, predictability, and long‑term system impacts.
This structure preserves safety while making trade‑offs explicit rather than implicit. It prevents false safety‑versus‑benefits framing by ensuring that safety is never something to be optimized away. At the same time, it creates a disciplined way to compare options that all meet safety requirements but differ meaningfully in their broader consequences.
Such frameworks are designed for conditions of deep uncertainty and legitimate disagreement. They do not dictate values or produce single “correct” answers. Instead, they surface where disagreements lie, clarify how different value weightings affect outcomes, and allow institutions to learn across decisions rather than relitigating first principles each time.
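The gate-then-trade-off structure described above can be made concrete with a minimal sketch. This is an illustration of the general pattern, not the NRC’s actual process; the option names, objectives, scores, and weights below are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Option:
    """A candidate regulatory option under evaluation (hypothetical)."""
    name: str
    meets_safety: bool      # non-negotiable gate
    meets_security: bool    # non-negotiable gate
    meets_legal: bool       # non-negotiable gate
    scores: dict = field(default_factory=dict)  # objective -> score in [0, 1]

def evaluate(options, weights):
    """Apply the gates first, then rank survivors by weighted multi-objective score.

    Options failing any threshold are excluded outright: no benefit score,
    however high, can compensate for a failed gate.
    """
    gated = [o for o in options if o.meets_safety and o.meets_security and o.meets_legal]
    return sorted(
        gated,
        key=lambda o: sum(w * o.scores.get(k, 0.0) for k, w in weights.items()),
        reverse=True,
    )

options = [
    Option("Rule A", True, True, True, {"benefit": 0.8, "efficiency": 0.4, "predictability": 0.7}),
    Option("Rule B", True, True, True, {"benefit": 0.5, "efficiency": 0.9, "predictability": 0.6}),
    Option("Rule C", False, True, True, {"benefit": 0.95, "efficiency": 0.9, "predictability": 0.9}),
]
weights = {"benefit": 0.5, "efficiency": 0.2, "predictability": 0.3}

# Rule C is excluded at the safety gate regardless of its scores;
# the A-versus-B ranking depends on the (explicit, auditable) weights.
print([o.name for o in evaluate(options, weights)])
```

The design choice the sketch encodes is the key one: safety is a constraint, never a term in the objective function, so it can never be traded away; and the weights on compensatory objectives are explicit, which is precisely what makes disagreement among decision-makers visible and comparable rather than implicit in delay.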
The Real Reform Question
The meaningful question facing the NRC after the ADVANCE Act is not whether it should consider benefits to society. Congress has already answered that. The question is whether the agency will build decision architecture capable of doing so without collapsing into gridlock or performative box‑checking.
Simply adding “benefits to society” as another column in a staff analysis will fail. Treating those benefits as something to be optimized directly against safety will fail. Asking commissioners to intuitively balance incommensurable objectives without structure will fail—as it already has.
What is required instead is an architectural shift: separating non‑negotiable thresholds—safety, security, legal compliance—from compensatory trade‑offs, and then evaluating policy options explicitly across multiple objectives, with transparency about where commissioners legitimately disagree.
Multi‑criteria decision frameworks, long used in other high‑consequence public sectors, are designed for exactly this class of problem: deep uncertainty, multiple objectives, and durable disagreement. Properly implemented, they do not dictate values. They make value judgments explicit, auditable, and comparable—something the NRC currently lacks.
Why This Moment Is Different
For decades, the NRC could avoid this reckoning by interpreting its mandate narrowly and letting safety subsume everything else. That era is over. The law has changed. The mission statement has changed. What has not yet changed is the machinery of decision‑making.
If the NRC responds by tightening metrics, refining models, or adding procedural layers without changing architecture, it will remain stuck—only now with statutory noncompliance layered on top of institutional paralysis.
If it instead treats the ADVANCE Act as what it actually is—a forced confrontation with seventy years of single‑objective optimization—the NRC has a chance to become something rare: a regulator that is both uncompromising on safety and competent at governing trade‑offs in a complex world.
That outcome will not come from better technology. It will come from better decision design. But new architectures must run alongside existing hierarchies during transition.
The stakes extend well beyond the NRC itself. As countries around the world look to deploy advanced nuclear technologies, many are not building regulatory systems from scratch. They are looking to import a ready-made paradigm—often modeled, explicitly or implicitly, on the U.S. approach. If the NRC exports a framework optimized for internal consistency but incapable of integrating safety, benefits, and efficiency, those pathologies will propagate internationally.
Fixing the architecture now matters not because the NRC has failed at safety, but because it has succeeded so thoroughly at one objective that it lost sight of the system it was meant to serve.

