The Multinational Monitor

MAY 1986 - VOLUME 7 - NUMBER 9


T H E   N U C L E A R   Q U A G M I R E

Unmasking the Myth Brokers

The Real Cost of Nuclear Power

by Charles Komanoff

The economic fortunes of nuclear power in the United States have steadily worsened over the past ten years, going from promising, to marginal, to calamitous. Out of the new generation of U.S. nuclear power plants--the fifty-odd reactors finished, cancelled, or still under construction after 1982--no more than a handful will produce economical electricity. The cost for the remaining plants may reach $100 billion, and utility investors, customers, and regulators are already warring over who will pay.

This massive waste occurred in the United States, which prides itself on efficient capital allocation, because the cost-estimation process of the nuclear industry and the federal government in the 1970s and 1980s systematically confused "expectation with fact," as one observer has described it, and steadfastly denied adverse cost experience.

The skyrocketing cost of nuclear power in the United States is rooted in the astonishing increase in reactor "capital costs"--construction costs plus related financing costs. Nuclear plants being finished today are costing fifteen to twenty times as much as reactors built in the early seventies: $3 billion to build a typical thousand-megawatt reactor now, versus $150 to $200 million then. The real increase, with inflation factored out, is about six-fold.
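
The arithmetic behind the nominal-versus-real distinction can be sketched in a few lines of Python. The cost figures below are the article's; the price-level ratio is an assumption supplied only for illustration.

    # Illustrative sketch: a 15- to 20-fold nominal cost increase becomes
    # a roughly six-fold real increase once general inflation is removed.
    # The price-level ratio is an assumed value, not the article's.
    early_70s_cost = 175e6       # ~$150-200 million per 1,000-MW reactor, early 1970s
    mid_80s_cost = 3e9           # ~$3 billion per 1,000-MW reactor today
    price_level_ratio = 2.8      # assumed rise in the general price level since the early 1970s

    nominal_increase = mid_80s_cost / early_70s_cost
    real_increase = nominal_increase / price_level_ratio
    print(f"nominal increase: about {nominal_increase:.0f}-fold")   # ~17-fold
    print(f"real increase:    about {real_increase:.1f}-fold")      # ~6-fold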

As the price of nuclear power began rising in the seventies, predicting capital costs became central to the debate over the economics of building new plants. In this debate, the prime benchmark for nuclear costs was coal-fired electricity--a pragmatic standard, yet a myopic one. After all, coal-fired plants already produced half of the nation's power, and limits to domestic oil and gas resources made coal the only non-nuclear fuel suitable for new, conventional central-station generating plants. Nuclear/coal comparisons were also simple and familiar to utility planners. But by insisting that large, central generators were the only viable means of meeting electricity requirements, the power industry effectively ignored alternatives such as cogeneration, renewables, and improved energy efficiency. Today, after considerable advocacy and development of new analytical tools by environmental organizations, the alternatives have moved to center stage in informed electricity policy-making.

While the comparison of nuclear and coal generating costs hinges on capital costs, it must also subsume costs for fuel--which is nuclear's strong suit--as well as maintenance, repairs, outages, and decommissioning. It is helpful to distill assumptions about these costs into a "break-even" capital-cost ratio, denoting how large a capital cost handicap nuclear plants can offset through fuel savings. Although geographic differences in coal prices mean break-even ratios vary, a "national average" is useful. Depending on the analyst's view of cost factors such as the nuclear plant's capacity factor and the coal plant's fuel cost, the break-even capital cost ratio has generally stood between 1.2 and 1.4, meaning that nuclear plants could stand a 20 to 40 percent higher capital cost than coal plants and still end up equal in lifetime generating costs.
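
How such a break-even ratio falls out of a lifetime cost comparison can be shown with a small sketch. All of the per-kilowatt-hour figures below are assumptions chosen only to illustrate the calculation; they are not the article's numbers.

    # Hypothetical levelized costs, in cents per kWh. The break-even capital
    # cost ratio is the nuclear/coal capital cost ratio at which lifetime
    # generating costs come out equal.
    coal_capital = 2.0     # assumed coal capital charge
    coal_fuel_om = 2.0     # assumed coal fuel plus operations and maintenance
    nuc_fuel_om = 1.4      # assumed nuclear fuel plus O&M (fuel is nuclear's strong suit)

    # Nuclear can carry extra capital charges equal to its fuel and O&M savings.
    nuc_break_even_capital = coal_capital + (coal_fuel_om - nuc_fuel_om)
    print(f"break-even ratio: {nuc_break_even_capital / coal_capital:.2f}")   # 1.30 here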

In what became a widely quoted statement, Harvard Business School Professor I.C. Bupp wrote in 1978, "Systematic confusion of expectation with fact, of hope with reality, has been the most characteristic feature of the entire 30-year effort to develop nuclear power. ... The distinction between empirically supported fact and expectation was blurred from the beginning in the discussion of nuclear power economics.... [W]hat was missing ... was independent analysis of actual cost experience."

Bupp was decrying the failure of nuclear promoters to reconcile their estimates of future plant costs with empirical data, or even to distinguish between the two. This failure persists even today. Only four comprehensive studies of U.S. nuclear capital cost experience have been published. Three, produced independently of the nuclear industry, found that cost trends were running against nuclear power; the fourth, sponsored by nuclear utilities, blurred the conclusions.

The three independent studies measured nuclear and coal capital cost trends from the costs of completed plants. Bupp's Harvard-M.I.T. team found in 1974-75 that nuclear capital costs had increased two to three times faster than coal plant capital costs between the late 1960s and mid-1970s. William Mooz of the Rand Corporation demonstrated that the sharp increases in real reactor costs in the first half of the 1970s continued with no letup in the second half. A third study, conducted between 1979 and 1981, found that the average ratio between nuclear and coal capital costs increased from just over 1.0 at the start of the 1970s to over 1.5 at the end of that decade, even counting expensive pollution control devices such as sulfur dioxide scrubbers in the later coal plant costs. It also identified divergent nuclear and coal regulatory trends portending that the capital cost ratio would climb much higher, to a range of 1.75 to 2.65 for plants finished in the mid- to late 1980s.

Empirical data are validating this forecast. The nuclear/coal capital cost ratio for plants built in the 1980s is averaging between 2.0 and 2.5, far beyond the 1.2 to 1.4 break-even range--the capital cost ratio at which nuclear and coal generating costs are equal.

Nuclear power interests haven't so much rebutted these empirical analyses as ignored them, partly by sending up a smoke screen of studies favorable to nuclear power. These studies have been largely of two types: compilations of current coal and nuclear generating costs, with the sample of plants selected to put the best possible face on nuclear power; and "engineering estimates" that disregarded mounting regulatory and construction problems in forecasting reactor costs.

The Atomic Industrial Forum Surveys

The annual surveys of the U.S. nuclear trade group, the Atomic Industrial Forum (AIF), purport to measure the current cost of electricity from nuclear, coal, and oil plants. Until recently, they obligingly found nuclear-generated electricity to be cheapest. For example, the AIF survey of 1978 costs put the average generating cost--fuel, operations, and capital charges--at 1.5 cents per kilowatt-hour (kwh) for the nation's nuclear plants and 2.3 cents/kwh for coal plants. These figures, and similar ones for other years, were disseminated by the AIF, the Edison Electric Institute, and many utilities, and were widely reported by the press.

Two objections were raised to these surveys. First, they obscured emerging cost trends by lumping together economical early reactors with uneconomical later ones. Worse, although the surveys professed to be comprehensive, the samples were badly skewed to favor nuclear power. Twelve of the fourteen most expensive U.S. nuclear plants were omitted from the 1977 and 1978 surveys, as publicity-shy owners of expensive reactors withheld data. In addition, owing to a survey convention that limited the comparison to utilities with nuclear plants, only a small fraction of U.S. coal plants were included, and these tended to be above average in cost, since utilities build fewer reactors where coal is cheaper.

After considerable criticism, the AIF broadened its nuclear samples but not its samples of coal plants. However, in order to elicit cost data for expensive reactors, the surveys stopped grouping costs by utility. This precludes checking individual plant data-an important limitation in light of arithmetic inconsistencies in some AIF surveys and the inherent complexity of much of the cost data. Indeed, the current surveys are mere one-page press handouts, yet the AIF and other nuclear promoters represent them as in-depth evaluations of comparative costs.

The Department of Energy Estimates

For all their biases, the AIF surveys have not been nuclear promoters' most insidious misrepresentations of reactor costs. That distinction falls to a fifteen-year series of reports prepared for the Atomic Energy Commission--and its successor, the Department of Energy (DOE)--by Philadelphia-based United Engineers & Constructors (UE&C). These reports have formed the basis of the federal government's pronouncements on the economics of nuclear power since 1968, and they have had a profound influence on U.S. energy policy and utility investments.

The DOE reports rely on a procedure known as engineering estimation to predict future nuclear and coal plant costs. This technique first develops a conceptual plant design, then calculates the labor, materials, and equipment needed to fulfill the design, and finally applies estimates of wage rates and material costs to compute the total charges. Contingency allowances, typically 10 to 15 percent, are added to cover new safety criteria, strikes, delivery delays, or other problems likely to crop up in big construction projects.
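
A bare-bones sketch of this bottom-up procedure, with entirely hypothetical quantities and unit costs, looks like this:

    # Engineering estimation in miniature: design quantities times unit
    # costs, plus a 10-15 percent contingency. All inputs are hypothetical.
    labor_hours = 20e6          # assumed craft labor hours for the conceptual design
    wage_rate = 18.0            # assumed average wage, dollars per hour
    materials = 400e6           # assumed bulk materials, dollars
    equipment = 500e6           # assumed major equipment, dollars
    contingency = 0.12          # within the typical 10-15 percent range

    base = labor_hours * wage_rate + materials + equipment
    estimate = base * (1 + contingency)
    print(f"estimate: ${estimate / 1e9:.2f} billion, excluding financing")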

Engineering estimation has failed spectacularly at nuclear plants. Since the early 1970s, inflation-adjusted capital costs of new plants have risen an average of 14 percent each year, consuming annually what was intended to be a contingency allowance for a project's whole lifetime. The root cause has been new and more stringently applied safety requirements that have expanded the scope of projects during construction, together with failure to manage construction to accommodate the increased stringency. Yet the DOE's hindsight has been no better than its foresight. The department has never retrospectively compared its nuclear cost forecasts to actual reactor costs, or otherwise acknowledged the persistent gap between its estimates and reality. Instead, DOE analysts endlessly fine-tuned their elaborate computer model that varied capital costs according to almost every conceivable assumption--geographical location, cooling tower type, turbine configuration, etc.--except for the conditions that were driving reactor costs sky-high: unstable regulatory requirements, changing designs, and lack of construction management--conditions well documented elsewhere by DOE's own cost contractors. Using this model, the DOE estimated in 1977 that reactors completed in 1986 would cost a mere $1.1 billion per thousand megawatts of capacity--half of what 1985-1987 plants were actually costing.
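
The mismatch between a one-time contingency and 14 percent annual real escalation is easy to see. Starting from the DOE's 1977 figure and assuming, purely for illustration, five years of unanticipated escalation at the observed rate:

    # Compounding the observed 14 percent annual real escalation quickly
    # overwhelms a 10-15 percent one-time contingency. The five-year span
    # is an assumption for illustration.
    doe_1977_estimate = 1.1e9       # DOE's 1977 estimate per 1,000 MW, for 1986 completion
    escalation = 0.14               # average annual real escalation since the early 1970s
    years = 5

    escalated = doe_1977_estimate * (1 + escalation) ** years
    print(f"after {years} years at 14%/yr: ${escalated / 1e9:.1f} billion")
    # roughly $2.1 billion -- about twice the estimate, consistent with what
    # 1985-1987 plants were actually costing.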

The DOE reports also overstated coal capital costs, contributing further to inaccurate perceptions of nuclear power's competitiveness. For most of the 1970s the DOE and other nuclear proponents assumed that the costs of sulfur dioxide scrubbers and other pollution controls needed at new coal plants would match those of growing nuclear safety requirements, keeping future coal plant costs close to those of new reactors. In fact, nuclear safety rules proved far costlier than coal emissions controls, and the average ratio of completed nuclear to coal capital costs grew from 1.05 in 1971 to 1.5 in 1978, even with scrubbers.

DOE's most egregious misestimate of the relative nuclear/coal capital cost came in 1980, when it doubled its 1978 nuclear and coal cost forecasts. This was appropriate for nuclear plants, which faced regulatory impacts from the 1979 Three Mile Island (TMI) accident, along with record inflation and interest rates. Yet coal plants faced no new regulatory constraints that weren't reflected in the DOE's 1978 forecasts. Despite empirical evidence to the contrary, nuclear promoters continued to insist that the regulatory burden and capital cost escalation were equally severe for coal and nuclear power.

The 1980 report, with a future ratio of nuclear to coal capital costs no greater than the 1.5 to 1 ratio for plants completed in 1978, was especially influential. Utilities and the DOE used it to reassure wavering utility regulators and investors that besieged reactor construction ventures were still worth completing. These assurances have since proven hollow for investors whose capital is at risk in several dozen expensive reactor projects, and for regulators who are now walking the edge between utility insolvency and sharp rate increases.

The DOE's 1982 estimates implied a nuclear/coal capital cost ratio of 1.6 to 1.7, which also lagged far behind actual cost experience. Yet these estimates remain the basis of the DOE's conclusion that nuclear plants ordered in the 1980s and finished in the 1990s will be competitive with coal. In fact, reactors being completed in the 1980s are averaging at least twice the capital costs of new coal plants and will probably average between 70 and 80 percent higher lifetime generating costs. Nevertheless, the DOE's optimistic conclusion is widely cited, particularly in international evaluations of nuclear power, as an authoritative portrait of relative nuclear/coal costs in the United States.

The DOE's estimator, United Engineers, for its part, has begun--a decade late--to back away from its insistence that nuclear power's precipitous capital cost escalation is no different from that for coal. "Recent information on material and labor requirements," admitted UE&C's chief cost estimator in 1982, "indicate[s] that ... a higher rate of cost increase may be appropriate for nuclear plants."

Throughout the late seventies and early eighties, while rising costs and the TMI accident were prompting outsiders to look critically at reactor economics, the power industry continued to rehash its engineering estimates of capital costs. These estimates were divorced from nuclear power's deep-seated regulatory and construction problems and oblivious to the widening nuclear-coal gulf. The world's leading reactor constructor, Bechtel, projected a 1.21 nuclear/coal capital cost ratio just before TMI, increasing only to 1.25 afterward. The Committee on Nuclear and Alternative Energy Systems of the National Academy of Sciences, a senior nuclear-industry panel cast as impartial "scientists," predicted a range of ratios between 1.0 and 1.25. As recently as 1982, the year the Wall Street Journal coined the expression "rate shock" to describe the cost impacts of new nuclear plants, the architectural and engineering firm Sargent & Lundy was still forecasting a capital cost ratio under 1.5.

Many of these sources seized on shortening reactor construction periods as the means to control costs. Yet most of these savings are illusory, for they are won by making ratepayers pay for the power plant sooner.
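
A crude illustration, with assumed figures, of why a shorter schedule mostly shifts costs rather than eliminating them:

    # Assumed numbers: the book-cost "saving" from a shorter schedule is
    # largely interest that stops accruing during construction because the
    # plant enters customers' rates sooner.
    overnight_cost = 1.5e9      # assumed direct construction cost, constant dollars
    finance_rate = 0.10         # assumed annual financing (interest) rate

    def capitalized_cost(construction_years):
        # crude model: direct cost centered at mid-construction, with
        # carrying charges accruing from then until commercial operation
        return overnight_cost * (1 + finance_rate) ** (construction_years / 2)

    print(f"8-year schedule: ${capitalized_cost(8) / 1e9:.2f} billion at completion")
    print(f"6-year schedule: ${capitalized_cost(6) / 1e9:.2f} billion at completion")
    # The roughly $0.2 billion difference is carrying charges that ratepayers
    # instead begin paying, in their bills, two years earlier.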

Commonwealth Edison's Nuclear Program

Chicago-based Commonwealth Edison, the nation's largest reactor operator and builder, is also the industry's most vocal champion of nuclear power. A 1978 Edison article in Science touting the roughly 40 percent savings over coal for the company's six nuclear units helped shore up support for nuclear power in the academic and scientific communities. However, the comparison was biased by Edison's excess capacity, which leads it to operate its coal units part-time, thereby inflating their per-unit fixed costs. The remaining savings were also peculiar to Edison insofar as its nuclear units cost only half as much as the U.S. average, while the company's coal fuel costs were well above those for most other utilities. The article's cost predictions for Edison's 1980s nuclear units were also strikingly inaccurate, under-predicting their real costs by at least a third.
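
The effect of part-time operation on per-unit fixed costs is simple arithmetic. With assumed figures:

    # Spreading the same annual fixed charge over fewer kilowatt-hours
    # inflates a part-time coal unit's apparent cost. Figures are assumed.
    annual_fixed_charge = 120e6     # assumed capital plus fixed O&M, dollars per year
    capacity_kw = 1_000_000         # a 1,000-MW unit
    hours_per_year = 8760

    for capacity_factor in (0.70, 0.40):    # full use versus part-time operation
        kwh = capacity_kw * hours_per_year * capacity_factor
        cents = annual_fixed_charge / kwh * 100
        print(f"capacity factor {capacity_factor:.0%}: {cents:.1f} cents/kWh in fixed costs")
    # 70% -> about 2.0 cents/kWh; 40% -> about 3.4 cents/kWh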

Developing more rounds of nuclear capital cost estimates and comparisons seems pointless until nuclear power's institutional and technical status can be clarified so that cost estimates may be grounded in reality. Nevertheless, because nuclear power boosters show no inclination to wait, the following guidelines for future efforts in nuclear power economics are offered.

First, nuclear and coal costs must be weighed not only against each other but against the full spectrum of available electricity resources. This includes cogeneration, renewables, and improved energy efficiency. Orders for cogeneration and renewable capacity have surpassed those for nuclear and coal since 1982, and gains from improved efficiency, while harder to measure, have almost certainly been greater.

Second, capital cost data must be expressed in constant dollars, with financing costs added separately. The nominal, "as spent" dollars in which utilities report nuclear and coal plant costs are a mishmash of interest charges, inflation effects, and real costs--having accounting but not economic meaning. To distill cost trends or draw meaningful comparisons from such data, analysts must first convert them to constant terms.
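
A minimal sketch of the conversion, using hypothetical yearly expenditures and a hypothetical price index:

    # Deflate nominal "as spent" expenditures to constant dollars before
    # analyzing trends. Spending figures and index values are hypothetical.
    spending_by_year = {1978: 200e6, 1979: 350e6, 1980: 500e6, 1981: 450e6}
    price_index = {1978: 0.72, 1979: 0.78, 1980: 0.86, 1981: 0.94}   # 1982 = 1.00

    as_spent_total = sum(spending_by_year.values())
    constant_total = sum(spend / price_index[year]
                         for year, spend in spending_by_year.items())
    print(f"as-spent total:        ${as_spent_total / 1e6:.0f} million")
    print(f"constant-1982$ total:  ${constant_total / 1e6:.0f} million")
    # Interest during construction would be reported separately, not folded in.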

Third, nuclear cost predictions should be benchmarked against empirical data (in constant dollars). While each construction project may appear unique to its builders, insightful statistical analysis of industrywide costs has proven successful at extracting cost trends, which can serve as baselines for evaluating future cost estimates. This is true for coal as well as nuclear plants.

Fourth, rising costs for reactor operations, maintenance, and improvements must be reflected in forecasts of future nuclear "life-cycle" costs. Operations and maintenance costs and capital additions have also been affected by increased reactor complexity and regulatory stringency, and have grown almost as fast as reactor capital costs since the early seventies. They now average $75 million per year for each thousand megawatts of nuclear capacity, or three to four percent of original construction costs annually for new plants. Realistic assessments of operations and maintenance costs and capital additions are particularly important for weighing the economics of completed or nearly completed reactors, for which most capital costs have already been expended. Yet in shameless repetition of previous mistakes, most nuclear advocates dismiss past operation and maintenance costs and capital additions as a product of "one-time events" and a bygone era of regulatory change, and they project future costs to be half the present levels.
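
Expressed per kilowatt-hour, the $75 million figure is substantial. Assuming, for illustration, a 60 percent capacity factor:

    # The article's $75 million per year per 1,000 MW of O&M and capital
    # additions, expressed per kWh. The 60 percent capacity factor is an assumption.
    annual_om_and_additions = 75e6
    kwh_per_year = 1_000_000 * 8760 * 0.60      # 1,000 MW at a 60% capacity factor
    print(f"{annual_om_and_additions / kwh_per_year * 100:.1f} cents/kWh")   # ~1.4 cents/kWh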

Fifth, differences between estimated future costs and recent cost experience--especially projections that future reactors will cost much less than those just finished--must be fully justified. In particular, expectations that reduced "regulatory turbulence" will cut reactor costs must be fleshed out with analyses of the nature and cost of regulatory impacts.

The same applies to cost savings that nuclear advocates ascribe to new technologies and standardized designs. Increased use of microprocessors and robotics, and reductions in custom engineering, could conceivably mitigate problems in nuclear (and coal) design, construction, and operation in such areas as accident analysis, regulatory evaluation, quality documentation, hazardous repair, and so on. But conjecture about such possibilities is not an acceptable way to estimate the savings, especially given nuclear power's long trail of engineering misestimates. The nuclear industry will win a hearing on its hopes for competitive future reactors only by acknowledging the appalling level of current costs and explaining why its future will differ from its past.


Charles Komanoff is director of Komanoff Energy Associates in New York and has authored three books and many reports on nuclear power. A version of this article appeared in the Fall 1985 New England Journal of Public Policy, a publication of the John McCormack Institute of Public Affairs at the University of Massachusetts-Boston.


U.S. Nuclear Reactors Cancelled Since 1980

Year Cancelled   Project             Capacity (MW)   Utility
1980             Davis-Besse 2           906         Toledo Edison
                 Davis-Besse 3           906         Toledo Edison
                 Erie 1                1,267         Toledo Edison
                 Erie 2                1,267         Toledo Edison
                 Jamesport 1           1,150         Long Island Lighting
                 Jamesport 2           1,150         Long Island Lighting
                 New Haven 1           1,250         New York State Electric and Gas
                 New Haven 2           1,250         New York State Electric and Gas
                 Sterling              1,150         Rochester Gas and Electric
                 Haven 1                 900         Wisconsin Electric Power
                 Greenwood 2           1,264         Detroit Edison
                 Greenwood 3           1,264         Detroit Edison
                 Forked River 1        1,070         Jersey Central Power and Light
                 North Anna 4            907         Virginia Electric Power
                 Montague 1            1,150         Northeast Utilities
                 Montague 2            1,150         Northeast Utilities
1981             Bailey                  644         Northern Indiana Public Service
                 Pilgrim 2             1,150         Boston Edison
                 Callaway 2            1,120         Union Electric
                 Harris 3                900         Carolina Power and Light
                 Harris 4                900         Carolina Power and Light
                 Hope Creek 2          1,067         Public Service Electric and Gas
1982             WNP 4                 1,218         Washington Public Power Supply System
                 WNP 5                 1,240         Washington Public Power Supply System
                 Black Fox 1           1,150         Public Service of Oklahoma
                 Black Fox 2           1,150         Public Service of Oklahoma
                 Perkins 1             1,280         Duke Power
                 Perkins 2             1,280         Duke Power
                 Perkins 3             1,280         Duke Power
                 Vandalia              1,270         Iowa Power and Light
                 Pebble Springs 1      1,260         Portland General
                 Pebble Springs 2      1,280         Portland General
                 Hartsville B 1        1,233         Tennessee Valley Authority
                 Hartsville B 2        1,233         Tennessee Valley Authority
                 Phipps Bend 1         1,233         Tennessee Valley Authority
                 Phipps Bend 2         1,233         Tennessee Valley Authority
                 Allens Creek 1        1,150         Houston Lighting and Power
                 Cherokee 2            1,280         Duke Power
                 Cherokee 3            1,280         Duke Power
                 North Anna 3            907         Virginia Electric Power
1983             Skagit 1              1,288         Puget Sound Power and Light
                 Skagit 2              1,288         Puget Sound Power and Light
                 Clinton 2               950         Illinois Power Co.
                 Cherokee 1            1,280         Duke Power
                 Shearon Harris 2        900         Carolina Power and Light
1984             Wm. H. Zimmer           810         Cincinnati Gas and Electric
                 River Bend 2            940         Gulf States Utilities
                 Yellow Creek 1        1,375         Tennessee Valley Authority
                 Yellow Creek 2        1,375         Tennessee Valley Authority
                 Hartsville A 1        1,287         Tennessee Valley Authority
                 Hartsville A 2        1,287         Tennessee Valley Authority
                 Marble Hill 1         1,330         Public Service Indiana
                 Marble Hill 2         1,330         Public Service Indiana
Source: World Information Service on Energy

