Radioactivity and Other Risks (Part 1)

Since shortly after the nuclear accident at Fukushima Daiichi, we have been writing about various topics related to understanding that event. This week, we’re thinking about radioactivity and risk, though not only the risk that radioactivity—and exposure to it—poses. We’ll work our way back around to that on Friday. But we want to contextualize that particular risk (a risk of exposure to radioactive particles) within how we look at risk generally and how different risks are related. One thing borne out time and again is that the miscalculation and mismanagement of risk by institutions can have profound consequences.

May 4: Earthquakes in last 8-30 days

Much reporting has focused on the fact that the machinery of Japan’s nuclear power plant survived the 9.0 earthquake intact and functioning, at least according to the current thinking about how the accident unfolded. The machinery, however, was not designed to withstand an earthquake of that scale: those who assessed the seismic risk at the plant judged a magnitude-9.0 quake too improbable to be worth designing for. Though we may find otherwise when (or if) people are able to see the machinery up close (perhaps via remote camera, as at Chernobyl), the nuclear reactors seem to have exceeded their design specifications for earthquakes. Despite that resilience, the facility had not adequately prepared for the tsunami, which knocked out the emergency generators and therefore left the reactors without enough power to circulate cooling water.

According to an article in the Bulletin of the Atomic Scientists, “Japan had a 30-foot-high tsunami from a 7.8 earthquake on the west coast in 1993.” Eighteen years ago, a smaller earthquake than this year’s caused a wave that would have overwhelmed the Fukushima Daiichi plant, but “the word ‘tsunami’ did not appear in government safety guidelines until 2006. […] Despite the lack of government guidance, the initial plant designs for Fukushima Daiichi did take tsunamis into account. But engineers expected a maximum wave height of 10.5 feet, so the plant was thought to be safe sitting on a 13-foot cliff, and that was apparently the end of the matter.” A tsunami was not an unknown risk. But the engineers had miscalculated the probability of the event and, therefore, had not taken appropriate steps to manage the risk that tsunamis actually pose to the stretch of coast where the plant sits.
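It’s worth pausing on what “probable enough” means over the life of a plant. Here is a back-of-the-envelope sketch (the annual probabilities below are hypothetical, chosen only for illustration; they are not the figures Fukushima Daiichi’s designers actually used): even an event judged unlikely in any single year becomes surprisingly likely over a 40-year operating life.

# Illustration only: hypothetical annual exceedance probabilities,
# not actual design figures for Fukushima Daiichi.
def lifetime_probability(annual_prob, years):
    """Chance of at least one exceedance over 'years', assuming independent years."""
    return 1 - (1 - annual_prob) ** years

for annual_prob in (0.001, 0.005, 0.01):   # roughly a 1-in-1000 to 1-in-100 year wave
    print(f"annual chance {annual_prob:.1%} -> chance over 40 years: "
          f"{lifetime_probability(annual_prob, 40):.0%}")

Run with those made-up numbers, the sketch says a wave with a one-in-a-hundred chance in any given year has about a one-in-three chance of arriving at least once over four decades, which is a very different way to hear “improbable.”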

Bear Stearns former NYC offices (Photo by David Shankbone)

The failure of institutions to properly calculate risk also played an outsized role in the most recent financial panic. The roots of this faulty risk assessment go back to 2004, when five financial institutions—Bear Stearns, Lehman Brothers, Merrill Lynch, Goldman Sachs, and Morgan Stanley (see NY Sun article)—were successful (it’s hard to believe that that’s the correct word, but there you are) in petitioning the SEC to substantially increase the amount of debt they could take on. In effect, the SEC was abdicating its prior responsibility for risk assessment and leaving it to the individual institutions themselves. Here’s a quote from a New York Times article that sums it up: “In loosening the capital rules, which are supposed to provide a buffer in turbulent times, the agency also decided to rely on the firms’ own computer models for determining the riskiness of investments, essentially outsourcing the job of monitoring risk to the banks themselves.”

A lone comment letter to the SEC warned that the software models the institutions used to calculate risk might not be up to the task. In a sinister bit of irony that reminds us of our all too frequent hubris when confronted with things we only pretend to understand, this same letter invokes the language used to calculate the risks associated with a 100-year flood. In less than four years, three of the five successful (there’s that word again) petitioners were no longer independently viable institutions, Merrill Lynch having been gobbled up and Bear Stearns and Lehman Brothers turning into funerary pyres of taxpayer dollars. In the end, the one-two failures of Bear Stearns followed by Lehman Brothers were the opening band for the laser-light show of financial calamity that ensued, whizzing images and crunching power chords distracting us from the fact that Wall Street had no intention of changing the way it takes risks with our money.
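To make that worry concrete, here is a toy simulation (not any firm’s actual risk model, and the numbers are invented): fit a bell-curve model to returns that in reality have fat tails, then ask how often a very large one-day loss should occur.

import numpy as np
from math import erf, sqrt

# Toy illustration with invented numbers; not any bank's actual risk model.
rng = np.random.default_rng(42)
days = 1_000_000

# Pretend the "real world" produces fat-tailed daily returns (Student's t, 3 d.o.f.).
true_returns = 0.01 * rng.standard_t(df=3, size=days)

# A risk model that assumes returns are normally distributed, fit to the same data.
mu, sigma = true_returns.mean(), true_returns.std()
threshold = mu - 5 * sigma                       # a "five-sigma" one-day loss

model_prob = 0.5 * (1 + erf(-5 / sqrt(2)))       # what the normal model predicts
actual_freq = (true_returns < threshold).mean()  # what the fat-tailed world delivers

print(f"model's estimate : about 1 day in {1 / model_prob:,.0f}")
print(f"simulated reality: about 1 day in {1 / actual_freq:,.0f}")

With these made-up inputs, the bell-curve model calls a five-sigma loss a once-in-millions-of-days event, while the fat-tailed “reality” serves one up every couple of trading years. That gap between the model’s confidence and the world’s behavior is exactly the kind of thing the lone letter was pointing at.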

We just returned from Florida, where managers at NASA delayed a space shuttle launch because they perceived excessive risk in a redundant (that means it was NOT the only one doing the job) heater line. So we can’t help but apply our notions about risk to space exploration as well. And it turns out that including the space shuttle in this discussion is also our way back into the topic of radioactivity and risk in Part 2 of this piece, scheduled for Friday.

One thing that we know about manned spaceflight is that it’s a risky business. In press briefings, especially those used to explain why a launch has been delayed, Kennedy Space Center (KSC) officials point out that astronauts know there is always risk. It’s clear that KSC decision-makers use detailed criteria—about the weather here and at abort sites, and about all the various systems of the orbiter, the external tank, and the solid rocket boosters—to determine whether a launch is a go. They calculate risk constantly, set a level of acceptable risk they are willing to take, and manage risks as they shift in order to come in at or under that level for every launch. When they do miscalculate the risk, the result is usually not catastrophic. In fact, this week’s problem with the heater line, had it been discovered in orbit, would have required only a few tasks, basically disabling the affected parts and making them safe. In and of itself, that problem would not have affected mission success.
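As a rough sketch of that go/no-go discipline (the criteria and numbers here are ours, invented for illustration; NASA’s real launch-commit criteria are far more detailed), you can picture a list of constraints that all have to pass, plus an overall risk estimate that has to stay under an acceptable ceiling.

# Invented example of a go/no-go check; not NASA's actual criteria or numbers.
ACCEPTABLE_RISK = 0.001          # hypothetical ceiling on estimated loss-of-mission risk

criteria = {                     # each item: (passes?, estimated risk contribution)
    "weather at launch site": (True, 0.0002),
    "weather at abort sites": (True, 0.0001),
    "orbiter systems":        (True, 0.0003),
    "external tank":          (True, 0.0001),
    "solid rocket boosters":  (True, 0.0002),
}

all_pass = all(ok for ok, _ in criteria.values())
total_risk = sum(risk for _, risk in criteria.values())

if all_pass and total_risk <= ACCEPTABLE_RISK:
    print(f"GO (estimated risk {total_risk:.4f} <= {ACCEPTABLE_RISK})")
else:
    print(f"NO GO (estimated risk {total_risk:.4f}, all criteria pass: {all_pass})")

The point of the sketch isn’t the arithmetic; it’s that the ceiling is set before the countdown starts, and a single criterion that can’t be shown to fit under it is enough to scrub.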

Challenger STS-51L Crew

But twice, managers mismanaged the risk to space shuttles in ways that led to catastrophe. Much has been written about the O-rings and cold launch temperatures for Challenger’s final launch and about heat-protection tiles and tank foam for Columbia’s last launch, so we won’t recap that here. (See Guest Blogs on this topic HERE and HERE and HERE.) Instead, suffice it to say that NASA managers do not eliminate all risk, but they are charged with limiting known risks and with delaying launch when the risks are not fully understood.

We’ll break here until Friday, when we’ll get back to these notions of known and unknown risk, limiting not eliminating risk, and radioactivity in particular.
