Volume 37, Issue 10, pp. 1799–1801
Editorial

From the Editors

First published: 05 October 2017

SHOULD A LINEAR-NO-THRESHOLD MODEL BE THE DEFAULT ASSUMPTION?

Should dose–response models for noncarcinogens, and for carcinogens that act via nonlinear modes of action, adopt the linear-no-threshold (LNT) default assumption that risk is proportional to dose at low dose levels? Various authorities have debated this question for decades. Researchers dissatisfied with the LNT assumption have proposed entire alternative schools of thought, such as making hormesis (J-shaped or U-shaped dose–response relationships) the default assumption instead. Yet the LNT default assumption is widely endorsed in regulatory risk assessments. It was recently recommended by a National Research Council (NRC) committee for dose–response assessment of both noncarcinogens and carcinogens with nonlinear modes of action. This recommendation was based in part on a belief that nonlinearities at the individual level could still be consistent with LNT assumptions at the population level, due to interindividual heterogeneity in dose–response relations. Writing in this journal, Ken Bogen challenged the detailed reasoning behind the NRC recommendation, showing that neither the argument that additivity-to-background (AB) implies LNT nor the argument that substantial interindividual heterogeneity (SIH) implies LNT holds in general.
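
As a rough guide for readers, the competing default assumptions can be summarized schematically as follows (our notation, not drawn from any of the papers discussed):

```latex
% Schematic low-dose behavior of extra risk P(d) - P(0) under competing defaults,
% where q_1 > 0 is a low-dose slope and d_0 > 0 is a threshold dose.
\begin{align*}
\text{LNT:}       \quad & P(d) - P(0) \approx q_1 d && \text{for small } d,\\
\text{Threshold:} \quad & P(d) - P(0) = 0           && \text{for } d \le d_0,\\
\text{Hormesis:}  \quad & P(d) - P(0) < 0           && \text{for some } d \in (0, d_0).
\end{align*}
```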

In this issue, Kenny Crump, one of the founders of modern quantitative cancer risk assessment, continues the conversation with a perspective commenting on the LNT assumptions and Bogen's critiques. Crump proves that Bogen's critique of SIH holds more generally and confirms that the SIH argument for LNT is indeed generally false. However, he observes that Bogen's critique of the AB argument does not specify what distinct mechanisms might explain background and exposure-related risks, and opines that, in the absence of information to the contrary, it is plausible to assume that low-dose and high-dose slopes are related, even though, as Bogen shows, there is no logical necessity that this be true.

Bogen replies with a detailed consideration of two well-characterized protective responses to cellular stress: the heat shock response and antioxidant response element activation. He argues that detailed dose–response data for these do not fit the LNT assumptions, making it clear that the LNT paradigm is inappropriate in at least some important real-world cases in practice as well as in principle. The same examples show that slopes at high doses do not predict slopes at low doses. The editors welcome further original discussions of the LNT paradigm, alternatives to it, and when data and evidence warrant either using specific alternative default assumptions or refraining from making LNT default assumptions.

ACCEPTABLE RISK

Taebi stresses the distinction between de facto social acceptance of risks from new technologies and ethically principled acceptability of those risks: a risk might be accepted in fact even if it is not acceptable according to some ethical principles, especially if potential adverse consequences are borne by others remote in time (future generations) or location (members of other nations). The author proposes a Rawlsian framework to help bridge the gap between accepted and acceptable risks in a morally defensible way and considers decision making about a multinational nuclear waste repository as a practical example.

Borgonovo and Cillo examine a large class of risk-informed decision problems, with applications that include siting nuclear waste repositories, maintaining chemical facilities, and planning space missions, in which thresholds for “acceptable” risks play a decisive role. They introduce methods combining decision-analytic value of information (VOI) with reliability-theoretic importance measures to determine how variations in systems, structures, and components affect whether risk metrics are assessed as exceeding or not exceeding a decision threshold. This blend of ideas from decision analysis, reliability, and financial risk analysis (VOI satisfies the axioms of a coherent risk measure) provides useful new measures of decision-relevant importance for both individual basic events and groups of events.
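
To make the VOI side of this combination concrete, the sketch below computes the expected value of perfect information (EVPI) about a single basic event for a simple threshold-exceedance decision; the two-action decision and all numerical values are illustrative assumptions, not the authors' formulation.

```python
# Minimal EVPI sketch for a threshold-exceedance decision (illustrative values only).
p_fail = 0.05            # probability the basic event (component failure) occurs
risk_if_fail = 0.20      # system risk metric if the component is failed
risk_if_ok = 0.01        # system risk metric otherwise
threshold = 0.10         # "acceptable risk" threshold driving the decision
cost_mitigate = 2.0      # cost of the risk-reducing action
loss_if_exceeded = 50.0  # expected loss if the threshold is exceeded and no action is taken

def cost(act, failed):
    """Cost of acting or not acting, given the true component state."""
    risk = risk_if_fail if failed else risk_if_ok
    if act:
        return cost_mitigate
    return loss_if_exceeded if risk > threshold else 0.0

# Without information: pick the action minimizing expected cost over the event.
cost_no_info = min(
    p_fail * cost(act, True) + (1 - p_fail) * cost(act, False) for act in (True, False)
)

# With perfect information: observe the component state, then pick the best action.
cost_perfect_info = (
    p_fail * min(cost(act, True) for act in (True, False))
    + (1 - p_fail) * min(cost(act, False) for act in (True, False))
)

print(f"EVPI for this basic event: {cost_no_info - cost_perfect_info:.3f}")
```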

STOCHASTIC AND SIMULATION MODELING FOR RISK ASSESSMENT AND RISK PERCEPTION

Probabilistic, or stochastic, simulation modeling is a fundamental tool for quantitative risk assessment, providing a useful alternative to explicit probability calculations and mathematical analysis for understanding and modeling the behaviors of complex systems with uncertainties described by probability distributions and conditional probabilities. More generally, modeling the distribution of individual exposures, sensitivities, and behaviors is often essential for understanding population-level risks. Several papers in this issue advance the theory and application of stochastic modeling and simulation in risk analysis. Costa et al. develop a stochastic model of the spread of Salmonella during pig meat inspection. They find that inspection activities can increase prevalence while reducing microbial loads, which drive risk. Shao et al. present a Bayesian hierarchical modeling approach to quantifying uncertainty and variability of exposure–response relationships across different populations using Markov chain Monte Carlo (MCMC) sampling. They illustrate the approach by quantifying the posterior distribution of relative risk estimates implied by fitting various parametric risk models for the association between cardiovascular disease mortality and inorganic arsenic in drinking water. Berner et al. give a constructive demonstration of how to use verified and validated simulation models to identify potential “black swan” events—previously unthought-of but, after they are identified, not implausible combinations of events that could lead to much larger losses than any previously experienced. They show how to use this simulation-based approach to identify conditions leading to potentially very large losses in electric power networks during hurricanes. Identifying such potential black swan events before they occur reduces the potential for a negative surprise and helps plan defensive actions to prevent or reduce extremely large losses. On a time scale of decades, Lozano et al. apply wildfire simulation models and regional climate models and scenarios to project how warmer weather that leads to drier fuels could contribute to longer fire seasons and increased frequency and severity of wildfires in Italy and Corsica several decades from now, between 2041 and 2070.
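
As one flavor of such stochastic simulation, the following sketch (with purely illustrative distributions and parameters, not taken from any paper in this issue) shows how interindividual variability in exposure and sensitivity can be propagated to a population-level risk estimate:

```python
# Monte Carlo propagation of interindividual variability to population risk
# (all distributions and parameters are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed lognormal variability in individual exposure and in the individual
# dose-response slope (risk per unit dose).
exposure = rng.lognormal(mean=0.0, sigma=0.8, size=n)
slope = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=n)

# Individual response probability under a simple linear individual-level model,
# capped at 1; any other individual-level model could be substituted here.
p_individual = np.minimum(slope * exposure, 1.0)

print(f"population risk (mean individual risk): {p_individual.mean():.2e}")
print(f"95th percentile of individual risks:    {np.quantile(p_individual, 0.95):.2e}")
```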

Tixier et al. combine natural language processing of construction injury reports, used to extract predictively useful features, with nonparametric risk estimation methods to simulate large data sets that are faithful to the original data, which typically have heavy-tailed distributions for the impacts of accidents. The authors emphasize the potential advantages of transitioning from qualitative, opinion-driven assessments to data-driven assessments in understanding and increasing construction safety. Haer et al. develop an agent-based model (ABM) of household decision making about flood risks, as discussed further below. Finally, Kandiah et al. use an ABM of the dynamics of risk perception and social amplification of risk to study how public opinion about water reuse varies with factors such as frequency of discussion and the structure of the social networks through which opinions are exchanged and modified. An important aspect of this effort is that the parameters defining how each household (“agent”) updates its opinions and behaviors are not simply modeling assumptions, but are estimated empirically from survey data.
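
A stripped-down illustration of the opinion-dynamics mechanism in such ABMs (with an assumed random network and an assumed update rate, not the empirically estimated parameters of Kandiah et al.) might look like this:

```python
# Toy agent-based opinion dynamics: each agent nudges its perceived risk toward
# the average view of its network neighbors (all parameters are assumptions).
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_steps, update_rate = 200, 50, 0.1

# Assumed random, symmetric social network with roughly four neighbors per agent.
upper = np.triu(rng.random((n_agents, n_agents)) < 4 / n_agents, k=1)
adj = (upper | upper.T).astype(float)
degree = adj.sum(axis=1)

opinion = rng.uniform(0.0, 1.0, size=n_agents)  # initial perceived risk levels

for _ in range(n_steps):
    neighbor_mean = np.where(
        degree > 0, adj @ opinion / np.maximum(degree, 1.0), opinion
    )  # isolated agents keep their own view
    opinion += update_rate * (neighbor_mean - opinion)

print(f"mean perception: {opinion.mean():.2f}, spread (std): {opinion.std():.2f}")
```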

Although simulation modeling provides powerful tools for probabilistic risk assessment and forecasting—and more recently, via ABMs with empirically estimated behavioral and adaptation rules, for improved understanding of the dynamics of risk perception and risk communication—uncertainties about relevant causal processes and probabilities can inhibit the development of well-validated simulation models. If not enough is known about a system or situation to simulate it convincingly, then what other tools can a risk analyst use to inform risk management decision making? Wu et al. present one option: an evidential reasoning framework that integrates elements of fuzzy set theory and Dempster-Shafer belief functions to represent uncertainties arising from vagueness, incomplete knowledge, and ambiguous evidence. They apply it to generalize previous cognitive reliability and error analysis method (CREAM) modeling of human reliability and error probabilities in the context of maritime accidents caused by human error.
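
For readers unfamiliar with belief functions, the sketch below shows Dempster's rule of combination for two pieces of evidence over an assumed two-hypothesis frame; the mass assignments are illustrative, and the fuzzy-set component of the authors' framework is omitted:

```python
# Dempster's rule of combination over an assumed frame of discernment.
from itertools import product

FRAME = frozenset({"human_error", "technical_failure"})

# Basic probability assignments: focal sets -> mass (illustrative values).
m1 = {frozenset({"human_error"}): 0.6, FRAME: 0.4}
m2 = {frozenset({"human_error"}): 0.3,
      frozenset({"technical_failure"}): 0.3,
      FRAME: 0.4}

def combine(m1, m2):
    """Intersect focal sets, accumulate products, renormalize by 1 - conflict."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

for focal_set, mass in combine(m1, m2).items():
    print(sorted(focal_set), round(mass, 3))
```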

FLOOD RISK ANALYSIS

Three papers in this issue address aspects of flood risk assessment and management. Serinaldi and Kilsby demonstrate a constructive way to tackle a key challenge in quantifying collective risks from floods: relatively few data are available on large, rare floods with which to understand spatiotemporal correlations among losses at different affected locations and over different time windows. Their key innovation, somewhat analogous to the use of precursor event data to improve the analysis of low-probability, high-consequence accidents in complex industrial systems, is to use analyses of spatial and temporal interdependencies in the relatively abundant data from continuous stream flow records to augment the analysis of much more limited data on extreme floods. They find that dynamic copulas can successfully model the spatial dependence of losses on multiple time scales in data from 10 large river basins in central and Eastern Europe.
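
As a simplified illustration of the copula idea (a static Gaussian copula rather than the dynamic copulas used in the paper, with assumed marginal loss distributions and an assumed dependence parameter), one might simulate dependent losses at two locations as follows:

```python
# Gaussian-copula simulation of spatially dependent losses at two locations
# (all distributions and parameters are illustrative assumptions).
import numpy as np
from scipy.stats import norm, lognorm

rng = np.random.default_rng(2)
rho = 0.7  # assumed dependence between the two locations
cov = np.array([[1.0, rho], [rho, 1.0]])

# Step 1: correlated standard normals, mapped to correlated uniforms.
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=100_000)
u = norm.cdf(z)

# Step 2: apply each location's assumed heavy-tailed marginal via its inverse CDF;
# the copula contributes only the dependence structure.
loss_a = lognorm.ppf(u[:, 0], s=1.2, scale=5.0)
loss_b = lognorm.ppf(u[:, 1], s=1.0, scale=3.0)

total = loss_a + loss_b
print(f"P(total loss > 100) ~= {np.mean(total > 100):.4f}")
```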

Haer et al. develop and apply an ABM of household decision making about insurance purchases and investments in loss-reducing measures under uncertainty about low-probability/high-consequence flood risks. Household decisions are modeled using expected utility theory, prospect theory, and Bayesian prospect theory with subjective risk perceptions. These models predict significantly different trajectories of investments in loss-reducing measures, and hence different flood risks, with expected utility (rational) decision models leading to less investment and higher risks than if low-probability/high-consequence events are overweighted, which may be more psychologically realistic. However, insurance premium discounts stimulate predicted risk-reducing investment decisions most for the expected utility model. The ABM simulations are used to study how long-term flood risk depends on assumptions about how social interaction, media influence, and experience with floods affect household risk perceptions.
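
The contrast between the decision rules can be illustrated with a back-of-the-envelope example that compares a risk-neutral expected-value rule with prospect-theory-style overweighting of a small probability (all numbers are assumptions chosen for illustration, and utility curvature and other prospect-theory features are omitted):

```python
# Toy comparison of a risk-neutral rule and probability weighting for a
# flood-proofing investment decision (illustrative numbers only).
p_flood = 0.01    # assumed annual flood probability
damage = 100_000  # assumed loss if flooded without protection
cost = 1_500      # assumed annual cost of the loss-reducing measure
gamma = 0.61      # Tversky-Kahneman weighting parameter (illustrative)

def weight(p, gamma):
    """Inverse-S probability weighting: small probabilities are overweighted."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# Risk-neutral comparison: expected damage avoided vs. annual cost.
invests_risk_neutral = p_flood * damage > cost             # 1,000 < 1,500

# Same comparison with the flood probability overweighted.
invests_weighted = weight(p_flood, gamma) * damage > cost  # ~5,500 > 1,500

print(f"risk-neutral household invests:          {invests_risk_neutral}")
print(f"probability-weighting household invests: {invests_weighted}")
```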

Bessette et al. consider flood protection as one component among many in a “values-informed mental model” of how individuals in New Orleans understand risks associated with climate change and their management. Other values include the economy, sense of place, ecosystems and biodiversity, quality of life, and distributive justice, among many others. Support for alternative climate change risk mitigation strategies, such as relocating residents, rebuilding land, adopting renewable energy technologies, or raising levees, reflects both beliefs and values, and a mental model that incorporates both can clarify gaps in understanding and gaps in values between members of the public and experts or other stakeholders that must be bridged to facilitate improved communication and risk management deliberation.
