Volume 31, Issue 11 pp. 1784-1788
Perspective

Ecological Risk Assessment, Prediction, and Assessing Risk Predictions

Mark Gibbs

Corresponding author: Mark Gibbs, CSIRO Marine Laboratories, 233 Middle St., Cleveland, Queensland 4163, Australia; [email protected]
First published: 30 March 2011

Abstract

Ecological risk assessment embodied in an adaptive management framework is becoming the global standard approach for formally assessing and managing the ecological risks of technology and development. Continual improvement of ecological risk assessment approaches is partly achieved through dissemination not only of the types of risk assessment approaches used, but also of their efficacy. While there is an increasing body of literature describing the results of general comparisons between alternative risk assessment methods and models, there is a paucity of literature that assesses, post hoc, the performance of specific predictions based on an assessment of risk and the effectiveness of the particular model used to predict the risk. This is especially the case where risk assessments have been used to grant consent or approval for the construction of major infrastructure projects. While postconstruction environmental monitoring is increasingly commonplace, a postconstruction assessment of the accuracy and performance of the ecological risk assessment and its underpinning model is rarely undertaken. Without this "assessment of the assessment," it is difficult for other practitioners to gain insight into the performance of the approach and models used; as argued here, this limits the rate of improvement of risk assessment approaches.

1. INTRODUCTION

Ecological risk analysis and assessment are now both a fundamental approach and a core set of tools in natural resource and environmental management globally. Indeed, it can be argued that risk management approaches embedded in adaptive learning cycles have become the dominant environmental management framework in many jurisdictions. Such a framework demands the explicit consideration of the local and cumulative impacts of environmental management interventions and development activities. The importance and profile of adaptive risk-based approaches also create a need to ensure that the lessons from the ever-increasing number of ecological risk assessments (ERAs), predictions, and underpinning models produced globally are communicated and transferred, so that best practice occurs wherever and whenever possible.

Technology transfer and learning from ERAs typically occur through two mechanisms. First, the results of and lessons from individual ERAs are presented in the international ERA literature and at practitioners' workshops and conferences. Second, learning can occur through the transfer or secondment of staff into new organizations undertaking ERAs, or through work on new projects. The focus here is on the first mechanism: transfer through the literature and professional practice workshops. Ideally, what practitioners and researchers require from ERAs performed elsewhere are the details of the methods and approaches used (for example, the specific models) and, critically, the efficacy of those approaches and tools; without the latter, practitioners cannot robustly decide which approach or technique to adopt when multiple tools are available. In one sense, the sheer number of publications using a particular approach, method, or model provides some information on its efficacy, although ERA, like all scientific disciplines, has its fashions and fads, and no single approach is suitable for every problem. Similarly, even the same numerical model configured for different locations can produce results of varying accuracy. Researchers and practitioners therefore often face a choice of methodologies and approaches for any particular problem. It is argued here that while the global literature is very effective and timely in conveying the range of available approaches and the problems to which they have been applied, it is presently much less effective at delivering timely information on the performance or efficacy of a particular approach, method, or model as applied to a particular problem. This is especially the case for large infrastructure projects, which are often granted consent to proceed on the basis of a risk projection from a formal quantitative ERA underpinned by the predictions of some form of model.

Formal risk assessment aims to use all available data on an existing risk within a systematic framework in order to evaluate the details and dynamics of that preexisting risk. If performed diligently using the best available data, this will presumably provide the best estimate of the risk. By contrast, where an assessment is used to predict a possible but not yet present risk, the best data on the relevant processes, integrated into an appropriate framework or model, will provide the best prediction of the potential risk and its consequences. However, to truly assess the risk, or the accuracy of the prediction, a follow-up study is required that uses monitoring data on the impacts of the activity and reflects on the accuracy of the model used; ideally, the results of this assessment of the prediction are also disseminated for other practitioners to learn from. It is argued here that while postconstruction environmental monitoring following large infrastructure development projects is increasingly commonplace, this last step, in which the predicted ecological risk is assessed in an independent study after the activity has been performed, is scarce in the literature, and this paucity ultimately restricts the uptake of ERA approaches (see Landis et al.(1) for one of the few examples). In other words, while the literature does contain comparative studies that investigate the general performance of alternative assessment methods and tools,(2) there is a conspicuous paucity of studies that assess the accuracy of the risk predictions upon which major infrastructure development decisions were based. Without this post hoc assessment of the risk assessment or prediction, it is difficult for other practitioners to evaluate how accurate the approach or model was in a particular case, and hence how useful the approach and individual models may be in other situations.

2. MISSING LINKS

Adaptive management(3) utilizing formal risk assessment procedures is becoming the framework of choice for environmental management. Active adaptive management is the strongest form and involves deliberate experimentation with alternative environmental management approaches, preferably simultaneously, followed by updating the conceptual understanding of the risk and of how the natural system responds to management interventions. Passive adaptive management is a more common and milder approach in which the risk of a single intervention or management approach is predicted and then monitored, and the results from environmental monitoring are used to update both the conceptual understanding and the management intervention that was applied.

Fig. 1 shows a schematic representation of a passive adaptive management framework incorporating a formal ERA procedure. In this process a development proceeds, or a management intervention occurs, and the impacts on the environment are monitored. Monitoring results may lead to changes in the management of the impacts (loop with solid arrows). The consent to perform the development is based upon the formal predictions of risk produced by an ERA that is commonly underpinned by some form of model. In this schematic, the risk prediction generated from the risk assessment is delivered to the practitioner literature (Fig. 1). However, because this occurs at the start of the process, the global literature is not provided with any systematic assessment of the accuracy of the formal risk prediction that was used to underpin the development decision.

Fig. 1. Schematic diagram of the risk management process, including the proposed new feedback loop.

It is argued here that the steps identified by the dashed lines in Fig. 1 need to be taken in order not only to update knowledge of the effectiveness of the management intervention and of the validity of the original conceptual model (the philosophy behind adaptive management), but also to assess the efficacy and accuracy of the formal risk assessment itself. Without this key information, risk practitioners elsewhere have no way of knowing how accurate the original risk prediction method was, and hence no way of robustly judging how effective the approach, method, or model is.

Interestingly, in other risk assessment and prediction domains the culture of "assessing the assessment or prediction" is well established. A good example is the field of numerical weather prediction (NWP). The NWP process generally involves the deployment of a predictive model that assimilates a snapshot of the global weather at set intervals (often every 6 hours). The model, with assimilated data, is then used to make a prediction of future conditions and of the risk of specific synoptic events and scenarios.(4) A standard approach is then to routinely compare the next global weather snapshot (which is equivalent to monitoring after an infrastructure development) against the predictions based on the previous snapshot. By doing this, researchers explicitly test their risk prediction, or model, against an independent future data set. These so-called skill tests are routinely performed and, importantly, reported in the international literature. Other NWP practitioners can therefore routinely determine how a given risk prediction approach or tool performs, a step that is presently not common practice in ERA.
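The skill-test logic described above can be sketched in a few lines. The following is an illustrative example only, not taken from the article or from any operational NWP system: a mean-squared-error skill score compares a forecast against a later observation, using persistence (the previous snapshot) as the baseline. All values are hypothetical.

```python
import numpy as np

def skill_score(forecast, observed, baseline):
    """MSE skill score: 1 = perfect, 0 = no better than the
    baseline forecast, negative = worse than the baseline."""
    mse_forecast = np.mean((forecast - observed) ** 2)
    mse_baseline = np.mean((baseline - observed) ** 2)
    return 1.0 - mse_forecast / mse_baseline

# Hypothetical 6-hour temperature forecasts at four stations,
# verified against the next analysis snapshot; the previous
# snapshot (persistence) serves as the baseline forecast.
observed = np.array([15.2, 16.1, 14.8, 13.9])
forecast = np.array([15.0, 16.4, 14.5, 14.2])
persistence = np.array([14.5, 15.2, 16.1, 14.8])

print(round(skill_score(forecast, observed, persistence), 3))  # 0.918
```

Routinely publishing such scores is what lets other practitioners judge a method's track record; the analogous statistic for an ERA would compare predicted impacts against postconstruction monitoring.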

So why is it not routine practice (as demonstrated by the paucity of such post hoc assessments in the literature) to assess, after the fact, ecological risk predictions or the specific models as applied? A large number of individual ERAs are now routinely performed for specific developments.(5) It is now common for these ERAs to be performed by third parties such as consultants, and the results of these predictions often determine whether a development or management intervention proceeds. While it is becoming more common for such developments and interventions to be accompanied by environmental monitoring programs, the results from these programs are, where used at all, mostly used only to update the management intervention (Fig. 1). In other words, it is argued here that there often seems to be little incentive to take the next step and use these data to assess the original risk prediction, or the accuracy of the underlying model, that allowed the development to proceed in the first place. This means that potential lessons from applying a particular approach or model are lost to other practitioners facing similar problems.

One of the primary reasons for this reluctance to undertake post hoc assessments of the performance of a risk assessment is presumably the third-party arrangements commonly used for risk assessment.(6) In such situations there can be perceived to be little direct incentive for the developer or proponent of the activity to fund additional assessments of the original risk prediction. For example, if it turns out that the original assessment was reliable, then this simply reconfirms the management authority's decision to grant consent, which was made on the basis that the risk prediction was accurate. If a post hoc assessment suggests that the original prediction was substantively inaccurate, then this will presumably be disadvantageous for the developer or activity proponent and for the reputation of the managing authority. Hence the only incentive for a third-party provider to undertake such an assessment would be if there were grave concerns over the accuracy of the assessment, or if there were commercial value to be gained from demonstrating that accuracy. However, the default assumption under a best-practice standard is that any assessment from a reputable provider will in fact be best practice. This highlights a fundamental problem with such a standards-based approach: once a standard is achieved, there is often little incentive to deliver above it.

Similarly, if the third-party consultant directly funds the assessment of the risk prediction and it proves to be accurate, then it simply reinforces the position taken by the risk assessment provider that was accepted by the management authority. If the assessment of the prediction finds that the risk prediction was in fact inaccurate, then the risk assessment provider is hardly likely to promote this inaccuracy in the literature or outside the domain of the organization. Hence there is often a disincentive to perform an analysis of the original risk prediction unless there is a strong culture of this approach, such as in NWP.

In other words, while the literature does contain examples in which independent researchers have compared the accuracy and effectiveness of different risk assessment and prediction approaches, it is argued here that these are rarely related directly back to the specific predictions on which decisions about major infrastructure projects were based. Hence, closing the loop between the predicted risks or impacts of a particular activity or development (Fig. 1) and the actual risks and impacts after the activity has commenced is rarely presented in a coherent package. Such an assessment would contain more than the details of, for example, the model configuration used in the assessment. Rather, the model predictions would ideally be compared directly against monitoring data collected after the development or management intervention has taken place.
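Such a prediction-versus-monitoring comparison need not be elaborate. The sketch below is a hypothetical illustration (the stations, deposition values, and diagnostic names are invented, not drawn from any specific assessment): model-predicted sediment deposition at monitoring stations is compared against postconstruction measurements using a few simple diagnostics.

```python
import numpy as np

def posthoc_assessment(predicted, monitored):
    """Compare a pre-consent model prediction against postconstruction
    monitoring at the same stations; returns simple diagnostics."""
    predicted = np.asarray(predicted, dtype=float)
    monitored = np.asarray(monitored, dtype=float)
    bias = float(np.mean(predicted - monitored))          # systematic error
    rmse = float(np.sqrt(np.mean((predicted - monitored) ** 2)))
    # Fraction of stations where the model underpredicted the impact:
    underprediction = float(np.mean(monitored > predicted))
    return {"bias": bias, "rmse": rmse, "underprediction": underprediction}

# Hypothetical predicted vs. monitored sediment deposition (mm) at
# five stations around a dredging project.
predicted = [2.0, 1.5, 3.2, 0.8, 1.1]
monitored = [2.4, 1.2, 4.0, 0.9, 1.0]
print(posthoc_assessment(predicted, monitored))
```

Even this minimal summary, published alongside the original ERA, would tell other practitioners whether the model tended to under- or overpredict impacts, and by how much.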

3. SUMMARY AND CONCLUDING REMARKS

Rapid globalization over the last two decades has resulted in the mass transfer of technology from developed to developing nations. The transfer of technology also implies a transfer of risk, and a number of nations have now become new risk receivers as a result.(7) In order to manage these sometimes-new risks, it is important that ecological risk practitioners have access to up-to-date information not only on the risks of particular activities, but also on the performance of specific ERAs, and especially of risk prediction approaches and models, that is, the tools used to predict and assess risks. Without a reliable analysis of how accurate and effective a prediction of a particular risk was, it is difficult, first, to establish a best-practice regime and, second, for individual practitioners elsewhere facing the same problem to gauge how accurate their own prediction may be using the same model or technique. To this end, it is argued here that, as a community of risk professionals and practitioners, there is an opportunity for us to focus on explicitly building an "assessment of the prediction" feedback link into risk prediction processes in particular.

It can be speculated that a major reason why this step rarely occurs is that appropriate incentives are absent. The challenge, therefore, is to create the culture and incentives to build in this additional assessment phase wherever possible. One mechanism would be for third-party risk assessment practitioners to undertake the post hoc assessment internally and, where good predictions were made, to report these in the international literature. Clearly, commercial risk assessment providers will be incentivized not to report poor performance. However, if best practice is to report good outcomes, the very act of not reporting will itself convey the desired information. This step will add costs to risk assessment services. Nevertheless, such an approach is in the interest of professional risk assessment providers, as it will act to raise standards and to increase community confidence in risk assessments.

Regulating agencies and natural resource management authorities may also wish to call for increased post hoc assessments. Such a process would act to highlight unaddressed risks remaining after the development or management intervention has been performed, while also acting to mitigate exposure to future litigation in the event of, for example, extreme events or catastrophic infrastructure failures.
