Tuesday, October 6, 2009

Monitoring Objectives - the first step

We monitor the environment to search for and document change.

What are the potential changes that matter most?

This is the first question we have to address in improving the efficiency of a monitoring program. Yet, it is often a question that goes unexamined in the years after installation of a remedy.

Every statistics textbook covers experimental design and setting objectives for data collection to some extent - but the memory of these guidelines fades as a project ages, and the work settles into a monotonous routine over time.

Getting to the point, monitoring objectives for the network should be fresh. Objectives should be well articulated and connected to observable metrics, statistical analyses and specific monitoring locations. Most importantly, objectives should be reviewed regularly to see if they still fit the management decisions that must be made.

Based on USEPA documents, monitoring objectives for a plume fall into four general categories (USEPA, 1994; USEPA, 2004a).

· Evaluate changes in ambient physical conditions of the resource;

· Evaluate the movement and monitor the fate of chemical constituents;

· Evaluate compliance with regulatory requirements;

· Evaluate the effectiveness of the response action or remedy.

I might add:

· Demonstrate compliance at property boundaries or institutional controls.

Sure, one of your objectives is to “evaluate the efficacy of your remedy”, but have you really considered what kind of data will show your remedy is working? Chances are the remedy will not completely eliminate constituents of concern immediately, so you need metrics that reflect the gradual move toward cleanup goals. What do these metrics look like?

Specific monitoring objectives should be developed to address each of the applicable general objectives. Specific objectives for long-term monitoring (LTM) programs generally have temporal and spatial components. The most common temporal aspect of LTM objectives is the evaluation of changes in contaminant concentrations at specific locations over time. Temporal objectives can be addressed by identifying trends in contaminant concentrations, or by estimating long-term summary statistics (mean, standard deviation) for concentration values (Zhou, 1996). An important aspect of establishing temporal objectives is determining the sampling frequency for wells. Monitoring wells should be sampled at a frequency that captures changes in constituent concentrations without unnecessary or redundant sampling events. However, the frequency of reporting and the timing of management decisions are also critical. Will you have a statistically significant dataset to write your next report?
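
To make the temporal side of an objective concrete, here is a minimal sketch (my own illustration, not from any cited guidance) of the kind of summary statistics and sample-count check a temporal objective might reference; the well name, concentrations, and minimum sample count are hypothetical placeholders.

```python
import numpy as np

# Hypothetical concentration record (mg/L) for one well and one constituent,
# ordered by sampling date -- substitute your own monitoring data.
mw01_tce = np.array([0.182, 0.151, 0.160, 0.143, 0.129, 0.118, 0.121, 0.098])

# Long-term summary statistics that a temporal objective might point to.
mean_conc = mw01_tce.mean()
std_conc = mw01_tce.std(ddof=1)   # sample standard deviation

# A simple check before the next reporting cycle: are there enough samples
# to support a trend evaluation? (The threshold of 4 mirrors the usual
# minimum for a Mann-Kendall test, but your program may require more.)
MIN_SAMPLES_FOR_TREND = 4
enough_data = mw01_tce.size >= MIN_SAMPLES_FOR_TREND

print(f"n = {mw01_tce.size}, mean = {mean_conc:.3f} mg/L, "
      f"std dev = {std_conc:.3f} mg/L, enough for a trend test: {enough_data}")
```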

Objectives for monitoring constituents spatially are often related to predicting movement of constituents within the current plume area or toward downgradient receptors. Spatial optimization is based on minimizing the uncertainty of predicting concentrations at critical locations within the monitoring network while eliminating redundant information. Spatial objectives are first addressed by identifying these critical monitoring locations. All wells are not created equal!

Each well should provide valuable (not redundant) information, and all areas of interest should be represented by sampling locations. Areas that require no action or decisions can be removed from consideration before beginning the optimization. Frequently, response action documents will identify concentration ‘action levels’ at specific locations that may signal risk for downgradient receptors and trigger contingent response actions. The monitoring objective corresponding to this situation would be to predict, with confidence, any exceedances of the ‘action level’ at locations within the plume or locations that are surrogates for potential receptors.
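
As an illustration of how that objective might be made operational, here is a small sketch (my own, with made-up numbers and a hypothetical action level) comparing a one-sided 95% upper confidence limit on the mean at a sentinel well against an action level; your decision documents may specify a different statistic entirely.

```python
import numpy as np
from scipy import stats

# Hypothetical recent results (ug/L) at a sentinel well near a property boundary.
sentinel = np.array([38.0, 44.0, 41.0, 52.0, 47.0, 39.0])
action_level = 50.0  # ug/L -- placeholder value from a hypothetical decision document

n = sentinel.size
mean = sentinel.mean()
se = sentinel.std(ddof=1) / np.sqrt(n)

# One-sided 95% UCL on the mean (assumes roughly normal data; a lognormal or
# nonparametric UCL may be more defensible for real groundwater datasets).
ucl95 = mean + stats.t.ppf(0.95, df=n - 1) * se

print(f"mean = {mean:.1f} ug/L, 95% UCL = {ucl95:.1f} ug/L, action level = {action_level} ug/L")
if ucl95 >= action_level:
    print("UCL exceeds the action level -- flag for contingency review.")
else:
    print("UCL is below the action level -- objective met for this period.")
```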


In any case, establishing, reviewing and articulating monitoring goals is a critical first step in optimizing your monitoring program.

Tuesday, May 26, 2009

LTMO and Sustainability (part 1)

Sustainability is the new black. The concept of sustainability, as it applies to environmental cleanup projects, centers on evaluating the impact of greenhouse gas generation or the energy dependence of specific remediation processes. The tedious work of calculating the dependence of processes on fossil fuel input and other sustainability metrics will inevitably fall to Health, Safety and Environmental (HSE) departments and consultants.

From my perspective, the practice of sustainability assessment is about where risk assessment was in the 1980s. As Herb Ward once told me in reference to risk assessment, it is a "dark art". The origins of the inputs, the construction of the formulae and the meaning of the results are not currently based on consensus science, as the science has yet to evolve. As with risk assessment during the 1980s, we are experiencing myriad approaches to calculating sustainability metrics with no single, reliable template for making these calculations. This is an exciting time, but it can cause a lot of sleepless nights when you are assigned the task of assessing processes for 'sustainability' with no roadmap for conducting these audits.

In the US, sustainability practice really crossed the Rubicon with the issuance of Executive Order 13423, Strengthening Federal Environmental, Energy, and Transportation Management, in January 2007, when sustainability became official policy with respect to federal agencies (http://www.whitehouse.gov/news/releases/2007/01/20070124-2.html). Among other things, the executive order sets goals for federal agencies, including improving energy efficiency, reducing greenhouse gas emissions, reducing water consumption (through life-cycle cost-effective measures), and instituting 'green' building standards. Notably, heads of agencies are tasked with the "collection, analysis, and reporting of information to measure performance in the implementation of this order". Where present, the stated goals (reduction of water consumption by 2% annually) are not astounding. However, requiring a process for accounting for water and energy use and documenting 'sustainability' is a major step.

Now, how does long-term monitoring optimization fit into the mandate for sustainability assessments? The process of groundwater monitoring may not, on the surface, seem like a major environmental threat; however, once you consider the effort that goes into mobilizing crews, sampling wells, disposing of purge water, running laboratory analyses (with the associated chemical waste), and managing the data -- you can see how the impacts can be relatively large for the amount of information gained. Site managers who are able to reduce the frequency of groundwater sampling, or reduce the total number of wells sampled, while maintaining the same level of confidence in the size of and concentrations within the plume, can make major steps toward sustainability goals at legacy waste sites. Moving from quarterly sampling to annual sampling alone cuts sampling-related impacts by three quarters.

The key to accomplishing these reductions is careful documentation of historic site data, articulation of the goals of the monitoring program, and identification of your future data needs. These will be topics of future postings.





Monday, March 10, 2008

Mann-Kendall Trend (part 1)

Judging from the technical support questions, most people use the MAROS software for its convenient groundwater trend analysis tools -- specifically the Mann-Kendall analysis.

I have never advocated that the Mann-Kendall analysis is the One True Statistical Method or the be-all and end-all; however, it is often a pretty darn good way to look at data -- especially groundwater monitoring data.

The reason why Mann-Kendall is a pretty good approach is that it is a non-parametric method, meaning that there is no assumption of a statistical distribution (e.g., a normal distribution). Most groundwater data is not normally distributed, due to the problem of left censoring (no values recorded below the detection limit) and the occasional very high concentration, orders of magnitude above the detection limit.
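
A quick, self-contained illustration of that point (synthetic numbers, my own sketch): a rank-based test like Mann-Kendall barely notices a single wild spike, while an ordinary least-squares fit on the same record is dominated by it. Note that scipy's kendalltau computes p a little differently than the exact small-sample table used in MAROS, so the values are only indicative.

```python
import numpy as np
from scipy import stats

# Synthetic, gently declining concentration series (ug/L) with one wild outlier,
# the kind of spike that routinely shows up in real groundwater records.
t = np.arange(12)
conc = np.array([95, 90, 88, 84, 80, 79, 75, 1200, 70, 66, 63, 60], dtype=float)

# Rank-based (Mann-Kendall-style) test: only the ordering of values matters.
tau, p_rank = stats.kendalltau(t, conc)

# Ordinary least-squares slope for comparison -- heavily leveraged by the spike.
slope, intercept, r, p_ols, se = stats.linregress(t, conc)

print(f"Kendall tau = {tau:.2f}, p = {p_rank:.4f}   (still sees the decline)")
print(f"OLS slope = {slope:.1f} ug/L per event, p = {p_ols:.2f}   (swamped by the spike)")
```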

Another annoying feature of most groundwater monitoring programs is the propensity of site managers to sample wells irregularly (e.g., quarterly sampling in 2000 and 2004, semiannual sampling in 2001 and 2003, monthly in 2002, not at all in 2005, and whenever in 2006, etc.).

So, statistical analysis of groundwater data can be complicated by a variety of factors from the nature of the analytical results to the site management decisions. Luckily, the Mann-Kendall analysis of trend can handle these problems pretty well. Frankly, when your groundwater data starts to look as high and tight as medical monitoring data, you can look to other methods. Until then, we may all be stuck with Mann-Kendall to a certain extent.

The way the Mann-Kendall analysis is handled in MAROS is a little different from other methods (Gilbert, 1987), and may require a little explanation.

The Mann-Kendall trend evaluation relies on three statistical metrics -- the 'S' statistic, the coefficient of variation (COV), and what we call the confidence factor (CF).

This third critical statistical metric in the Mann-Kendall evaluation can cause a lot of confusion, as the CF represents a small modification of the usual approach to the Mann-Kendall analysis. The CF is a measure of confidence in rejecting the null hypothesis.

The null hypothesis (H0) states that the dataset shows no distinct trend. The Mann-Kendall method tests H0 against the alternative hypothesis (HA) -- that the data show a trend. The probability (p) of accepting H0 is determined from the Mann-Kendall table of probabilities, based on the number of samples (n, for n less than 40) and the absolute value of S. Specifically, p is the probability of obtaining a value of S equal to or greater than the calculated value for n samples when no trend is present. We reject H0 when p < alpha (here, alpha = 0.1).
Typically, the Mann-Kendall test results in ‘No Trend’, ‘Increasing’, or ‘Decreasing’ designations for the dataset. However, in order to develop a finer resolution of outcomes, the concept of ‘confidence factor’ has been developed.

We define the CF as (1 - p), expressed as a percentage. The CF is inversely proportional to p, and increases with both the magnitude of S and n. When the CF is below 90% (p > 0.1), H0 is accepted, and the data are judged to show no distinct trend. For the method used by MAROS, data showing no trend can be classified in one of two ways -- Stable or No Trend. A ‘Stable’ result occurs when S < 0 and the data show low variability (COV < 1), while a ‘No Trend’ result occurs when the variability is high (COV > 1), with an S of any value.

When the CF is between 90 and 95% (0.1 > p > 0.05), H0 is rejected, but the trend is weak. The weakness of the trend is identified by using the terms “Probably Increasing” or “Probably Decreasing” to describe the data. For CF > 95% (p < 0.05), the trend is strong, and the data are described as ‘Increasing’ (S > 0) or ‘Decreasing’ (S < 0).

By using the method described above, data from each well location can be categorized in one of 8 ways: Increasing trend (I), Decreasing trend (D), Probably Increasing trend (PI), Probably Decreasing trend (PD), Stable (S), No Trend (NT), non-detect (ND) or insufficient data to determine a trend (N/A) (for n less than 4).
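
For readers who want to see the bookkeeping, here is a minimal sketch of this classification logic in Python. It is my own illustration, not the MAROS code: it uses the normal approximation to the distribution of S (with a continuity correction) rather than the exact table MAROS consults for n < 40, so the CF it reports can differ slightly from MAROS output for short records, and the all-non-detect (ND) case is not handled.

```python
import numpy as np
from scipy import stats

def mann_kendall_class(conc):
    """Classify a concentration series using the S statistic, COV, and CF (sketch)."""
    conc = np.asarray(conc, dtype=float)
    n = conc.size
    if n < 4:
        return "N/A (insufficient data)"

    # S: sum of the signs of all pairwise differences (later value minus earlier value).
    s = sum(np.sign(conc[j] - conc[i]) for i in range(n - 1) for j in range(i + 1, n))

    # Normal approximation to the null distribution of S (no tie correction here).
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = 0.0 if s == 0 else (abs(s) - 1) / np.sqrt(var_s)  # continuity correction
    p = stats.norm.sf(z)          # probability of an |S| at least this large under H0
    cf = (1.0 - p) * 100.0        # confidence factor, as a percentage

    cov = conc.std(ddof=1) / conc.mean()

    if cf > 95.0:
        return "Decreasing" if s < 0 else "Increasing"
    if cf >= 90.0:
        return "Probably Decreasing" if s < 0 else "Probably Increasing"
    # CF below 90%: no statistically distinct trend
    if s < 0 and cov < 1.0:
        return "Stable"
    return "No Trend"

# Hypothetical, steadily declining benzene record (ug/L):
print(mann_kendall_class([22.0, 19.5, 19.0, 17.8, 15.2, 14.9, 12.3, 11.0]))  # -> Decreasing
```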

As an example, consider a dataset with 12 sample events with an S statistic of -26. In the Mann-Kendall table, the p value for n=12 and abs[S] = 26 is 0.043. The coefficient of variation in the dataset is 0.65.

So: CF = (1 - p) = (1 - 0.043) = 0.957, or 95.7%
S = -26
COV = 0.65

Conclusion: Decreasing Trend

With a CF of 95.7%, we have high confidence in rejecting H0. The probability of accepting the null hypothesis is only 4.3%, well below our standard of 10%. So, we conclude that the data show a strong Decreasing trend.

The power of a statistical test can be calculated from the number of samples (n), the variance in the data, alpha (or the false positive rate), and the critical effect size. For a specific statistical evaluation, the critical effect size can be difficult to identify. In the case of groundwater data, the practical quantitation limit is often designated as the critical effect size. Given these parameters, the practitioner has control only over the sample size and the detection limit.

Increasing n increases the power of the statistical analysis as long as variability in the data remains fairly low. For the Mann-Kendall analysis described above, increasing n can increase the CF, as long as the additional samples continue to reflect the trend (so that the magnitude of S keeps growing with n). Increasing the sample size can require time, as the sampling interval should be long enough to produce reasonably independent samples.
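
To put a rough number on that, here is an illustrative calculation (my own sketch, again using the normal approximation rather than the exact table) showing how the confidence factor climbs as the record lengthens, assuming the trend keeps expressing itself at the same relative strength; the 40% net-discordant-pairs figure is an arbitrary choice for the illustration.

```python
import numpy as np
from scipy import stats

def cf_normal_approx(s, n):
    """Confidence factor (%) from the normal approximation to the Mann-Kendall S statistic."""
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = 0.0 if s == 0 else (abs(s) - 1) / np.sqrt(var_s)
    return (1.0 - stats.norm.sf(z)) * 100.0

# Suppose each new sampling event keeps the trend expressing itself at the same
# relative strength, so |S| grows with the number of pairs (40% net discordant).
for n in (6, 8, 12, 16, 24):
    s = -0.4 * n * (n - 1) / 2
    print(f"n = {n:2d}, S = {s:7.1f}, CF = {cf_normal_approx(s, n):5.1f}%")
```

With the same relative trend strength, the CF moves from below 90% at n = 6 to above 95% by n = 12 -- that is the sense in which adding sampling events buys statistical confidence.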

More on this later!





Thursday, March 6, 2008

LTMO Unleashed

After many years of conducting Long-Term Monitoring Optimization (LTMO) site evaluations, supporting LTMO software, conducting trainings, answering technical support questions, and sitting through hours and hours of meetings . . . I have decided to start posting LTMO information on the web. I hope that some of the insights and observations I post here can help those of you laboring in the groundwater monitoring trenches.

By way of explanation, Long-Term Monitoring in this blog will refer to monitoring of groundwater affected by common chemical contaminants (TCE, PCE, metals, BTEX, munitions, etc.). Information posted here is not necessarily applicable to initial site characterizations, but is intended as guidance for those with well characterized sites and lots and lots of data. I will cover some of the statistical methods used to support site decision-making and software tools available. I also plan to provide observations on qualitative evaluations, which can often appear subjective and confusing.

This blog will not cover other forms of monitoring, such as air or health -- I leave that to others. I will be posting whenever I feel like it, or whenever I have the time. No promises.

Onward.