April 2014 Volume 3, Issue 2
Minimizing Systematic Errors in Quantitative High Throughput Screening Data Using Standardization, Background Subtraction, and Non-Parametric Regression
Mitas Ray1*, Keith Shockley2, and Grace Kissling2
Student1: William G. Enloe High School, 128 Clarendon Crescent, Raleigh, NC 27610
Mentor2: National Institute of Environmental Health Sciences (NIEHS/NIH), 111 TW Alexander Dr., Research Triangle Park, NC 27709
*Corresponding author: firstname.lastname@example.org
Quantitative high throughput screening (qHTS) has the potential to transform traditional toxicological testing by greatly increasing throughput and lowering costs on a per chemical basis. However, before qHTS data can be utilized for toxicity assessment, systematic errors such as row, column, cluster, and edge effects in raw data readouts need to be removed. Normalization seeks to minimize effects of systematic errors. Linear (LN) normalization, such as standardization and background removal, minimizes row and column effects. Alternatively, local weighted scatterplot smoothing (LOESS or LO) minimizes cluster effects. Both approaches have been used to normalize large scale data sets in other contexts. A new method is proposed in this paper to combine these two approaches (LNLO) to account for systematic errors within and between experiments. Heat maps illustrate that the LNLO method is more effective in removing systematic error than either the LN or the LO approach alone. All analyses were performed on an estrogen receptor agonist assay data set generated as part of the Tox21 collaboration.
The Tox21 Collaboration is an interagency collaboration among the National Institute of Environmental Health Sciences (NIEHS)/National Toxicology Program (NTP), the Environmental Protection Agency (EPA), the National Center for Advancing Translational Sciences (NCATS), and the Food and Drug Administration (FDA). It seeks to transform traditional toxicological testing, which relies on animal studies and suffers from high cost and low throughput, into cell-based assays that use technological advances to achieve much higher throughput at much lower cost1. The initiative is currently focused on screening thousands of chemicals across hundreds of cell-based assays using robotics2. A single experiment requires many 1536-well plates and usually consists of three replicate experimental runs across 15 different test concentrations. Each plate tests 1408 substances; the remaining 128 wells contain assay-specific positive and negative controls. Depending on the assay, the resulting signal is quantified with a single-readout or multiple-readout scanner.
For this study, data were used from 459 plates of an estrogen receptor agonist assay based on human ovarian carcinoma BG1 cells, generated in Phase 2 of the Tox21 Collaboration. Estrogen, a member of the steroid hormone superfamily, plays important roles in many key biological activities in various organs of the human body, including the reproductive tracts3. It has been reported that estrogen receptors (ER) are overexpressed in over two thirds of all ovarian cancer cases4. The cells contained a luciferase reporter gene, which is used to determine whether and to what extent a test chemical up-regulates the ER (i.e., is an agonist). Bioluminescence reporters such as luciferase generate light when activated and provide almost instantaneous measurements with high sensitivity5, making luciferase an ideal reporter gene for this assay. The positive control used in this assay was beta-estradiol, and the negative control was dimethyl sulfoxide (DMSO).
The data generated from this assay may contain systematic errors from sources such as reagent evaporation, liquid handling, decay of cells, and pipette function6. In addition, the chemicals being tested may affect neighboring wells through volatilization, or may autofluoresce, producing a signal strong enough to bleed into surrounding wells. These processes can produce row, column, cluster, and edge effects. A row or column effect is an error that affects an entire row or column, respectively, making measurements across it appear uniformly higher (or lower) than they should be. A cluster effect, or spatial bias, is an error that affects a surrounding group of wells. An edge effect is an error that affects the edges of the plate.
To adjust for the effects of these errors, a linear (LN) normalization method, based upon standardization and background subtraction, and a locally weighted scatterplot smoothing method (LO), are discussed. We hypothesize that combining these two methods (LNLO) will be more effective in reducing systematic errors than either the LN or LO method alone.
Materials and Methods
Data from the estrogen receptor agonist assay used for the procedure described here were generated by NCATS. These data indicate the magnitude of the luminescence signal measured in each well, in arbitrary units, with higher values associated with greater estrogen receptor stimulation. Approximately 10,000 chemicals were tested in three replicate experimental runs. All data processing and analyses were conducted using the R programming language7. Heat maps were generated using the image() function in the R/graphics package; plots were created with the plot() function and histograms with the hist() function, also in the R/graphics package.
Systematic errors may be removed using a process called normalization8. Linear (LN) normalization is a two-step process of standardization followed by background removal. Normalization within plates can be done with the mean centering and unit variance standardization method, using Equation 1:
$$z_{ip} = \frac{x_{ip} - \bar{x}_p}{s_p} \qquad (1)$$
where $z_{ip}$ is the normalized data value at well $i$ in plate $p$, $x_{ip}$ is the raw data value at well $i$, $\bar{x}_p$ is the mean value of plate $p$, and $s_p$ is the standard deviation of plate $p$6. The background is evaluated using Equation 2:
$$b_i = \frac{1}{P}\sum_{p=1}^{P} z_{ip} \qquad (2)$$
where $b_i$ is the background value at well $i$, $z_{ip}$ is defined above, and $P$ is the total number of plates5. This background surface is then removed from each plate by subtraction.
On each plate, there were 16 positive control wells, 32 negative control wells, and 1408 wells of tested chemicals. Equation (1) was first applied to each of the 459 plates of the assay to provide within plate standardization. Equation (2) was then applied to each well to obtain a background value based on all wells in the assay. Finally, a normalized percent positive control value was obtained using Equation 3:
$$y_{ip} = 100 \times \frac{z_{ip} - \bar{z}_p^{\,-}}{\bar{z}_p^{\,+} - \bar{z}_p^{\,-}} - b_i \qquad (3)$$
where $y_{ip}$ is the normalized percent positive control at well $i$ on plate $p$, $z_{ip}$ is the normalized value at well $i$ on plate $p$, $\bar{z}_p^{\,-}$ is the mean of the normalized negative controls on plate $p$, $\bar{z}_p^{\,+}$ is the mean of the normalized positive controls on plate $p$, and $b_i$ is the background surface at well $i$. Controls were normalized through standardization only, using Equation (1), based on previous work6. Equation (3) produces our LN data. Note that Equation (3) modifies the version applied to antagonist assays in previous work6 to make it applicable to agonist assays: the mean of the negative controls is subtracted from the data point, rather than the data point being subtracted from the mean of the positive controls.
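To make the LN pipeline concrete, the three steps (Equations 1 through 3) can be sketched in a few lines. The paper's analyses were performed in R; the Python sketch below is illustrative only, and the function names (standardize, background, ln_normalize) are ours, not from the authors' code. Plates are assumed to be equal-length lists of well readouts in a fixed well order.

```python
from statistics import mean, stdev

def standardize(plate):
    """Equation 1: mean-center a plate and scale to unit variance."""
    m, s = mean(plate), stdev(plate)
    return [(x - m) / s for x in plate]

def background(z_plates):
    """Equation 2: per-well background = average standardized value
    at that well position across all plates."""
    n_plates = len(z_plates)
    return [sum(zs) / n_plates for zs in zip(*z_plates)]

def ln_normalize(z_plate, bg, neg_mean, pos_mean):
    """Equation 3: percent-of-positive-control with the background
    surface subtracted, using per-plate control means."""
    return [100.0 * (z - neg_mean) / (pos_mean - neg_mean) - b
            for z, b in zip(z_plate, bg)]
```

In practice, Equation 1 would be applied to each of the 459 plates, Equation 2 once across all standardized plates, and Equation 3 per plate using that plate's normalized control means.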
Normalization using a nonparametric local regression, or loess (LO), method smooths the data by averaging neighboring values. The size of the neighborhood used for LO depends on the span9, the proportion of the total number of data points that influences each local fitted value. A higher span results in a smoother curve, but also a less accurate predicted value. To determine the optimal span, the Akaike information criterion (AIC) was used. The AIC rewards accuracy of fit, measured through the sum of squared residuals, but penalizes increased complexity of fit10. The optimal span is the one that minimizes the AIC value.
A loess smoothing technique (LO) was also applied to the adjusted raw data expressed as a percent of positive control, using the loess() function available in the R/stats package. To determine the optimal smoothing parameter (the span value), an AIC value was calculated for each span from 0.02 to 1.00, in increments of 0.01, for each plate. The LO method was applied first to the raw data, producing the LO normalized data, and then to the LN data, yielding the LNLO normalized data. The four approaches examined here are outlined in Figure 1.
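The span-selection idea can be illustrated with a simplified sketch. The paper used R's loess() (local linear or quadratic regression); the degree-0 (locally weighted mean) smoother below is a minimal stand-in that only shows how a corrected AIC criterion trades residual error against effective degrees of freedom (the trace of the smoother's hat matrix). All names are illustrative, and the AICc form used is one common variant for linear smoothers, not necessarily the exact criterion the authors computed.

```python
import math

def tricube(u):
    """Tricube kernel: smooth weight that falls to zero at distance 1."""
    u = abs(u)
    return (1.0 - u ** 3) ** 3 if u < 1.0 else 0.0

def local_mean_smooth(x, y, span):
    """Degree-0 local regression: each fitted value is a tricube-weighted
    average over the nearest span-fraction of points. Returns the fitted
    values and the hat-matrix trace (effective degrees of freedom)."""
    n = len(x)
    k = max(2, math.ceil(span * n))    # points per neighborhood
    fitted, trace = [], 0.0
    for i in range(n):
        d = sorted(abs(xj - x[i]) for xj in x)
        h = d[min(k, n) - 1] or 1e-12  # neighborhood radius
        w = [tricube((xj - x[i]) / h) for xj in x]
        sw = sum(w)                    # w at the point itself is 1, so sw > 0
        fitted.append(sum(wj * yj for wj, yj in zip(w, y)) / sw)
        trace += w[i] / sw             # diagonal element of the hat matrix
    return fitted, trace

def aicc(y, fitted, trace):
    """A corrected AIC for linear smoothers: penalizes both residual error
    and effective degrees of freedom, diverging for near-interpolating fits."""
    n = len(y)
    rss = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
    if rss <= 0.0 or trace + 2.0 >= n:
        return math.inf                # degenerate (interpolating) fit
    return math.log(rss / n) + (1.0 + trace / n) / (1.0 - (trace + 2.0) / n)

def best_span(x, y, spans):
    """Grid search over candidate spans, mirroring the per-plate search."""
    return min(spans, key=lambda s: aicc(y, *local_mean_smooth(x, y, s)))
```

A very small span lets the smoother interpolate the data (trace near the number of points), which the criterion rejects; a span of 1.00 uses every point for every fit, the case the paper associates with plates showing no cluster effect.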
Results
Heat maps were generated to compare raw data not subjected to normalization procedures with data normalized using the LN, LO, and LNLO approaches for every plate of this assay. Heat maps provide graphical representations of the data in which a color scheme indicates the magnitude of the signal in each well; this tool enables visualization of the effects of systematic errors. Each color in the color scheme represents a different range of values based on the range of values on the plate.
An example plate (plate 102) was chosen to illustrate our approach (Figure 2). Each plate in this figure consists of 32 rows and 44 columns of data for the 1408 wells of chemicals being assayed. The data are graphically represented as a percentage of the positive control response (beta-estradiol at 2.3), where darker shades indicate lower luminescence and lighter shades indicate higher luminescence. On the raw plate, note the row effects, indicated by several darker rows in the lower half of the plate, and a cluster effect, indicated by the lighter area near the top of the plate. Plate 102 was first linearly normalized (LN). Notice that the darker rows in the lower half of the plate become noticeably brighter, indicating that some of the row effect has been removed. For LO normalization, the optimal span value (lowest AIC value) was determined to be 0.08 (Figure 3). The LO normalization approach was performed on both the raw and the linearly normalized data to produce the LO normalized and LNLO normalized data, respectively. Using LO, the cluster effect near the top of the plate was greatly reduced (Figure 2). The LNLO approach reduced both the row effects and the cluster effects.
The optimal span value determined for the LO normalization varied between 0.05 and 1.00 among the 459 plates (See Figure 4). The majority of the span values were 1.00, suggesting that many plates did not show cluster effects. LNLO normalization was effective in reducing row, cluster, and edge effects (See Figure 5). Many of the systematic errors presented in Figures 2 and 5 were also seen in other plates in this assay. The LNLO normalization worked well in reducing these systematic effects.
Discussion
Quantitative HTS (qHTS) is a relatively recent technology, and methodological development for data analysis is still in its early stages11. Currently, there is no literature comparing normalization methods for qHTS data, although such methods are available for HTS data6,11 and microarray data8.
Once the data are normalized, connections can be made between certain chemicals and their potency12, a measure of compound activity, in inducing the ER in a human ovarian carcinoma cell line. The established connections between estrogen receptors and human ovarian cancer make this assay useful for identifying chemicals that bind to the estrogen receptor. However, for these data to be useful, data analysis procedures must successfully account for systematic errors.
While linear normalization methods such as the LN method described here can be effective for removing row or column effects, they may not adequately correct for other types of systematic errors, such as leaking signal, that produce cluster effects. These cluster effects, or spatial bias, appear as prominent regions of the plate that produce higher or lower signals than the rest of the plate7. For instance, cluster effects are visible in Figure 2 and in plate 139 of Figure 5. To minimize these effects, a non-linear normalization method, the LOESS method, was introduced.
The systematic errors present in qHTS data can hinder the potential effectiveness of these assays to complement traditional testing schemes. A new method to normalize raw qHTS data, LNLO, is proposed in this paper. We found that this method improves the quality of the data by removing the effects of both linear and nonlinear systematic errors in visual representations. Such graphical comparisons suggest that combining the two normalization methods is more effective in removing systematic errors than applying either normalization method alone.
Although we do not claim that data normalized using our LNLO approach are completely free of systematic error, our approach moves the field forward by demonstrating the need for systematic error correction and by visibly reducing those effects in graphical plots. Future work is needed to extend this procedure to remove further systematic errors and to evaluate bias. Another topic of research would be determining a way to quantify the effectiveness of a normalization procedure. Currently, a statistical quantity called the Z factor13, based on the standard deviations and means of the samples and controls, is commonly used to compare and evaluate the quality of raw qHTS data. However, the Z factor only quantifies the acceptability of the data for further processing and cannot determine the effectiveness of normalization methods. Therefore, more work is needed on methods for removing systematic errors and evaluating the effectiveness of normalization approaches in qHTS data.
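For reference, the Z factor has a simple closed form. The sketch below (our function name, not from the paper's code) follows the definition in Zhang et al.13, computed from the positive- and negative-control wells.

```python
from statistics import mean, stdev

def z_factor(positive, negative):
    """Z = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|
    (Zhang et al., 1999). Values near 1 indicate well-separated
    controls; by a common rule of thumb, Z > 0.5 marks an assay
    as suitable for screening."""
    return 1.0 - 3.0 * (stdev(positive) + stdev(negative)) \
        / abs(mean(positive) - mean(negative))
```

As noted above, this quantifies control separation in the raw data but says nothing about whether a normalization procedure actually removed spatial bias.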
Another area of further research is to evaluate the effectiveness of the normalization techniques on replicate chemicals. The replicates could be analyzed to observe whether the normalization techniques reduce the variability among the replicates. Such observations could further confirm the effectiveness of our normalization methods.
References
1. Collins, Gray, Bucher. (2008). Transforming environmental health protection. Science 319, 906-907.
2. Tice et al. (2013). Improving the Human Hazard Characterization of Chemicals: A Tox21 Update. Environ Health Perspect 121, 756-765.
3. Li et al. (2013). Endocrine-Disrupting Chemicals (EDCs): In Vitro Mechanism of Estrogenic Activation and Differential Effects on ER Target Genes. Environ Health Perspect 121, 459-466.
4. Issa et al. (2009). Estrogen Receptor Gene Amplification Occurs Rarely in Ovarian Cancer. Modern Pathology 22, 191-196.
5. Allard, Kopish. (2008). Luciferase Reporter Assays: Powerful, Adaptable Tools for Cell Biology Research. Cell Notes 21, 23-26.
6. Kevorkov, Makarenkov. (2005). Statistical Analysis of Systematic Errors in High-Throughput Screening. Journal of Biomolecular Screening 10, 557-567.
7. Ihaka, Gentleman. (1996). R: A Language for Data Analysis and Graphics. Journal of Computational and Graphical Statistics 5, 299-314.
8. Edwards, David. (2002). Non-linear normalization and background correction in one-channel cDNA microarray studies. Bioinformatics 19, 825-833.
9. Cleveland, Grosse, Shyu. (1992). Local Regression Models. Chapman and Hall, 309-376.
10. Cohen, Robert. (1999). An Introduction to PROC LOESS for Local Regression. Proceedings of the 24th SAS Users Group International Conference 273.
11. Inglese et al. (2006). Quantitative high-throughput screening: a titration-based approach that efficiently identifies biological activities in large chemical libraries. Proc Natl Acad Sci USA 103, 11473-11478.
12. Shockley. (2012). A Three-Stage Algorithm to Make Toxicologically Relevant Activity Calls from Quantitative High Throughput Screening Data. Environ Health Perspect 120, 1107-1115.
13. Zhang et al. (1999). A Simple Statistical Parameter for Use in Evaluation and Validation of High Throughput Screening Assays. Journal of Biomolecular Screening 4, 67-73.
Acknowledgments
We would like to thank Ms. Debbie Wilson and all who made the Summers of Discovery program possible at the NIEHS. We would also like to thank Drs. Shyamal Peddada and Marjo Smith for reviewing our paper in the internal review process, and Dr. Raymond Tice for allowing us to use the raw data from the estrogen receptor agonist assay. This research was supported [in part] by the Intramural Research Program of the NIH, National Institute of Environmental Health Sciences (ZIA ES102865).