Chapter 22

Quality Assurance Indicators for Immunoassay Test Kits 1

2

2

William A. Coakley , Christine M. Andreas , and SusanM.Jacobowitz Downloaded by NATL UNIV OF SINGAPORE on November 8, 2017 | http://pubs.acs.org Publication Date: October 23, 1996 | doi: 10.1021/bk-1996-0646.ch022

1

Environmental Response Center, U.S. Environmental Protection Agency, and Roy F. Weston/REAC, 2890 Woodbridge Avenue, Edison, NJ 08837 2

Increasing costs associated with environmental site investigations have led to the emergence of various field screening techniques to streamline the process and help reduce analytical costs. In keeping with this trend, the U.S. Environmental Protection Agency (U.S. EPA), Environmental Response Team (ERT), is currently employing immunoassay test kits at a variety of sites. Critical to using these test kits are the Quality Assurance (QA) indicators used to establish data of known and acceptable quality. When considering QA indicators of confidence for the test kits, the user should consider both generic and core indicators. Generic indicators are requirements which are common to all analytical data-generation methods. Core indicators are method-specific requirements established just for the immunoassay test kits. Both sets of criteria must be included as part of the QA evaluation when determining overall quality of the data. This paper discusses how to apply these QA indicators to generate data of known and acceptable quality for immunoassay test kits.

Increasing costs associated with conducting environmental site investigations have led to the emergence of various field screening techniques to streamline the process and reduce analytical costs. These field screening techniques are typically procedures capable of providing the project manager with near real-time data, at lower costs than those incurred with standard laboratory analytical methods. Lower analytical costs also allow the project manager to collect data from a greater number of locations, increasing the sample pool size for selection of more focused samples for traditional laboratory analysis, thus speeding up and improving the site characterization process. One of the field screening methods currently being employed is the immunoassay test kit.

0097-6156/96/0646-0254$15.00/0 © 1996 American Chemical Society Van Emon et al.; Environmental Immunochemical Methods ACS Symposium Series; American Chemical Society: Washington, DC, 1996.

COAKLEY ET AL. Quality Assurance Indicators for Immunoassay Test Kits 255


While immunoassays have been employed by the medical diagnostics industry for years, their applications for the environmental field were not developed until the late 1980s. Numerous immunoassay test kit applications have recently been proposed as draft or have received final approval as part of SW-846 methodologies. These particular methods are considered semiquantitative screening methods. It should be noted, however, that some manufacturers have developed quantitative assays which may also be employed.

The U.S. Environmental Protection Agency (U.S. EPA), Environmental Response Team (ERT), is currently employing immunoassay test kits at a variety of sites. Critical to using these test kits are the identification and application of QA indicators used to establish data of known and acceptable quality. Recent U.S. EPA Superfund Program guidance has established a baseline set of criteria which are applicable when generating data with immunoassay test kits. Major components of this process include development of data quality objectives (DQOs) and preparation of a site-specific quality assurance program plan (QAPP) to ensure the generation of data of known and acceptable quality.

Superfund activities involve the collection, evaluation, and interpretation of site-specific data. As part of Superfund requirements, the U.S. EPA developed and implemented a mandatory QA program with respect to the generation of environmental data. This program also includes a process for developing DQOs. The DQO process is a planning tool which helps site managers determine what type, quantity, and quality of data will be required for environmental decision-making.
Guidance on the DQO process is described in "Data Quality Objectives Process for Superfund" (1). This guidance superseded an earlier guidance document which described the DQO process and five associated analytical levels for remedial response activities (2). Superfund data requirements include the development of DQOs as well as a site-specific QAPP. The overall goal is to generate data of known and acceptable quality. Benefits of developing DQOs and incorporating them into the data generation process include: 1. scientifically and legally defensible data collection; 2. establishment of a framework for organizing existing QA planning procedures; 3. production of specific data quality for specific methods; 4. assistance in developing a statistical sampling design; 5. a basis for defining QA/Quality Control (QC) requirements; 6. reduction of overall project costs; and 7. elucidation of two data categories. Achieving these benefits is contingent upon clearly defining the qualitative and quantitative DQOs that will be applied to the process. Related to these specific DQOs are specific QA/QC requirements. Superfund has developed two descriptive QA/QC data categories: 1. screening data with definitive confirmation, and 2. definitive data. A wide range of analytical methods are available that meet


ENVIRONMENTAL IMMUNOCHEMICAL METHODS

the requirements of these two data categories. Immunoassay test kits fit into the first category. The screening data with definitive confirmation category, described in the "Data Quality Objectives Process for Superfund", covers data generated by rapid, less precise analytical methods. It provides analyte identification and quantification, even though the quantification may be imprecise. A minimum of 10% of the screening data samples must be confirmed by a rigorous analytical method, with QA/QC procedures typically associated with definitive data. Screening data are not considered data of known quality unless associated with confirmation data. QA/QC elements associated with screening data are summarized in Table I (1).

TABLE I. Screening Data QA/QC Elements
• Sample documentation
• Chain-of-custody, when appropriate
• Sampling design approach
• Initial and continuing calibration
• Determination and documentation of detection limits
• Analyte(s) identification and quantification
• Analytical error determination
• Definitive confirmation
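The 10% confirmation minimum described above can be applied directly when planning a sampling event. The sketch below is illustrative only: the round-up behavior and the one-sample floor for very small batches are assumptions of this sketch, not requirements stated in the guidance.

```python
import math

def confirmation_samples(n_screened: int, fraction: float = 0.10) -> int:
    """Minimum number of field-screened samples to send for definitive
    confirmation. Rounds up so the 10% floor is never undershot; a
    one-sample floor is assumed for any nonempty batch."""
    if n_screened < 1:
        return 0
    return max(1, math.ceil(n_screened * fraction))

# e.g., 47 field-screened samples -> at least 5 confirmation samples
print(confirmation_samples(47))
```

In practice the confirmation samples would also be selected to span the concentration range reported by the screening method, not just to satisfy the count.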

The definitive data category is described as data generated using rigorous analytical methods, typically EPA-approved reference methods. Definitive methods produce analyte-specific data with confirmation of analyte identity and quantification with tangible raw data output. Definitive data require determination of analytical or total measurement error. QA/QC elements associated with definitive data include those identified in Table I for screening data, in addition to those elements summarized in Table II.

Superfund guidelines require the use of quantitative immunoassays (Table I). Although immunoassay test kits meet the requirements of the screening with definitive confirmation data category, in order for data from test kits to truly fit into this category, some type of analyte quantitation procedure must be employed. Test kit results that simply indicate the presence or absence of an analyte relative to a standard or control sample do not satisfy the criteria set forth by the Agency. A calibration procedure, preferably with a hard copy output from the instrument, must be performed, accompanied by the appropriate documentation. However, the authors acknowledge that numerous immunoassay test kit methods accepted by the


SW-846 methods manual allow for the semi-quantitative interpretation of results. When using immunoassay test kits for the Resource Conservation and Recovery Act (RCRA) Program, data generated may simply indicate presence and greater-than or less-than concentrations relative to some predetermined analyte standard(s). However, assuming that the immunoassay test kits can be used as required in the "Data Quality Objectives Process for Superfund" (1), there are a number of confidence indicators that should be evaluated.

Quality assurance indicators for immunoassay methods must be considered indicators of confidence from an overall method perspective. In general terms, any analytical method may be looked at in terms of a subset of core, method-specific indicators within a set of generic, overall indicators of confidence. In addition to the generic indicators required by the Agency (1), the authors consider the core indicators specified in Table III as necessary in determining overall data quality.
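As a rough illustration of the quantitation requirement discussed above, a standard curve can be read by interpolation. The competitive format (absorbance falling as concentration rises), the log-linear interpolation model, and every numeric value below are illustrative assumptions; the actual calibration model and standard concentrations come from the kit manufacturer's insert.

```python
import numpy as np

# Hypothetical calibration standards for a competitive-format kit:
# absorbance falls as analyte concentration rises. Values are
# illustrative only, not taken from any real kit.
std_conc = np.array([0.5, 2.0, 10.0, 50.0])    # ppm
std_abs  = np.array([1.20, 0.90, 0.55, 0.25])  # instrument absorbance

def interpolate_conc(sample_abs: float) -> float:
    """Log-linear interpolation of concentration from absorbance.
    np.interp requires increasing x, so the standards are first
    sorted by absorbance."""
    order = np.argsort(std_abs)
    log_c = np.interp(sample_abs, std_abs[order], np.log10(std_conc)[order])
    return float(10 ** log_c)

print(interpolate_conc(0.55))  # a sample reading at the 10 ppm standard
```

Whatever model is used, the point for QA purposes is that the curve, the raw readings, and the interpolated results are all documented with hard copy output.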

TABLE II. Definitive Data QA/QC Elements
• Sample documentation
• Chain-of-custody required
• Sampling design approach
• Initial and continuing calibration
• Determination and documentation of detection limits
• Analyte(s) identification and quantification
• Analytical error determination
• Definitive confirmation
• QC blanks (trip, method, rinsate)
• Matrix spike recoveries
• Performance Evaluation (PE) sample, when specified
• Analytical error or total error determination

The generic indicators of confidence include: blanks, documentation, matrix spikes, calibration standards, sample preparation, representativeness, comparability, confirmation analysis, and replicates. These are QA indicators of confidence that are associated with all analytical methods and must be evaluated in order to determine whether data generated meet the QA/QC objectives outlined in the project-specific DQOs generated at the commencement of the project.



Conversely, the core indicators reflect method-specific indicators of confidence which vary based on the analytical method employed for the data generation activity. For immunoassay test kits, which employ antibodies as the mode of detection and quantitation, the core indicators of confidence currently include: temperature, target analyte specificity, interference, moisture content, dilutions, stability, reaction time, and user friendliness. While the authors acknowledge that specificity and interference may be generic indicators, they were included with the core indicators to emphasize their importance relative to immunoassay test kits.

Generic Indicators of Confidence

A review of the generic QA performance indicators shows how these elements apply to any method and should be evaluated in order to generate data of known and acceptable quality - a main focus of the Superfund Program.

TABLE III. Indicators of Confidence

Generic Indicators        Core Indicators
Blanks                    Temperature
Documentation             Analyte Specificity
Matrix Spikes             Non-analyte Interference
Calibration Standards     Moisture Content
Sample Preparation        Dilutions
Representativeness        Stability
Comparability             Reaction Time
Confirmation Analysis     User Friendliness
Replicates

Blanks. Blanks of various types may be included with field sample-collection activities, but must be included with confirmation samples being sent for laboratory analyses. These may include trip, field, method, or rinsate blanks. Data generated from blanks may be used to assess contamination error associated with sample collection, sample preparation, and analytical procedures.

Documentation. Sufficient documentation must be maintained for all aspects


of the sample collection and analysis process. Documentation verifies adherence to procedures specified in the site-specific QA plan or documents any deviations from the plan with an explanation for the occurrence.


Matrix Spikes. Matrix spike results are used primarily to determine matrix interference by calculating the percent recovery (%R) and comparing this value to an established acceptance range. For this reason, the matrix spike is also an indicator of accuracy.

Calibration Standards. Method sensitivity, detection limit, and linearity are evaluated by analyzing calibration standards. Proper calibration procedures ensure accurate results.

Sample Preparation. Sample preparation should adhere to established procedures to ensure homogeneity of the sample. This is especially critical when splitting samples or taking replicate aliquots from the same sample.

Representativeness. In terms of representativeness, samples collected must adequately characterize the area under investigation.

Comparability. In order for data generated to be comparable, the sample handling, preparation, and analytical procedures employed for one sample must be maintained for all samples.

Confirmation Analysis. As dictated by EPA guidance, a minimum of 10% of the screened samples must be confirmed by a more rigorous analytical method in order to obtain data of known quality. Confirmation ensures verification of identification and quantitative accuracy by an approved method. Sample preparation may play a major role: field screening immunoassays employ extraction procedures that may differ greatly from those suggested in the definitive data category.

Replicates. Lastly, replicates should be analyzed as an indicator of precision. Results generated are used to assess error associated with sample heterogeneity, sampling methodology, and analytical procedures.

All generic indicators must be considered and, depending on the field analysis procedure employed, must be incorporated into site activities.
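The percent recovery and replicate precision checks described above reduce to simple formulas. This sketch assumes the common definitions of %R and relative percent difference (RPD); acceptance ranges remain method- and kit-specific.

```python
def percent_recovery(spiked_result: float, native_result: float,
                     spike_added: float) -> float:
    """Matrix spike percent recovery:
    %R = 100 * (spiked result - native result) / amount spiked.
    The value is compared to an established acceptance range."""
    return 100.0 * (spiked_result - native_result) / spike_added

def rpd(a: float, b: float) -> float:
    """Relative percent difference between duplicate results, a
    common precision check for replicate analyses."""
    return 100.0 * abs(a - b) / ((a + b) / 2.0)

# e.g., native 2.0 ppm, 11.5 ppm after a 10.0 ppm spike -> 95% recovery
print(percent_recovery(11.5, 2.0, 10.0))  # 95.0
```

A low %R flags possible matrix interference; a large RPD between replicates flags sample heterogeneity or analytical imprecision.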

Core Performance Indicators of Confidence Whereas the generic indicators focus on the overall performance of sampling and analysis, the core indicators are more refined and focus on errors


associated with the mode of analytical detection. The QA indicators of confidence/error are no longer applicable across the board; rather, the indicators become more exacting and specific to the method being performed. Because immunoassay test kits employ antibodies, the core indicators determined to be of major significance at this time include: temperature, specificity, interference, moisture content, dilutions, stability, reaction time, and user friendliness. This list may not be all-inclusive and is subject to change at any time, depending on manufacturers' modifications to existing products and any future findings by EPA. A clear understanding of these core indicators is essential to accurately interpret data generated by the immunoassay procedure. A discussion of each indicator follows.

Temperature. Both reagents and equipment should be used at ambient temperature. Manufacturers recommend storing the kit and reagents at 4°C to 8°C. Immunoassay reactions are equilibrium reactions and are sensitive to temperature; therefore, kits should be given enough time to equilibrate to ambient conditions before performing the analysis. Extreme cold decreases the concentration range of the assay, while excessive heat may affect maximum antibody binding ability. A simple thermometer can be used to monitor the temperature, and these readings should be documented. A standard practice of allowing reagents and equipment to equilibrate to room temperature for one hour is recommended. For example, never use a standard at ambient temperature with samples that have been refrigerated and not allowed to equilibrate to ambient temperature prior to analysis (3).

Analyte Specificity.
Depending on the particular test kit being utilized, specificity, or cross-reactivity, may contribute significantly to the final result. Before determining whether a particular kit will provide useful data, the site manager should review the manufacturer's information on other, chemically similar compounds which the immunoassay kit cannot distinguish from the primary contaminant of concern. For example, if pentachlorophenol (PCP) is the primary contaminant of concern at the site, a PCP kit may be selected. However, if other closely related compounds, such as di- or tri-chlorinated phenols, are also present, the kit may not differentiate between PCP and these related compounds. Information provided by manufacturers includes a list of cross-reacting compounds and the concentration required for a positive response. Depending on the analyte and the immunoassay, in order for cross-reactivity to be of concern, these chemically similar cross-reactants may need to be present in concentrations that are orders of magnitude greater than the target analyte, or may need to be present in just slightly greater concentrations. It is important to have some site background information prior to determining whether immunoassay test kits meet your particular site data generation

Van Emon et al.; Environmental Immunochemical Methods ACS Symposium Series; American Chemical Society: Washington, DC, 1996.

Downloaded by NATL UNIV OF SINGAPORE on November 8, 2017 | http://pubs.acs.org Publication Date: October 23, 1996 | doi: 10.1021/bk-1996-0646.ch022

requirements. If one or some of the cross-reactive compounds are present in significant quantities at the site and immunoassay test kits are used, the end data user should consider the potential impact that the presence of these substances may have on the data, and carefully weigh decisions made based solely on immunoassay data. Using a spiked sample can aid the end user in determining how to interpret the results obtained. Cross-reactivity can also be checked by confirming results with an approved U.S. EPA method. It must be remembered that reliance is being placed on an antibody to detect an analyte; there is no spectrum or chromatogram as proof of identification.

Non-analyte Interference. The effect of fuel oil in concentrations greater than 10% in the sample has not yet been determined. Method 4010 (4) indicates that no interference was observed in samples with up to 10% oil contamination. Whenever fuel oil is suspected of being present, regardless of concentration, it is wise to run a matrix spike to check for interference. If interference occurs, use clean-up procedures (e.g., Florisil or gel permeation chromatography) to eliminate the fuel oil from the extract prior to performing the immunoassay.

Moisture Content. Moisture content of samples will vary with the type and location of the sample collected. When possible, samples should appear to be dry to minimize any potential error introduced by the presence of water. If it does not affect the analyte of concern (e.g., volatile organics), samples should be air dried prior to preparation for analysis. However, if this is not possible, currently approved immunoassay methods (Methods 4010, 4030, 4031, and 4035) (4) state that up to 30% water in soil had no detectable effect on the resultant analytical data.
If the analyte(s) of interest is volatile, such that air drying is not an option, a determination of the percent moisture should be performed and the appropriate correction factor applied to the results. Percent moisture should always be factored into the final results, even if the percent moisture is less than 30%, to ensure data that are comparable to the definitive confirmatory method results, which are generally based on dry weight.

Laboratory-prepared soil samples, spiked to adjust the pH to 2-4 and 10-12, indicated that samples with pH ranging from 3 to 11 had no detectable effect on the performance of the method. If there is reason to suspect very acidic or basic conditions on site, soil pH should be determined prior to immunoassay analysis.

Dilutions. For some of the test kits, it is important to accurately dilute the standard and sample extract to the level of interest. One will need to perform serial dilutions, which can compound error, especially when performed by an inexperienced technician. Clearly written procedures for serial dilutions can both document and avoid dilution errors.
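The correction factor called for under Moisture Content above can be sketched as the standard wet-to-dry-weight conversion; the exact correction procedure for a given method should be taken from that method rather than from this illustration.

```python
def dry_weight_result(wet_result: float, percent_moisture: float) -> float:
    """Convert a wet-weight result to a dry-weight basis so screening
    results compare with confirmatory laboratory data, which are
    generally reported on dry weight:
        C_dry = C_wet / (1 - moisture fraction)."""
    if not 0.0 <= percent_moisture < 100.0:
        raise ValueError("percent moisture must be in [0, 100)")
    return wet_result / (1.0 - percent_moisture / 100.0)

# e.g., 8.0 ppm wet-weight at 20% moisture -> 10.0 ppm dry-weight
print(dry_weight_result(8.0, 20.0))  # 10.0
```

Applying the correction even below 30% moisture keeps screening and confirmation results on the same basis.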


Some test kits may require the preparation of reagents before use. In these cases, the use of clean equipment and measuring devices is crucial. Also, during preparation, solutions must be thoroughly mixed.
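The compounding of serial dilutions discussed above can be tracked with a short calculation; the step factor and number of steps below are illustrative.

```python
def serial_dilution_factors(step_factor: float, n_steps: int) -> list:
    """Cumulative dilution factor after each step of a serial dilution.
    Pipetting error compounds with each step, which is why clearly
    written, documented procedures matter."""
    return [step_factor ** i for i in range(1, n_steps + 1)]

# e.g., four successive 1:10 steps -> overall dilutions of
# 10, 100, 1000, and 10000
print(serial_dilution_factors(10, 4))
```

Recording which cumulative factor was used for each extract also makes it straightforward to convert a diluted reading back to the original-sample concentration.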


Stability. Test kits should not be used beyond their expiration dates. Typical shelf life for test kits is 12 months, with some kits extending to 18 months. Components from one test kit should not be interchanged with components from another kit.

Reaction Time. Timing of the immunochemical incubation between individual samples is critical, as color intensity is being compared to a standard. Assay drift may occur from deviations in the timing of the immunochemical incubation between samples. Immunoassay tests are run in batches that contain standards, controls, and samples. When the procedure uses sequential pipetting steps, there can be a significant difference in the timing between the first and last sample. However, the incubation for all the samples is terminated at the same time by performing the separation steps as a group in a tray or rack. The magnitude of this error depends on the reaction time difference between samples and the rate of the binding reaction. Therefore, it is important to be thorough and consistent throughout the test procedures, and to incorporate the use of an electronic timer (3).
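The timing drift described under Reaction Time can be quantified for a batch; the well counts and times below are illustrative assumptions.

```python
def incubation_times(n_samples: int, pipette_interval_s: float,
                     stop_at_s: float) -> list:
    """Per-sample incubation time when wells are pipetted sequentially
    but the separation step stops all of them at once. The spread
    between the first and last well is the timing drift at issue."""
    starts = [i * pipette_interval_s for i in range(n_samples)]
    return [stop_at_s - t for t in starts]

# 8 wells pipetted 15 s apart, all stopped at 600 s: the first well
# incubates 600 s, the last only 495 s, a 105 s spread
times = incubation_times(8, 15.0, 600.0)
print(times[0] - times[-1])  # 105.0
```

Whether a given spread matters depends on the binding kinetics of the particular kit, which is why consistent technique and an electronic timer are recommended.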

User Friendliness. Although user friendliness may not be a truly measurable indicator of confidence, ease of use plays a role in generating quality data. Due to the use of different reagents and dilutions, immunoassay test kits involve many manipulations that can lead to errors. Fewer steps, reagents, pipettes, and pieces of glassware would help to minimize user errors.

In addition, test kits require training prior to field use, especially if the individual performing the immunoassay procedure is not an experienced chemist. Training should include a good understanding of the principles behind immunoassay and familiarity with the steps involved in conducting the test, such as pipetting procedures. It is highly recommended that anyone performing the immunoassay procedure receive training from an experienced co-worker or directly from the manufacturer, if available. If the manufacturer does not provide training, a dry run through the test kit procedure should be completed prior to field activities. All potential users should participate in this training, and all training should be documented. To decrease the error due to operator variability, it is recommended that one operator complete all procedures associated with a particular batch.


Current Status of ERT Activities Relative to Test Kits


Currently, ERT is in the process of developing an Immunoassay Technical Information Bulletin. This bulletin will contain general information on immunoassay techniques, equipment/apparatus, sample preparation, documentation and reporting, QA/QC criteria, interferences and potential sources of error, and limitations. In addition, standard operating procedures (SOPs) have been developed for test kits routinely used by ERT.

Conclusions

Immunoassay methods have great potential for field analyses. Ideally, antibodies should be analyte-specific; methods should be capable of detecting analytes in low parts-per-billion concentrations; and kits should be usable with complex physical and chemical matrices. However, at this time, the present test kits do not have all these characteristics. Some methods are not analyte-specific, but instead may react with analytes of the same class or functional group; the kits can detect ppb levels only if the solvent (usually dilute methanol) can extract the analyte efficiently. In addition, complex matrices such as soils impregnated with fuel oils, tars, and other organic material can interfere with antibody binding activity.

When considering QA indicators of confidence for the test kits, the user should consider both generic and core indicators. Generic indicators of confidence are those requirements which are common to all analytical data-generation methods. These include: blanks, documentation, matrix spikes, calibration standards, sample preparation, representativeness, comparability, confirmation analysis, and replicates. Core indicators are those method-specific requirements established just for the immunoassay test kits. These criteria must be included as part of the QA evaluation when determining overall quality of the data. They include: temperature, analyte specificity, non-analyte interference, moisture content, dilutions, stability, reaction time, and user friendliness. Clearly, as test kits are refined, these core indicators may be modified. It is the combination of generic and core indicators that is necessary for data to be in compliance with Superfund Program requirements for generating data of known and acceptable quality.

An ideal immunoassay method would have the following attributes: 1. direct use in the field with no need for an on-site lab; 2. capability of analyte quantification; 3. utilization of a calibration curve; 4. minimal steps (i.e., extract, react, and measure concentration); 5. no dependence on reaction timing; 6. greater method specificity; 7. a self-contained "black box" that incorporates a measurement detector; 8. minimal interference from matrix effects; and 9. greater efficiency in analyte extraction from solid matrices. Of course, an immunoassay "dip stick" method, calibrated to specific


concentration ranges, similar to sugar-in-urine test kits, would be widely welcomed for field use.


In searching for this ideal method, the ERT is presently evaluating other means of utilizing antibodies. These methods include electrochemical and fiber-optic techniques which incorporate most of the attributes mentioned above.

Literature Cited

1. U.S. EPA. "Data Quality Objectives for Superfund - Interim Final Guidance," EPA 540-R-93-071, U.S. Environmental Protection Agency, Washington, DC, 1993.
2. U.S. EPA. "Data Quality Objectives for Remedial Response Activities," EPA 540-G-87/003, U.S. Environmental Protection Agency, Washington, DC, 1987.
3. Hayes, Mary C.; Dautlick, Joseph X.; Herzog, David P. "Quality Control of Immunoassays for Pesticide Residues," Ohmicron Corporation, 1993.
4. U.S. EPA. "Test Methods for Evaluating Solid Waste (SW-846)," Methods 4010, 4030, 4031, and 4035, U.S. Environmental Protection Agency, Washington, DC, 1995.
