Thursday, July 31, 2014

Infectious Disease Assay Development: Choosing the Appropriate External Controls

Cara N. Wilder, Ph.D.

During the development of a molecular-based assay for infectious disease research, or when using a pre-qualified assay or sequencing tool, it is important to select appropriate external controls to evaluate and verify the performance of each process. This testing is imperative for tracking drift and run-to-run variation within a procedure. In this third of three articles, we will discuss the importance of choosing appropriate external controls, and will provide guidance on selecting the cultures and nucleic acids best suited to your tests.

There are a number of different types of external controls that should be employed as part of your good laboratory practices when developing, validating, or evaluating a novel molecular-based assay or tool. These controls are positive or negative references that are treated in parallel with test specimens to verify technical performance and interpret the quality of data. When used properly, external controls can both confirm that a test is performing correctly and help identify problems in the event of a test failure.

External controls can be used to assess a number of sources of variability, including sample collection, nucleic acid extraction, sample preparation, and data acquisition. For example, let’s say you are evaluating a quantitative real-time PCR assay for the detection of a specific pathogen. When processing each batch of samples, you would want to include external controls representing the strains of the targeted pathogen. These control samples should be prepared, extracted, and tested in exactly the same manner as the test samples. The results from the control samples at each stage of the procedure should then be analyzed before the sample results are examined. If the controls do not perform as expected, the results for all samples in that batch should be considered invalid and the assay re-run.
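To illustrate how such a batch-acceptance rule might be applied in practice, the sketch below checks hypothetical qPCR external controls before any sample results are reported. The Ct cutoff, field names, and control roles are illustrative assumptions rather than part of any specific assay; real acceptance criteria must come from your own validation data.

```python
from dataclasses import dataclass

@dataclass
class Reaction:
    well: str
    role: str          # "positive_control", "negative_control", or "sample"
    ct: float | None   # None indicates no amplification detected

# Hypothetical acceptance cutoff for this illustration only; real criteria
# must be established during validation of the specific assay and platform.
POSITIVE_CT_MAX = 35.0

def run_is_valid(reactions: list[Reaction]) -> bool:
    """Return True only if every external control behaved as expected."""
    for rxn in reactions:
        if rxn.role == "positive_control":
            if rxn.ct is None or rxn.ct > POSITIVE_CT_MAX:
                return False   # positive control failed to amplify within range
        elif rxn.role == "negative_control":
            if rxn.ct is not None:
                return False   # negative control amplified: possible contamination
    return True

def report(reactions: list[Reaction]) -> None:
    """Report sample results only when the controls pass; otherwise flag the batch."""
    if not run_is_valid(reactions):
        print("Run invalid: all sample results rejected; repeat the batch.")
        return
    for rxn in reactions:
        if rxn.role == "sample":
            print(f"{rxn.well}: {'detected' if rxn.ct is not None else 'not detected'}")
```

In this arrangement a single failed control invalidates every specimen in the batch, mirroring the rule described above.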

The difficulty in obtaining and employing ideal controls lies in how reliable and suitable they are for a particular assay. A control that works for one type of assay or platform may not necessarily work for another. For this reason, it is essential that the external controls used are optimized for the specific assay or platform being tested. To aid in assay validation, ATCC offers an expansive array of authenticated cultures and nucleic acid preparations for use as external controls in nucleic acid extraction, process verification, amplification, and proficiency testing. Each of these products is prepared as a high-quality, authenticated material backed by meticulous quality control procedures, making them ideal as external controls for process validation.

Overall, choosing the ideal external control is critical in the evaluation, verification, and validation of novel assays or tools. Through the use of appropriate authenticated strains and nucleic acids, run-to-run variation, sample preparation, and assay execution can be properly analyzed.

Thursday, July 17, 2014

Infectious Disease Assay Development: Determining the Limit of Detection

Cara N. Wilder, Ph.D.

Determining the detection limit is an essential part of infectious disease assay development and design. In this second of three articles, we will discuss the importance of determining the detection limit in establishing analytical sensitivity, and will provide information on how to establish this parameter when evaluating your experimental design.

During the development of an assay or diagnostic method used to determine the presence of a specific pathogen, it is important to establish how effectively the assay can detect lower concentrations of the target strain – particularly if the strain has a low infectious dose. This critical part of infectious disease assay development is often termed the limit of detection (LOD), and can be defined as the minimum amount of the target strain or DNA sequence that can be reliably distinguished from its absence at a given level of confidence (e.g., a 95% confidence level).

The methods used to establish LOD can vary depending on assay type and use. For example, the LOD of an instrument-based system is measured with either a pure culture or a nucleic acid sample. In contrast, when determining a clinical or environmental LOD, quantified samples are spiked into an appropriate matrix (e.g., soil, water, blood, feces) and then tested following various recovery and concentration procedures. Compared to determining an instrument LOD, examining a clinical or environmental LOD is often associated with a number of challenges, including the potential for environmental inhibitors, loss of the organism, or the presence of impurities. At each step of the recovery process there is the potential for sample loss, which directly affects the LOD; thus, for these types of assays, improving process efficiency is imperative for ensuring assay sensitivity.

When analyzing the analytical sensitivity of an assay, the significance of your results can depend on the dilution range used as well as on the number of replicates. Prior to your analysis, it is important to first quantify your samples, or to obtain authenticated samples with a pre-established concentration. Once your control samples are quantified, each sample should be serially diluted around an appropriate concentration previously determined through a range-finding study. Depending on the assay, the dilution series may vary in the number of dilutions used (i.e., the number of samples) as well as in the extent of each dilution step (e.g., 2-fold, 5-fold, 10-fold). The more tightly the dilution series brackets your target concentration, the more accurately you will be able to determine your LOD. Once your dilution series is prepared, each dilution should be tested with your assay in replicate (typically 20 to 60 times).
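As a rough illustration of this design, the sketch below builds a 2-fold dilution series around a target concentration, tabulates the fraction of replicates detected at each level, and reports the lowest concentration meeting a 95% detection criterion. The target concentration, replicate count, and detection results are invented for the example and are not tied to any particular assay.

```python
def dilution_series(target_conc: float, factor: float = 2.0, steps_each_side: int = 3) -> list[float]:
    """Build a series bracketing the target concentration (e.g., genome copies/reaction)."""
    exponents = range(steps_each_side, -steps_each_side - 1, -1)
    return [target_conc * factor ** e for e in exponents]

def hit_rates(results: dict[float, list[bool]]) -> dict[float, float]:
    """Fraction of replicates detected at each concentration."""
    return {conc: sum(reps) / len(reps) for conc, reps in results.items()}

def provisional_lod(rates: dict[float, float], threshold: float = 0.95) -> float | None:
    """Lowest concentration detected in at least `threshold` of replicates."""
    passing = [conc for conc, rate in rates.items() if rate >= threshold]
    return min(passing) if passing else None

# Example with made-up data: 20 replicates per dilution around a hypothetical
# target of 100 copies per reaction identified in a range-finding study.
series = dilution_series(target_conc=100.0)
observed = {conc: [True] * hits + [False] * (20 - hits)
            for conc, hits in zip(series, [20, 20, 20, 19, 16, 11, 5])}
print(provisional_lod(hit_rates(observed)))   # -> 100.0 with these invented counts
```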

For example, let’s say you wanted to develop an end-point PCR-based approach for identifying Clostridium difficile in stool samples. When analyzing the LOD of your assay, you would first want to acquire strains representing the major known toxinotypes, and then quantify the concentration of each culture preparation. Following a range-finding study, you would prepare an appropriate dilution series for the samples and spike each dilution into a stool sample. Following suitable recovery and concentration procedures, at least 20 replicates of each dilution should be tested for identification by your PCR-based system and confirmed by colony counting.
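One simple way to turn such replicate results into a point estimate is to fit a detection-probability curve to the per-dilution hit rates and read off the concentration detected 95% of the time. The sketch below does this with a least-squares probit fit, which is only an approximation of a full binomial probit regression; the spiked concentrations and positive counts are hypothetical, and a validated statistical method should be used for any real study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical results: spiked concentration (CFU/g, confirmed by colony count)
# versus the number of PCR-positive replicates out of 20.
conc = np.array([1e5, 1e4, 1e3, 1e2, 1e1])
hits = np.array([20, 20, 19, 12, 3])
rate = hits / 20.0

def probit(log_conc, intercept, slope):
    """Detection probability as a function of log10 concentration."""
    return norm.cdf(intercept + slope * log_conc)

params, _ = curve_fit(probit, np.log10(conc), rate, p0=[0.0, 1.0])
intercept, slope = params

# Concentration at which the fitted curve predicts 95% detection.
lod95 = 10 ** ((norm.ppf(0.95) - intercept) / slope)
print(f"Estimated 95% limit of detection: {lod95:.0f} CFU/g")
```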

When obtaining strains for determining the limit of detection, it is important to go to a reliable source that provides authenticated reference standards that are titered or quantitated. This will ensure that your strains are well characterized and accurately quantified for concentration or genome copy number. At ATCC, we maintain a portfolio encompassing a vast variety of microorganisms and nucleic acids that are quantified by commonly used methods, including PicoGreen®, RiboGreen®, Droplet Digital™ PCR, spectrophotometry, and culture-based approaches. Moreover, ATCC Genuine Cultures® and ATCC® Genuine Nucleics are fully characterized using a polyphasic approach to establish identity as well as confirm characteristic traits, making them ideal for determining the detection limit of your assay.

Overall, determining the detection limit is critical in assay development and validation. Through the use of a diverse array of authenticated strains and nucleic acids that are accurately quantified, assay sensitivity can be established.



PicoGreen® and RiboGreen® are registered trademarks of Invitrogen. Droplet Digital™ is a trademark of Bio-Rad Laboratories, Inc.