Thursday, July 31, 2014

Infectious Disease Assay Development: Choosing the Appropriate External Controls

Cara N. Wilder, Ph.D.

During the development of a molecular-based assay for infectious disease research, or when using a pre-qualified assay or sequencing tool, it is important to select appropriate external controls to evaluate and verify the performance of each process. This testing is imperative in tracking drift and run-to-run variation within a procedure. In this third of three articles, we will discuss the importance of choosing the appropriate external controls, and will provide information on how to select the appropriate cultures and nucleic acids for your tests.

There are a number of different types of external controls that should be employed as part of your good laboratory practices when developing, validating, or evaluating a novel molecular-based assay or tool. These controls are positive or negative references that are treated in parallel with test specimens to verify technical performance and interpret the quality of data. When used properly, external controls can both confirm that a test is performing correctly and help identify problems in the event of a test failure.

External controls can be used to test a number of sources of variability, including sample collection, nucleic acid extraction procedures, sample preparation, and data acquisition. For example, let's say you are evaluating a quantitative real-time PCR assay for the detection of a specific pathogen. When processing each batch of samples, you would want to include external controls that represent the strains of the targeted pathogen. These control samples should be prepared, extracted, and tested in exactly the same manner as the test samples. The results obtained from the control samples at each stage of your procedure should then be analyzed before the sample results are examined. If the controls do not perform as expected, all sample results from that run should be considered invalid and the assay re-run.
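To make this gating logic concrete, here is a minimal sketch in Python. The control names, Ct cutoffs, and sample records are hypothetical placeholders rather than part of any specific assay; acceptance criteria must come from your own validation data.

```python
# Minimal sketch: gate a qPCR run on its external controls before
# accepting any sample results. All thresholds and names below are
# hypothetical placeholders for illustration only.

DETECTION_CT = 35.0  # assumed cutoff; a Ct above this (or no Ct) counts as "not detected"

ACCEPTANCE = {
    "positive_control": lambda ct: ct is not None and ct <= DETECTION_CT,  # must amplify
    "negative_control": lambda ct: ct is None or ct > 40.0,                # must stay clean
}

def run_is_valid(control_cts):
    """Return True only if every external control meets its acceptance rule."""
    return all(rule(control_cts.get(name)) for name, rule in ACCEPTANCE.items())

batch_controls = {"positive_control": 27.4, "negative_control": None}
samples = {"sample_01": 31.2, "sample_02": None}

if run_is_valid(batch_controls):
    for name, ct in samples.items():
        detected = ct is not None and ct <= DETECTION_CT
        print(name, "detected" if detected else "not detected")
else:
    print("Controls failed; sample results from this run are invalid - repeat the assay.")
```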

The difficulty in obtaining and employing ideal controls lies in ensuring that they are reliable and suitable for a particular assay. A control that works for one type of assay or platform may not necessarily work for another. For this reason, it is essential that the external controls used are optimized for the specific assay or platform being tested. To aid in assay validation, ATCC offers an expansive array of authenticated cultures and nucleic acid preparations for use as external controls in nucleic acid extraction, process verification, amplification, and proficiency testing. These products are prepared as high-quality, authenticated materials backed by meticulous quality control procedures, making them ideal as external controls for process validation.

Overall, choosing the ideal external control is critical in the evaluation, verification, and validation of novel assays or tools. Through the use of appropriate authenticated strains and nucleic acids, run-to-run variation, sample preparation, and assay execution can be properly analyzed.

Thursday, July 17, 2014

Infectious Disease Assay Development: Determining the Limit of Detection

Cara N. Wilder, Ph.D.

Determining the detection limit is an essential part of infectious disease assay development and design. In this second of three articles, we will discuss the importance of determining the detection limit in establishing analytical sensitivity, and will provide information on how to establish this parameter when evaluating your experimental design.

During the development of an assay or diagnostic method used to determine the presence of a specific pathogen, it is important to establish how effectively the assay can detect lower concentrations of the target strain – particularly if the strain has a low infectious dose. This critical parameter of infectious disease assay development is termed the limit of detection (LOD), and can be defined as the minimum amount of the target strain or DNA sequence that can be reliably distinguished from its absence at a given level of confidence (e.g., a 95% confidence level).

The methods used to establish LOD can vary depending on assay type and use. For example, the LOD of a particular instrument-based system is measured with either a pure culture or nucleic acid sample. In contrast, when analyzing clinical or environmental LOD, quantified samples are spiked into an appropriate matrix (e.g. soil, water, blood, feces) and are then analyzed following various recovery and concentration procedures. Compared to determining an instrument LOD, examining clinical or environmental LOD is often associated with a number of challenges including the potential for environmental inhibitors, loss of the organism, or the presence of impurities. At each step of the recovery process, there is the potential for sample loss, which directly affects the LOD; thus, for these types of assays, improving process efficiency is imperative for ensuring assay sensitivity.

When analyzing the analytical sensitivity of an assay, the significance of your results can depend on the dilution range used as well as the number of replicates. Prior to your analysis, it is important to first quantify your samples, or to obtain authenticated samples with a pre-established concentration. Following quantification of your control samples, each sample should be serially diluted around an appropriate concentration previously determined through a range-finding study. Depending on the assay, the dilution series may vary in the number of dilutions used (i.e., the number of samples) as well as the extent of each dilution (e.g., 2x, 5x, 10x). The more tightly the dilution series brackets your target concentration, the more accurately you will be able to determine your LOD. Once your dilution series is prepared, each dilution should be tested against your assay in replicate (at least 20-60 times).
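As a simple illustration of laying out such a series, the sketch below generates the concentrations for a serial dilution bracketing a target value. The starting concentration, dilution factor, and number of levels are arbitrary placeholders; in practice they come from your range-finding study.

```python
# Minimal sketch: generate the concentrations of a serial dilution series
# for LOD testing. The numbers below are placeholders for illustration.

def dilution_series(start_conc, factor, n_levels):
    """Return the concentration at each level of an n-level serial dilution."""
    return [start_conc / factor ** i for i in range(n_levels)]

# e.g., a 2-fold series of six levels starting at 1,000 cfu/mL
for level, conc in enumerate(dilution_series(1000.0, 2, 6), start=1):
    print(f"Level {level}: {conc:.1f} cfu/mL")
```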

For example, let's say you wanted to develop an end-point PCR-based approach for identifying Clostridium difficile in stool samples. When analyzing the LOD of your assay, you would first want to acquire strains representing the major known toxinotypes, and then quantify the concentration of each culture preparation. Following a range-finding study, you would then prepare an appropriate dilution series for the samples and spike each dilution into a stool sample. Following suitable recovery and concentration procedures, at least 20 replicates of each dilution should be tested for identification by your PCR-based system and confirmed by colony counting. If the lowest concentrations at which ≥95% of replicates were detected by the PCR-based system were 340 cfu/mL, 250 cfu/mL, 60 cfu/mL, and 430 cfu/mL for the four strains, the overall limit of detection of your assay would be 430 cfu/mL in stool, because the assay must reliably detect every toxinotype at that concentration.
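The same calculation can be sketched in a few lines of code. The strain names, spiked concentrations, and hit counts below are hypothetical values chosen only so that the per-strain LODs match the example above; the point is the logic of taking the lowest level with ≥95% detection for each strain, and then the worst (highest) strain-level LOD as the overall assay LOD.

```python
# Minimal sketch: derive per-strain and overall LODs from replicate detection
# data. All strain labels, concentrations, and hit counts are hypothetical.

REPLICATES = 20

# positive detections out of 20 replicates at each spiked concentration (cfu/mL)
results = {
    "strain_A": {500: 20, 340: 19, 200: 15},
    "strain_B": {500: 20, 250: 20, 150: 16},
    "strain_C": {100: 20, 60: 19, 30: 12},
    "strain_D": {600: 20, 430: 19, 300: 17},
}

def strain_lod(hits_by_conc, n=REPLICATES, cutoff=0.95):
    """Lowest concentration at which the detection rate meets the cutoff."""
    passing = [conc for conc, hits in hits_by_conc.items() if hits / n >= cutoff]
    return min(passing) if passing else None

per_strain = {strain: strain_lod(data) for strain, data in results.items()}
print(per_strain)                                                # {'strain_A': 340, ...}
print("Overall assay LOD:", max(per_strain.values()), "cfu/mL")  # 430 cfu/mL
```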

When obtaining strains for determining the limit of detection, it is important to go to a reliable source that provides authenticated reference standards that are titered or quantitated. This will ensure that your strains are well-characterized, as well as accurately quantified for concentration or genome copy number. At ATCC, we maintain a portfolio that encompasses a vast variety of microorganisms and nucleic acids that are quantified by commonly used methods including PicoGreen®, RiboGreen®, Droplet Digital™ PCR, spectrophotometry, or culture-based approaches. Moreover, ATCC Genuine Cultures® and ATCC® Genuine Nucleics are fully characterized using a polyphasic approach to establish identity as well as confirm characteristic traits, making them ideal for determining the detection limit of your assay.

Overall, determining the detection limit is critical in assay development and validation. Through the use of a diverse array of authenticated strains and nucleic acids that are accurately quantified, assay sensitivity can be established.


Tuesday, June 24, 2014

Infectious Disease Assay Development: Establishing Inclusivity/Exclusivity

Cara N. Wilder, Ph.D.

Optimizing experimental conditions during assay development can be challenging, particularly with respect to establishing analytical sensitivity (including the limit of detection, LOD) and specificity, as well as identifying and employing the appropriate external controls. In this first of three articles, we will discuss the importance of inclusivity/exclusivity in validating assay sensitivity and specificity, and will provide information on how to establish these parameters when evaluating your experimental design.

Assay sensitivity and specificity are often described using the terms inclusivity and exclusivity, but what do these terms actually mean? Depending on whether your assay is culture- or molecular-based, inclusivity can be defined as the percentage of target microbial strains or DNA samples that give the correct positive result. In contrast, exclusivity can be defined as the percentage of non-target microbial strains or DNA samples that give the correct negative result. For example, if you are developing an assay for the detection of Staphylococcus aureus in clinical samples, you would want to ensure that your assay is inclusive for each of the different S. aureus subspecies while being exclusive for other related species or non-related genera such as Staphylococcus epidermidis or Escherichia coli, respectively.

Establishing ideal inclusivity/exclusivity parameters is an essential part of assay validation, particularly when evaluating diagnostic and epidemiological assays whose results can affect public health. In many cases, the rapid and accurate identification of an infectious pathogen is critical for the timely administration of appropriate therapeutic agents as well as the prevention of transmission. Thus, to ensure the precision of your diagnostic assay, choosing a suitable sample size of the appropriate representative strains or nucleic acids is imperative.

Determining which strains to choose for inclusivity/exclusivity testing can be a daunting task. Prior to selecting your test strains, it is important to know basic information about your target organism so that it can be applied in the development of your inclusivity and exclusivity testing panels. For inclusivity testing, the use of microbial or nucleic acid panels that encompass common strain variants as well as those representing all known subspecies of the target organism is recommended. In contrast, exclusivity can be established and evaluated through the use of cross-reactivity panels that include genetically related species that are in the same genus or family, genera that share an environmental or clinical niche with the target organism, and microbial species commonly observed in the test sample.

For instance, let’s say you developed a molecular-based diagnostic assay for the detection of Klebsiella pneumoniae in respiratory infections and you wanted to evaluate its sensitivity and specificity. First, you would want to gather a bit of background on this microbial species. Based on previous studies, this particular bacterium has been commonly found in the normal flora of the mouth, skin, and intestines, and can cause respiratory and urinary tract infections in immunologically compromised individuals. Moreover, K. pneumoniae is a significant member of the Enterobacteriaceae family, is related to at least three other species in the Klebsiella genus, and comprises three known subspecies. With this in mind, you would want to ensure that your inclusivity testing panel included nucleic acids isolated from strains representing the three known K. pneumoniae subspecies as well as strain variants frequently isolated from clinical samples. For your exclusivity panel, you would want to include nucleic acids isolated from strains representing other known Klebsiella species (e.g. K. granulomatis, K. oxytoca, K. terrigena), isolates that share the same clinical and natural niches as K. pneumoniae (e.g. E. coli, Citrobacter spp., Proteus spp., etc.), and other organisms commonly found in clinical respiratory samples from both healthy and immunologically compromised patients (e.g. Pseudomonas aeruginosa, Burkholderia cepacia, Streptococcus pneumoniae, etc.).

In addition to choosing the appropriate strains, having a large sample size is important in determining the significance of your experimental results. Using the example above, let's say that your inclusivity panel included 50 strains encompassing common K. pneumoniae strain variants and representatives of the three known subspecies, and your exclusivity panel included 100 related, non-target microbial strains. If your assay accurately detected 49 of the 50 inclusivity strains, it would have 98% sensitivity for the sample set analyzed. If your assay did not detect 95 of the 100 exclusivity strains, it would have 95% specificity for the sample set analyzed. Taking these data into account, along with other factors such as sample size and the statistical likelihood of false positives or false negatives, you could infer that the test would very likely detect K. pneumoniae in an infected patient, and would very likely return a negative result for a patient who was well or infected with a different microbial species.
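As a minimal sketch of that arithmetic, the snippet below computes the inclusivity (sensitivity) and exclusivity (specificity) rates from panel results, using the hypothetical 49/50 and 95/100 figures from the example above.

```python
# Minimal sketch: compute inclusivity (sensitivity) and exclusivity
# (specificity) as percentages of correct calls in each panel.

def panel_rate(correct_calls, panel_size):
    """Percentage of panel members giving the correct result."""
    return 100.0 * correct_calls / panel_size

sensitivity = panel_rate(correct_calls=49, panel_size=50)    # inclusivity panel
specificity = panel_rate(correct_calls=95, panel_size=100)   # exclusivity panel

print(f"Sensitivity (inclusivity): {sensitivity:.0f}%")      # 98%
print(f"Specificity (exclusivity): {specificity:.0f}%")      # 95%
```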

When obtaining strains for analytical sensitivity and specificity testing, it is important to go to a reliable source that provides authenticated reference standards. This will ensure that your strains are accurately identified down to the species or strain level, as well as functionally characterized for any important traits such as serotype, toxin production, drug-resistance, or clinical relevance, including newly emerging subtypes. Currently, biological reference standards are developed and produced by a number of entities, including government agencies, commercial companies, and non-profit institutions. ATCC, for example, maintains a portfolio that encompasses a vast variety of relevant strains, variants, and nucleic acids. Moreover, ATCC Genuine Cultures® are fully characterized using genotypic, phenotypic, and functional analyses to establish identity as well as confirm characteristic traits, making them ideal for inclusivity/exclusivity validation studies.

Overall, ensuring sensitivity and specificity is critical in assay development and validation. Through the use of a diverse array of authenticated, highly characterized strains that represent your target organism or non-target species, assay sensitivity and specificity can be established.

Friday, April 4, 2014

The Need for Tuberculosis Reference Standards in Vaccine Development

Cara N. Wilder, Ph.D.

Tuberculosis (TB), a highly contagious respiratory disease caused by the bacterium Mycobacterium tuberculosis, annually results in over two million deaths worldwide [1]. In the United States alone, the Centers for Disease Control and Prevention (CDC) reported a total of 9,588 new cases of TB in 2013, of which 86 were multidrug-resistant. This infection is commonly spread by the aerosolization of the bacteria via coughing, sneezing, speaking, or singing. Clinical symptoms of TB include chronic cough with blood-tinged sputum, fever, weight loss, and the formation of tubercles in the lungs [1].

[Image: Mycobacterium tuberculosis]
To prevent the spread of TB in endemic countries, the Bacillus Calmette–Guérin (BCG) vaccine is used. This vaccine, which was first introduced in 1921, is derived from an attenuated live bovine tuberculosis bacillus, Mycobacterium bovis, which is non-virulent in humans. Following its introduction into the World Health Organization Expanded Programme on Immunization in 1974, use of the BCG vaccine has reached global coverage rates of >80% in countries where TB is prevalent [2].

On average, the BCG vaccine has been found to reduce the risk of TB by 50%, with estimates of protection ranging from 0% to 80% [3]. Moreover, it does not prevent primary infection or the reactivation of latent pulmonary infection [2]. These variations in vaccine efficacy have been attributed to a wide range of factors, including genetic or nutritional differences between populations, environmental influences, exposure to other microbial infections, or the methods used to prepare the vaccine [3]. Overall, the impact of the current BCG vaccine on reducing the transmission of TB is limited.

In recent years, the genomic plasticity of BCG vaccine strains has been offered as another possible explanation for variable efficacy [4]. In the early years of vaccine development, prior to the introduction of archival seed lots, vaccine strains were maintained by serial passaging. Following the implementation of proper cold-chain maintenance procedures, several different BCG seed strains were preserved for use in vaccine development. By that point, however, years of subculturing had resulted in significant differences among the genomes of the strains; comparative genomics has uncovered deletions, insertions, and single nucleotide polymorphisms that may have contributed to diminished vaccine efficacy [5].

To help control for the intrinsic differences between BCG strains, the use of a single biological standard in vaccine development should be considered. Generally, a biological standard is defined as a well-characterized, authenticated, purified biological reference material – qualities that are essential in minimizing variation between vaccine preparations. Through the use of a single, minimally passaged M. bovis standard, one of the contributing factors affecting vaccine efficacy can be accounted for, potentially improving the quality of the vaccine preparation and ensuring that all recipients receive the best possible protection against TB.

Currently, biological standards are developed and produced by a number of entities, including government agencies, commercial companies, and non-profit institutions. ATCC, for example, offers a number of M. bovis strains, including those known to demonstrate resistance to isoniazid. Each of these ATCC® Genuine Cultures is fully characterized using genotypic, phenotypic, and functional analyses to establish identity. Moreover, each strain is carefully preserved as low-passage stocks using a seed stock system to minimize subculturing and maintain the original culture characteristics.

Overall, there have been a number of factors attributed to the variations seen in BCG vaccine efficacy. Through the use of a single, consensus, biological standard that demonstrates high levels of protection when used in vaccine development, manufacturers can come one step closer to improving BCG vaccine performance.

 
 
References
  1. CDC. Tuberculosis (TB), <http://www.cdc.gov/tb/> (2013).
  2. WHO. BCG Vaccine, <http://www.who.int/biologicals/areas/vaccines/bcg/en/> (2014).
  3. Fine, P. E. Variation in protection by BCG: implications of and for heterologous immunity. Lancet 346, 1339-1345 (1995).
  4. Brosch, R. et al. Genome plasticity of BCG and impact on vaccine efficacy. Proc Natl Acad Sci U S A 104, 5596-5601, doi:10.1073/pnas.0700869104 (2007).
  5. Behr, M. A. et al. Comparative genomics of BCG vaccines by whole-genome DNA microarray. Science 284, 1520-1523 (1999).

 

Friday, January 24, 2014

Biological Standards in Life Sciences – Enhancing Reproducibility

Cara N. Wilder, Ph.D.

With tremendous breakthroughs being made in the life sciences every day, it is critical that reported and published data are not only reliable and accurate, but reproducible as well. Unfortunately, with the inherent variability of biological materials and reagents, as well as the differences in analytical techniques and data reporting, irreproducibility is still a universal problem throughout both commercial and academic settings. In fact, this issue has resulted in significant, long-lasting effects including extensive losses in time and funding, perpetuation of false data, reputational damage, and impairment of professional relationships and collaborations. Here, we will briefly discuss how variations in biological materials can affect reproducibility and how the use of organisms as standards can help counteract these effects.

Within the life sciences, irreproducibility can be caused by a number of underlying factors, ranging from the biological materials and reagents used for a set of experiments to how the experiment is performed and analyzed. For instance, bacteria can vary quite a bit at the species level, which is why we often see further characterization at the subspecies and strain level. Within any given bacterial species, representative strains will exhibit similar phenotypic and genotypic traits, such as characteristic morphologies, similar metabolic requirements, and conserved 16S ribosomal RNA sequences. However, as similar as these strains may be, selective and environmental pressures can lead to genetic mutations or the acquisition of laterally transferred genetic elements that may result in significant phenotypic changes such as variations in serotype, intracellular signaling, protein expression, pathogenicity, or drug resistance. In turn, this can affect experimental reproducibility between research facilities that are not using the same variant. Overall, the inherent differences of biological materials bring unique challenges to establishing reliable assays.

To help control for the intrinsic differences between the strains used within the life sciences, and thus enhance experimental reproducibility, the use of biological standards is recommended. Biological standards are defined as well-characterized, authenticated, purified biological reference materials – qualities that are essential for their effective use in assay validation and calibration, research and development, diagnostics, etc. For example, when testing consumable or pharmaceutical products for specific microbial pathogens, the use of appropriate biological reference materials can ensure that the assay is sensitive and precise enough to detect the presence of objectionable microbial contaminants. The use of biological standards is equally important in clinical settings for the detection and identification of infectious agents. As you can imagine, the sensitivity and specificity of these assays can have a profound effect on public health.

Currently, biological standards are developed and produced by a number of entities, including government agencies, commercial companies, and non-profit institutions. ATCC, for example, produces both animal cell lines and microorganisms as Certified Reference Materials (CRMs). These biological standards are produced under ISO Guide 34:2009 accreditation, a process that ensures confirmed identity, well-defined characteristics, and an established chain of custody. Moreover, ATCC CRMs are stable with respect to one or more specified properties, which makes them ideal for use in challenge assays, for verifying or comparing test methods, and for benchmarking critical assay performance during assay validation or implementation.

Overall, the inherent variability of biological materials can significantly affect the quality and reproducibility of data. Through the use of standardized biological reference materials, assay consistency and accuracy can be improved.

Thursday, December 5, 2013

Eradicating Helicobacter pylori infection

Cara N. Wilder, Ph.D.

Helicobacter pylori is a Gram-negative, microaerophilic bacterium known to inhabit the stomach lining of at least 50% of the human population. This pathogen is transmitted to humans through the consumption of contaminated food and water, as well as through direct contact with infected individuals. Once ingested, H. pylori will colonize the surface of stomach epithelial cells, resulting in either asymptomatic carriage or complications including chronic gastritis, peptic ulcers, or stomach cancer.

H. pylori infections are most commonly treated with a triple regimen that combines two antibiotics (clarithromycin and amoxicillin) with a proton pump inhibitor (PPI). Though this treatment is effective in most patients, recent reports have indicated that successful bacterial eradication is decreasing due to the emergence of clarithromycin-resistant strains. To aid in the treatment of these antibiotic-resistant infections, many clinicians have turned toward second-line treatments such as a bismuth-containing quadruple therapy (EBMT) or a moxifloxacin-containing triple therapy (MEA). However, little is known regarding the efficacy of these second-line therapies.

In a recent study, Kim et al. sought to evaluate the rate of H. pylori reinfection following EBMT or MEA treatment. In this analysis, 648 patients who failed bacterial eradication with the standard triple therapy were treated with either the EBMT or the MEA second-line therapy. Four weeks after treatment, patients were examined for H. pylori colonization by either the 13C urea breath test or invasive analysis. The annual reinfection rates following EBMT and MEA treatment were found to be 4.45% and 6.46%, respectively. Overall, the long-term reinfection rate of H. pylori remained low following both second-line treatments, with no significant evidence that reinfection was related to the eradication regimen used.




Wednesday, November 27, 2013

Prevention of Toxoplasmosis

Cara N. Wilder, Ph.D.

[Image: Histopathology of active toxoplasmosis of myocardium. Photo courtesy of EP Ewing Jr. and CDC]

Toxoplasma gondii is a ubiquitous, obligate intracellular parasitic protozoan known to cause toxoplasmosis in a number of warm-blooded animals, including humans. This protist is transmitted to humans through the consumption of undercooked meat from infected animals, the ingestion of food or water contaminated with oocysts from infected cat feces, or by transplacental transmission. In healthy individuals, toxoplasmosis is relatively asymptomatic and self-limiting. However, this illness can silently affect pregnant women and result in severe consequences for the fetus, including neurological impairment, chorioretinitis, or death. T. gondii can also affect immunocompromised individuals, resulting in cerebral or extra-cerebral toxoplasmosis.

Currently, the CDC considers toxoplasmosis to be one of the leading causes of death attributed to foodborne illness. Moreover, T. gondii infection in domestic animals represents a significant economic and public health threat due to the potential for foodborne outbreaks. Unfortunately, treatment of toxoplasmosis is difficult due to both the severe side effects of available drugs and the potential for re-infection. Thus, the development of effective preventative treatments is of great importance.

In a recent study, Wang et al. analyzed the protective efficacy of recombinant T. gondii protein disulfide isomerase (PDI) as a potential target for the development of a novel vaccine. This antigen was chosen as a candidate vaccine target because it is soluble, demonstrates conserved homology among the three distinct clonal lineages of T. gondii strains, and is highly expressed on the outer surface of T. gondii tachyzoites. In this study, BALB/c mice were intranasally immunized with varying concentrations of recombinant T. gondii PDI (rTgPDI), and the resulting immunological response was evaluated by lymphoproliferative assays and by cytokine and antibody measurements. In addition, immunized mice were challenged with tachyzoites from T. gondii strain RH. Following this challenge, the survival time of the mice was assessed and the numbers of brain and liver tachyzoites were enumerated.

From these analyses, the group found that immunization with 30 µg of rTgPDI elicited higher levels of anti-PDI antibody production, a stronger lymphoproliferative response, and higher levels of cytokine production than the other doses tested. Further, mice immunized with rTgPDI demonstrated longer survival times and reduced tachyzoite burdens compared with control mice. Overall, the results of the study demonstrated that immunization with rTgPDI elicited a protective immune response against T. gondii tachyzoites, suggesting that this recombinant protein may be a promising candidate for the development of a vaccine to prevent toxoplasmosis.