Tuesday, September 23, 2014

Current techniques used in the quality control of culture media required for pharmaceutical microbiology

Cara N. Wilder, Ph.D.

Culture media is a basic tool of microbiology, supporting most microbiological assays, including the propagation of microorganisms, the isolation of pure cultures, cell enumeration, preservation, and selection. For pharmaceutical quality control laboratories, media is frequently used for environmental monitoring, sterility testing, and microbial enumeration tests. As such, the quality of microbial-based tests depends greatly on the quality of the culture media; substandard media undermines every test in which it is used. Therefore, safeguarding the quality of culture media through routine quality control testing can help ensure reliable, consistent microbiological test results.

Culture media are traditionally defined as substances that support the growth and survival of microorganisms. This is achieved through the preparation of liquid broth or solid agar media comprising the reagents required to support microbial growth, including basic nutrients, energy sources, growth factors, minerals, buffer salts, and metals1. To help minimize lot-to-lot variability, culture media manufacturers often attempt to standardize the preparation of media; however, there are frequently unavoidable differences in raw materials from natural sources, in technician skill level, or in the storage and shipping conditions of the media. To help counter this, culture media should be physically inspected for color, clarity, damage, pH, and gel strength following all preparation steps. Moreover, media that has not yet been quality control tested should be quarantined to prevent premature use.

In addition to the physical inspection of culture media, it is important to analyze media for microbiological characteristics. As these media are frequently used for the analysis of sterility throughout the manufacturing process, or for the detection of objectionable microorganisms, it is imperative that the media is not only sterile, but also able to promote the growth of any possible contaminants. For the analysis of media sterility, uninoculated media is commonly incubated; here, the growth temperature and incubation period depend on the type of media being analyzed. Media exhibiting no growth following the recommended incubation period is considered sterile. Growth promotion testing, in contrast, is one of the most important quality control tests performed on media and is used to determine whether the media in question is able to promote and sustain growth. The primary objective of the test is to verify that a new batch of media is functional and of the same standard as the most recently tested batch, as well as to ensure the consistent use of standardized media between labs.

Currently, there are several approaches that can be taken for testing media for growth promotion. For agar-based media, a simple method of analysis is to serially dilute microbial strains and plate them on the test media using the spread plate technique. These agar plates are then compared to the growth characteristics of a control plate prepared from a batch of media that has been previously assessed for growth promotion and approved for use. Two other, more robust, approaches are the Miles-Misra and ecometric techniques2. The Miles-Misra method is a quantitative technique used to determine the surface viable count. Here, droplets of titered microbial suspensions are deposited onto plates and allowed to spread naturally; following incubation, the test plate is compared to a control plate. Colonies are counted in the sector where the highest number of discrete colonies can be enumerated. The results of the assay are examined using a productivity ratio, which is the mean colony count of the test plates divided by the mean count of the comparative control plates. An acceptable productivity ratio should fall within 0.5-2, which is equivalent to 50%-200% recovery. In contrast, the ecometric method is a semi-quantitative approach that involves streaking a loopful of a microbial suspension onto four quadrants of an agar plate so that the inoculum is diluted with each streak. In this method, growth should occur in each of the quadrants.
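As a quick illustration, the productivity-ratio acceptance check described above reduces to a simple calculation. The Python sketch below uses hypothetical colony counts purely for illustration; the 0.5-2 acceptance range comes from the text:

```python
def productivity_ratio(test_counts, control_counts):
    """Mean colony count on test plates divided by mean count on control plates."""
    mean_test = sum(test_counts) / len(test_counts)
    mean_control = sum(control_counts) / len(control_counts)
    return mean_test / mean_control

def passes_growth_promotion(ratio, low=0.5, high=2.0):
    """An acceptable productivity ratio falls within 0.5-2 (50%-200% recovery)."""
    return low <= ratio <= high

# Hypothetical colony counts from replicate Miles-Misra plates
test = [48, 52, 45]      # colonies on the new batch of media
control = [50, 55, 49]   # colonies on the approved control batch

ratio = productivity_ratio(test, control)
print(f"Productivity ratio: {ratio:.2f}, pass: {passes_growth_promotion(ratio)}")
```

A ratio near 1.0, as in this hypothetical run, indicates the new batch recovers colonies comparably to the approved control batch.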

For liquid broth culture media, growth promotion testing typically involves inoculating the media with an estimated number of microorganisms (< 100 colony forming units) and observing it for turbidity within a required time. For bacterial and fungal cultures, this is typically 3 and 5 days, respectively. These inoculated test cultures are compared to control batch media, and the inoculum level is verified via plate count. Both the test media and the control media must show turbidity for the analysis to be considered successful.

For each media type, an appropriate panel of microorganisms is required in order to demonstrate the suitability of the media for the required test. Depending on the intended use of the media, a suitable microbial panel may differ in the microbial strains selected as well as in how many strains comprise the panel. Generally, the pharmacopeia recommends a preselected list of specific microorganisms for each chapter or general test that must be traceable to a reputable culture collection, such as ATCC. For example, a standard set of cultures may include Staphylococcus aureus subsp. aureus (ATCC® 6538), Bacillus subtilis subsp. spizizenii (ATCC® 6633), Pseudomonas aeruginosa (ATCC® 9027), Clostridium sporogenes (ATCC® 19404), Candida albicans (ATCC® 10231), Aspergillus brasiliensis (ATCC® 16404), Escherichia coli (ATCC® 8739), and Salmonella enterica subsp. enterica serovar Typhimurium (ATCC® 13311). In addition to the recommended set of cultures, isolates frequently found in the manufacturing environment are also commonly used for media testing. For example, media used for the analysis of clean room sterility is often tested for its ability to promote the growth of microorganisms typically found in clean rooms, such as Staphylococcus, Corynebacterium, Micrococcus, and Bacillus species, as well as common skin microflora.

ATCC Genuine Cultures® are maintained using the seed lot system recommended by the United States Pharmacopeia (USP) General Chapter, Microbiological Best Laboratory Practices <1117>. Moreover, each ATCC Genuine Culture® has been thoroughly authenticated and characterized using a polyphasic approach comprising genotypic and phenotypic analyses. To conserve the characteristics of ATCC Genuine Cultures®, it is recommended that each laboratory has a seed lot system in place for preserving and maintaining reference cultures; these cultures must be handled carefully at all times to avoid genetic drift, phenotypic changes, contamination, and strain damage.

Overall, microbial culture media is an important tool in the pharmaceutical quality control process. As the quality and functionality of the media directly affect its use in microbiological assays, it is imperative that the media is thoroughly tested for quality, sterility, and growth promotion prior to use. Growth promotion testing can be performed using fully characterized ATCC Genuine Cultures®; all ATCC Genuine Cultures® are maintained and authenticated in accordance with the USP Microbiological Best Laboratory Practices, helping ensure reliable results in microbial assays.


References

1.      Bridson E, Brecker A. Design and Formulation of Microbiological Culture Media. In: Norris JR, Ribbons DW (Editors). Methods in Microbiology, Volume 3A. London: Academic Press, 1970.

2.      Mossel DAA, et al. Quality control of solid culture media: a comparison of the classic and the so-called ecometric technique. J Appl Bacteriol 49: 439-454, 1980.

Tuesday, September 9, 2014

The importance of authenticated quality control strains in supporting guidance for industry

Cara N. Wilder, Ph.D.

Section 510(k) of the Food, Drug, and Cosmetic Act requires device manufacturers to notify the Food and Drug Administration (FDA) of their intent to market a medical device. This process, known as Premarket Notification, allows the FDA to determine if an equivalent legally marketed device already exists, and ensures that the new device is properly identified, classified, and cleared for use1. For devices that pose a serious risk of illness or injury to the user, such as those that are used internally or to sustain life, a Premarket Approval (PMA) submission is required. This is the most stringent type of approval application required by the FDA, and it requires information on how the medical device was designed and manufactured as well as any preclinical and clinical studies on the device. For a medical device to acquire PMA, it must be backed by sufficient valid scientific evidence assuring that it is safe and effective for its intended use.

As part of the requirements for 510(k) clearance or PMA, medical devices must be examined using appropriate FDA guidances; the recommended guidance documents depend on the classification and the intended use of the device. For example, for the development of a 510(k) in vitro diagnostic (IVD) device intended for the detection of Clostridium difficile, the FDA has issued a draft guidance entitled, “Draft Guidance for Industry and Food and Drug Administration Staff – Establishing the Performance Characteristics of In Vitro Diagnostic Devices for the Detection of Clostridium difficile.”2 This particular guidance recommends various analytical, clinical, and cross-contamination studies for establishing the performance characteristics of IVDs developed for the detection of C. difficile in stool samples via antigen-, antibody-, or nucleic acid-based tests.

One of the key features of this guidance, and those similar to it, is the recommendation to determine analytical sensitivity and cross-reactivity through the use of authenticated, characterized strains. For determining the analytical sensitivity of a C. difficile detection assay, the FDA recommends the use of a variety of strains that represent the various known C. difficile toxinotypes (0; IIIb; IIIc; tcdA-, tcdB-; V; VIII; XII; and XXII). To analyze cross-reactivity, the FDA recommends the use of medically relevant viruses and bacteria of varying species such as Bacillus cereus, Citrobacter freundii, and Clostridium tetani.

To support the need for highly characterized strains, ATCC has fully authenticated and described microbial strains that are recommended in guidances for industry. For the aforementioned C. difficile guidance, ATCC offers a number of C. difficile strains that have been genotypically and phenotypically authenticated as well as functionally characterized for toxinotype, binary toxin, and ribotype. These defined characteristics, along with the provided isolation history, allow for the easy selection of strains recommended for testing the analytical sensitivity of novel medical devices. Moreover, the expansive breadth of the ATCC collection makes it easy to obtain representative strains for cross-reactivity testing.

ATCC similarly supports a number of other guidance documents, such as the “Draft Guidance for Industry and Food and Drug Administration Staff - Establishing the Performance Characteristics of Nucleic Acid-Based In vitro Diagnostic Devices for the Detection and Differentiation of Methicillin-Resistant Staphylococcus aureus (MRSA) and Staphylococcus aureus (SA)”3. In this latter guidance, characterized S. aureus strains with known SCCmec type and PFGE type are needed for establishing analytical sensitivity, and pathogenic and commensal flora found in the nares should be tested to analyze cross-reactivity. To aid in the development of these diagnostic devices, ATCC has fully characterized a majority of the S. aureus strains in the collection for both SCCmec type and PFGE type, and has confirmed the presence of the mecA gene in methicillin-resistant strains.

Overall, when developing a novel medical device, it is important to ensure that the device is properly evaluated and verified based on FDA guidance recommendations prior to submitting it for 510(k) clearance or PMA. Using authenticated, fully characterized strains from an ISO accredited and certified standards development organization, such as ATCC, can help ensure the reliability and reproducibility of analytical sensitivity and cross-reactivity data, thus confirming the efficacy and validity of the device in question.


References
  1. Food and Drug Administration. Premarket Notification (510k). Available online: http://www.fda.gov/medicaldevices/deviceregulationandguidance/howtomarketyourdevice/premarketsubmissions/premarketnotification510k/default.htm  
  2. Food and Drug Administration. Draft Guidance for Industry and Food and Drug Administration Staff - Establishing the Performance Characteristics of In Vitro Diagnostic Devices for the Detection of Clostridium difficile. Available online: http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/ucm234868.htm.
  3. Food and Drug Administration. Draft Guidance for Industry and Food and Drug Administration Staff - Establishing the Performance Characteristics of Nucleic Acid-Based In vitro Diagnostic Devices for the Detection and Differentiation of Methicillin-Resistant Staphylococcus aureus (MRSA) and Staphylococcus aureus (SA). Available online: http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/ucm237235.htm


Thursday, July 31, 2014

Infectious Disease Assay Development: Choosing the Appropriate External Controls

Cara N. Wilder, Ph.D.

During the development of a molecular-based assay for infectious disease research, or when using a pre-qualified assay or sequencing tool, it is important to select appropriate external controls to evaluate and verify the performance of each process. This testing is imperative in tracking drift and run-to-run variation within a procedure. In this third of three articles, we will discuss the importance of choosing the appropriate external controls, and will provide information on how to select the appropriate cultures and nucleic acids for your tests.

There are a number of different types of external controls that should be employed as part of your good laboratory practices when developing, validating, or evaluating a novel molecular-based assay or tool. These controls are positive or negative references that are treated in parallel with test specimens to verify technical performance and interpret the quality of data. When used properly, external controls can both confirm that a test is performing correctly as well as help identify problems in the event of a test failure.

External controls can be used to test a number of sources of variability, including sample collection, nucleic acid extraction procedures, sample preparation, and data acquisition. For example, let’s say you are evaluating a quantitative real-time PCR assay for the detection of a specific pathogen. When processing each batch of samples, you would want to include external controls that represent the strains of the targeted pathogen. These control samples should be prepared, extracted, and tested in exactly the same manner as each test sample. The results derived from the control samples at each stage of your procedure should then be analyzed before the sample results are examined. If the assay does not perform as expected, all results for that batch of samples should be considered invalid, and the assay re-run.

The difficulty in obtaining and employing the ideal controls lies in how reliable and suitable they are for a particular assay. A control that works for one type of assay or platform may not necessarily work for another. For this reason, it is essential that the external controls used are optimized for the specific assay or platform being tested. To aid in assay validation, ATCC offers an expansive array of authenticated cultures and nucleic acid preparations for use as external controls in nucleic acid extraction, process verification, amplification, and proficiency testing. Each of these products is prepared as a high-quality, authenticated material backed by meticulous quality control procedures, making it ideal as an external control for process validation.

Overall, choosing the ideal external control is critical in the evaluation, verification, and validation of novel assays or tools. Through the use of appropriate authenticated strains and nucleic acids, run-to-run variation, sample preparation, and assay execution can be properly analyzed.

Thursday, July 17, 2014

Infectious Disease Assay Development: Determining the Limit of Detection

Cara N. Wilder, Ph.D.

Determining the detection limit is an essential part of infectious disease assay development and design. In this second of three articles, we will discuss the importance of determining the detection limit in establishing analytical sensitivity, and will provide information on how to establish this parameter when evaluating your experimental design.

During the development of an assay or diagnostic method used to determine the presence of a specific pathogen, it is important to establish how effectively the assay can detect lower concentrations of the target strain – particularly if the strain has a low infectious dose. This critical parameter of infectious disease assay development is termed the limit of detection (LOD), and can be defined as the minimum amount of the target strain or DNA sequence that can be reliably distinguished from its absence at a given level of confidence (e.g., a 95% confidence level).

The methods used to establish LOD can vary depending on assay type and use. For example, the LOD of a particular instrument-based system is measured with either a pure culture or nucleic acid sample. In contrast, when analyzing clinical or environmental LOD, quantified samples are spiked into an appropriate matrix (e.g. soil, water, blood, feces) and are then analyzed following various recovery and concentration procedures. Compared to determining an instrument LOD, examining clinical or environmental LOD is often associated with a number of challenges including the potential for environmental inhibitors, loss of the organism, or the presence of impurities. At each step of the recovery process, there is the potential for sample loss, which directly affects the LOD; thus, for these types of assays, improving process efficiency is imperative for ensuring assay sensitivity.

When analyzing the analytical sensitivity of an assay, the significance of your results can depend on the dilution range used as well as the number of replicates. Prior to your analysis, it is important to first quantify your samples, or obtain authenticated samples with a pre-established concentration. Following the quantification of your control samples, each sample should be serially diluted around an appropriate concentration that was previously determined through a range-finding study. Depending on the assay, the dilution series may vary in the number of dilutions used (i.e., the number of samples) as well as the extent of the dilution (i.e., 2x, 5x, 10x, etc.). The more tightly the dilution series brackets your target concentration, the more accurately you will be able to determine your LOD. Once your dilution series is prepared, each dilution should be tested against your assay in replicate (typically 20-60 times).

For example, let’s say you wanted to develop an end-point PCR-based approach for identifying Clostridium difficile in stool samples. When analyzing the LOD of your assay, you would first want to acquire strains representing the major known toxinotypes, and then quantify the concentration of each culture preparation. Following a range-finding study, you would then prepare an appropriate dilution series for the samples and spike each dilution into a stool sample. Following suitable recovery and concentration procedures, at least 20 replicates of each dilution should be tested for identification by your PCR-based system as well as confirmed by colony counting. If the concentrations at which ≥95% of the replicates were detected by the PCR-based system were 340 cfu/mL, 250 cfu/mL, 60 cfu/mL, and 430 cfu/mL for your four strains, the overall limit of detection for your assay would be 430 cfu/mL of organisms in stool.
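The logic of this worked example can be sketched in a few lines of Python. The per-dilution hit counts below are hypothetical; the ≥95% rule and the per-strain LODs (340, 250, 60, and 430 cfu/mL) come from the example itself:

```python
def strain_lod(detection_by_conc, threshold=0.95):
    """Return the lowest spiked concentration (cfu/mL) at which the fraction
    of replicates detected meets the threshold (e.g., >=95%), or None."""
    passing = [conc for conc, (hits, reps) in detection_by_conc.items()
               if hits / reps >= threshold]
    return min(passing) if passing else None

# Hypothetical replicate results for one strain: {concentration: (hits, replicates)}
strain_results = {680: (20, 20), 340: (19, 20), 170: (14, 20)}
print(strain_lod(strain_results))  # 340 -- lowest conc with >=95% detection

# Per-strain LODs from the worked example; the overall assay LOD is set by
# the least sensitive strain, i.e., the maximum of the per-strain values.
per_strain_lods = [340, 250, 60, 430]
print(max(per_strain_lods))  # 430 cfu/mL
```

Note that reporting the maximum per-strain value is a conservative choice: it guarantees that, at the stated LOD, every strain tested meets the ≥95% detection criterion.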

When obtaining strains for determining the limit of detection, it is important to go to a reliable source that provides authenticated reference standards that are titered or quantitated. This will ensure that your strains are well-characterized and accurately quantified for concentration or genome copy number. At ATCC, we maintain a portfolio that spans a vast variety of microorganisms and nucleic acids quantified by commonly used methods, including PicoGreen®, RiboGreen®, Droplet Digital™ PCR, spectrophotometry, and culture-based approaches. Moreover, ATCC Genuine Cultures® and ATCC® Genuine Nucleics are fully characterized using a polyphasic approach to establish identity as well as confirm characteristic traits, making them ideal for determining the detection limit of your assay.

Overall, determining the detection limit is critical in assay development and validation. Through the use of a diverse array of authenticated strains and nucleic acids that are accurately quantified, assay sensitivity can be established.


Tuesday, June 24, 2014

Infectious Disease Assay Development: Establishing Inclusivity/Exclusivity

Cara N. Wilder, Ph.D.

Optimizing experimental conditions during assay development can be challenging, particularly with respect to establishing analytical sensitivity (including the limit of detection (LOD)) and specificity, as well as identifying and employing the appropriate external controls. In this first of three articles, we will discuss the importance of inclusivity/exclusivity in validating assay sensitivity and specificity, and will provide information on how to establish these parameters when evaluating your experimental design.

Assay sensitivity and specificity are often described using the terms inclusivity and exclusivity, but what do these terms actually mean? Depending on whether your assay is culture- or molecular-based, inclusivity can be defined as the percentage of target microbial strains or DNA samples that give the correct positive result. In contrast, exclusivity can be defined as the percentage of non-target microbial strains or DNA samples that give the correct negative result. For example, if you are developing an assay for the detection of Staphylococcus aureus in clinical samples, you would want to ensure that your assay is inclusive for each of the different S. aureus subspecies while being exclusive for other related species or non-related genera such as Staphylococcus epidermidis or Escherichia coli, respectively.

Establishing ideal inclusivity/exclusivity parameters is an essential part of assay validation, particularly when evaluating diagnostic and epidemiological assays whose results can affect public health. In many cases, the rapid and accurate identification of an infectious pathogen is critical for the timely administration of appropriate therapeutic agents as well as the prevention of transmission. Thus, to ensure the precision of your diagnostic assay, choosing a suitable sample size of the appropriate representative strains or nucleic acids is imperative.

Determining which strains to choose for inclusivity/exclusivity testing can be a daunting task. Prior to selecting your test strains, it is important to know basic information about your target organism so that it can be applied in the development of your inclusivity and exclusivity testing panels. For inclusivity testing, the use of microbial or nucleic acid panels that encompass common strain variants as well as those representing all known subspecies of the target organism is recommended. In contrast, exclusivity can be established and evaluated through the use of cross-reactivity panels that include genetically related species that are in the same genus or family, genera that share an environmental or clinical niche with the target organism, and microbial species commonly observed in the test sample.

For instance, let’s say you developed a molecular-based diagnostic assay for the detection of Klebsiella pneumoniae in respiratory infections and you wanted to evaluate its sensitivity and specificity. First, you would want to gather a bit of background on this microbial species. Based on previous studies, this particular bacterium has been commonly found in the normal flora of the mouth, skin, and intestines, and can cause respiratory and urinary tract infections in immunologically compromised individuals. Moreover, K. pneumoniae is a significant member of the Enterobacteriaceae family, is related to at least three other species in the Klebsiella genus, and comprises three known subspecies. With this in mind, you would want to ensure that your inclusivity testing panel included nucleic acids isolated from strains representing the three known K. pneumoniae subspecies as well as strain variants frequently isolated from clinical samples. For your exclusivity panel, you would want to include nucleic acids isolated from strains representing other known Klebsiella species (e.g. K. granulomatis, K. oxytoca, K. terrigena), isolates that share the same clinical and natural niches as K. pneumoniae (e.g. E. coli, Citrobacter spp., Proteus spp., etc.), and other organisms commonly found in clinical respiratory samples from both healthy and immunologically compromised patients (e.g. Pseudomonas aeruginosa, Burkholderia cepacia, Streptococcus pneumoniae, etc.).

In addition to choosing the appropriate strains, having a large sample size is important in determining the significance of your experimental results. Using the example above, let’s say that your inclusivity panel included 50 strains encompassing common K. pneumoniae strain variants and representatives of the three known subspecies, and your exclusivity panel included 100 strains encompassing related, non-target microbial strains. If your assay accurately detected 49 of the 50 inclusivity strains, the test would have 98% sensitivity for the sample set analyzed. If your assay correctly gave negative results for 95 of the 100 exclusivity strains, the test would have 95% specificity for the sample set analyzed. Taking these data into account, along with other factors such as sample size and the statistical likelihood of false positives or false negatives, you could infer that there is a high probability that the test would accurately detect the presence of K. pneumoniae in an infected patient, and a high probability that it would give a negative result if the patient was well or was infected with a different microbial species.
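The sensitivity and specificity figures in the example above reduce to two simple ratios; a minimal Python sketch using the numbers from the text:

```python
def sensitivity(true_positives, total_inclusivity_strains):
    """Fraction of target (inclusivity) strains correctly detected."""
    return true_positives / total_inclusivity_strains

def specificity(true_negatives, total_exclusivity_strains):
    """Fraction of non-target (exclusivity) strains correctly reported negative."""
    return true_negatives / total_exclusivity_strains

# Numbers from the example: 49 of 50 inclusivity strains detected,
# and 95 of 100 exclusivity strains correctly negative.
print(f"Sensitivity: {sensitivity(49, 50):.0%}")   # Sensitivity: 98%
print(f"Specificity: {specificity(95, 100):.0%}")  # Specificity: 95%
```

These values apply only to the sample set analyzed; extrapolating them to clinical performance requires the additional statistical considerations noted above.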

When obtaining strains for analytical sensitivity and specificity testing, it is important to go to a reliable source that provides authenticated reference standards. This will ensure that your strains are accurately identified down to the species or strain level, as well as functionally characterized for any important traits such as serotype, toxin production, drug-resistance, or clinical relevance, including newly emerging subtypes. Currently, biological reference standards are developed and produced by a number of entities, including government agencies, commercial companies, and non-profit institutions. ATCC, for example, maintains a portfolio that encompasses a vast variety of relevant strains, variants, and nucleic acids. Moreover, ATCC Genuine Cultures® are fully characterized using genotypic, phenotypic, and functional analyses to establish identity as well as confirm characteristic traits, making them ideal for inclusivity/exclusivity validation studies.

Overall, ensuring sensitivity and specificity is critical in assay development and validation. Through the use of a diverse array of authenticated, highly characterized strains that represent your target organism or non-target species, assay sensitivity and specificity can be established.

Friday, April 4, 2014

The Need for Tuberculosis Reference Standards in Vaccine Development

Cara N. Wilder, Ph.D.

Tuberculosis (TB), a highly contagious respiratory disease caused by the bacterium Mycobacterium tuberculosis, annually results in over two million deaths worldwide1. In the United States alone, the Centers for Disease Control and Prevention (CDC) reported a total of 9,588 new cases of TB in 2013, of which 86 were multidrug-resistant. This infection is commonly spread by the aerosolization of the bacteria via coughing, sneezing, speaking, or singing. Clinical symptoms of TB include chronic cough with blood-tinged sputum, fever, weight loss, and the formation of tubercles in the lungs1.

To prevent the spread of TB in endemic countries, the Bacillus Calmette–GuĂ©rin (BCG) vaccine is used. This vaccine, which was first introduced in 1921, is derived from an attenuated live bovine tuberculosis bacillus, Mycobacterium bovis, which is non-virulent in humans. Following its introduction into the World Health Organization Expanded Programme on Immunization in 1974, use of the BCG vaccine has reached global coverage rates of >80% in countries where TB is prevalent2.

On average, the BCG vaccine has been found to reduce the risk of TB by 50%, with estimates of protection ranging from 0-80%3. However, it does not prevent primary infection or the reactivation of latent pulmonary infection2. These variations in vaccine efficacy have been attributed to a wide range of factors, including genetic or nutritional differences between populations, environmental influences, exposure to other microbial infections, or the methods used to prepare the vaccine3. Overall, the impact of the current BCG vaccine on reducing the transmission of TB is limited.

In recent years, the genomic plasticity of BCG vaccine strains has been offered as another possible explanation for variable efficacy4. In the early years of vaccine development, prior to the introduction of archival seed lots, vaccine strains were maintained by serial passaging. Following the implementation of proper cold-chain maintenance procedures, several different BCG seed strains were preserved for use in vaccine development. By that point, however, years of subculturing had resulted in significant differences in the genomes of each strain; indeed, comparative genomics has uncovered deletions, insertions, and single nucleotide polymorphisms that may have contributed to diminished vaccine efficacy5.

To help control for the intrinsic differences between BCG strains, the use of a single biological standard in vaccine development should be considered. Generally, a biological standard is defined as a well-characterized, authenticated, purified biological reference material – qualities of which are essential in minimizing variations between vaccine preparations. Through the use of a single, minimally-passaged M. bovis standard, one of the contributing factors affecting vaccine efficacy can be accounted for; thus, potentially improving the quality of the vaccine preparation and ensuring that all recipients are receiving the best possible protection against TB.

Currently, biological standards are developed and produced by a number of entities, including government agencies, commercial companies, and non-profit institutions. ATCC, for example, offers a number of M. bovis strains, including those known to demonstrate resistance to isoniazid. Each of these ATCC® Genuine Cultures is fully characterized using genotypic, phenotypic, and functional analyses to establish identity. Moreover, each strain is carefully preserved as low passaged stocks using a seed stock system to minimize subculturing and maintain the original culture characteristics.

Overall, a number of factors have been attributed to the variations seen in BCG vaccine efficacy. Through the use of a single, consensus biological standard that demonstrates high levels of protection when used in vaccine development, manufacturers can come one step closer to improving BCG vaccine performance.

References
  1. CDC. Tuberculosis (TB), <http://www.cdc.gov/tb/> (2013).
  2. WHO. BCG Vaccine, <http://www.who.int/biologicals/areas/vaccines/bcg/en/> (2014).
  3. Fine, P. E. Variation in protection by BCG: implications of and for heterologous immunity. Lancet 346, 1339-1345 (1995).
  4. Brosch, R. et al. Genome plasticity of BCG and impact on vaccine efficacy. Proc Natl Acad Sci U S A 104, 5596-5601, doi:10.1073/pnas.0700869104 (2007).
  5. Behr, M. A. et al. Comparative genomics of BCG vaccines by whole-genome DNA microarray. Science 284, 1520-1523 (1999).


Friday, January 24, 2014

Biological Standards in Life Sciences – Enhancing Reproducibility

Cara N. Wilder, Ph.D.

With tremendous breakthroughs being made in the life sciences every day, it is critical that reported and published data are not only reliable and accurate, but reproducible as well. Unfortunately, given the inherent variability of biological materials and reagents, as well as differences in analytical techniques and data reporting, irreproducibility remains a universal problem in both commercial and academic settings. In fact, this issue has resulted in significant, long-lasting effects, including extensive losses of time and funding, the perpetuation of false data, reputational damage, and the impairment of professional relationships and collaborations. Here, we will briefly discuss how variations in biological materials can affect reproducibility and how the use of organisms as standards can help counteract these effects.

Within the life sciences, irreproducibility can stem from a number of underlying factors, ranging from the biological materials and reagents used in a set of experiments to how the experiments are performed and analyzed. For instance, bacteria can vary considerably at the species level, which is why we often see further characterization at the subspecies and strain levels. Within any given bacterial species, representative strains will exhibit similar phenotypic and genotypic traits, such as characteristic morphologies, similar metabolic requirements, and conserved 16S ribosomal RNA sequences. However, as similar as these strains may be, selective and environmental pressures can lead to genetic mutations or the acquisition of laterally transferred genetic elements, which may result in significant phenotypic changes such as variations in serotype, intracellular signaling, protein expression, pathogenicity, or drug resistance. In turn, this can affect experimental reproducibility between research facilities that are not using the same variant. Overall, the inherent differences among biological materials bring unique challenges to establishing reliable assays.

To help control for the intrinsic differences between the strains used within the life sciences, and thus enhance experimental reproducibility, the use of biological standards is recommended. Biological standards are defined as well-characterized, authenticated, purified biological reference materials – qualities that are essential for their effective use in assay validation and calibration, research and development, diagnostics, and more. For example, when testing consumable or pharmaceutical products for specific microbial pathogens, the use of appropriate biological reference materials can ensure that the assay is sensitive and precise enough to detect the presence of objectionable microbial contaminants. The use of biological standards is equally important in clinical settings for the detection and identification of infectious agents. As you can imagine, the sensitivity and specificity of these assays can have a profound effect on public health.

Currently, biological standards are developed and produced by a number of entities, including government agencies, commercial companies, and non-profit institutions. ATCC, for example, produces both animal cell lines and microorganisms as Certified Reference Materials (CRMs). These biological standards are produced under an ISO Guide 34:2009 accreditation – a process that offers confirmed identity, well-defined characteristics, and an established chain of custody. Moreover, ATCC CRMs are stable with respect to one or more specified properties, which makes them ideal for use in challenge assays, verifying or comparing test methods, and benchmarking critical assay performance during assay validation or implementation.

Overall, the inherent variability of biological materials can significantly affect the quality and reproducibility of data. Through the use of standardized biological reference materials, assay consistency and accuracy can be improved.