Oct 27, 2016


The quality of the source water used to produce pharmaceutical water plays an important role both for the design of the treatment process and for the validation of the water system. FDA Warning Letters over the past few years have shown that compliance with the specification for pharmaceutical water is not enough: validation of the treatment process is expected, including documented evidence that the process is capable of producing pharmaceutical water according to specification. If the quality of the source water is not known, however, the required purification capacity is not known either. As a consequence, fluctuations in the quality of the source (feed) water may lead to water that does not comply with the specification after purification, and it remains unclear up to which source water quality level specification-compliant pharmaceutical water can still be produced. It is therefore important to know the impurities in the source (feed) water and their concentrations.
The production of pharmaceutical water is always based on drinking water. The specifications for drinking water, however (for Germany, stipulated in the Trinkwasserverordnung; for the U.S., in the National Primary Drinking Water Regulations), are defined very broadly compared with pharmacopoeial specifications.

The quality of the drinking water varies widely as well, as drinking water may come from different sources (ground water or surface water). Even ground water quality varies locally, e.g. depending on the season. This is why water purification plants for the pharmaceutical industry are not off-the-shelf goods but individual solutions that the future user and the plant supplier have to develop together. The plant supplier will always ask about the quality of the drinking water so that the appropriate treatment technologies can be offered.

In particular, the plant supplier will need the information described below. For this purpose, it is useful to provide the plant engineer with drinking water analyses covering a period of at least twelve months.

For the design of a pharmaceutical water plant, the indicator parameters according to the Trinkwasserverordnung (conductivity, iron, manganese, sulphate and pH value) are important, as the ionic load determines the treatment process. For instance, a single-stage or double-stage reverse osmosis may be sufficient to obtain adequate quality at low conductivity levels. Iron and manganese are limited by the drinking water ordinance, but when their limits (according to the Trinkwasserverordnung) are exceeded they will lead to irreversible membrane damage in the reverse osmosis plant.


Furthermore, information on the total hardness is indispensable, as it has a major influence on the design of the softening plant – as does the carbonate hardness or base capacity, which is used to calculate the amount of dissolved carbon dioxide. Dissolved carbon dioxide restricts the use of electrodeionisation (EDI) or may require further treatment, such as membrane degassing.

Depending on the origin of the drinking water, a responsible plant engineer should measure the colloid index (SDI 15) before designing the plant. Especially with surface water, higher values are to be expected. A colloid index of more than 5%/min can already have a negative impact on the operation of a reverse osmosis plant (membrane blocking and/or fouling) and may require additional pretreatment, such as ultrafiltration upstream of the main plant. While the colloid index is never determined by the water supplier, the silicate content is often indicated in the drinking water analysis. A silicate content of more than 25 ppm can become critical for a combination of reverse osmosis and EDI and should be determined if it is not indicated in the analysis.
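The colloid index mentioned above is commonly computed from two timed filtrations of a 500 mL sample through a 0.45 µm membrane at constant pressure, repeated after a hold time of typically 15 minutes (the procedure standardized in ASTM D4189). As a minimal sketch, assuming the standard formula SDI = 100 × (1 − t_initial/t_final) / T:

```python
def sdi(t_initial: float, t_final: float, elapsed_min: float = 15.0) -> float:
    """Silt Density Index: 100 * (1 - t_initial/t_final) / elapsed_min.

    t_initial: seconds needed to filter the first 500 mL sample
    t_final:   seconds needed to filter 500 mL again after `elapsed_min`
               minutes of continued filtration (0.45 um membrane,
               constant pressure, per ASTM D4189)
    """
    if t_initial <= 0 or t_final <= 0:
        raise ValueError("filtration times must be positive")
    return 100.0 * (1.0 - t_initial / t_final) / elapsed_min

# Example: first 500 mL in 30 s; after 15 min the same volume takes 60 s,
# i.e. the membrane has partially blocked -> SDI15 = 100 * 0.5 / 15
```

A result above roughly 5%/min would flag the feed water as problematic for reverse osmosis, as discussed above.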

All microbiological parameters are regulated in the Trinkwasserverordnung. However, one should always remember that the supplier guarantees the quality only up to the point of transfer. With regard to the total bacteria count in particular, regular tests are necessary in order to identify seasonal fluctuations.



////////////Critical Impurities, Pharmaceutical Water

Oct 27, 2016


At the Particles in Parenterals Conference Dr Stephen Langille from the US FDA gave a talk on the FDA’s current thinking with regard to the visual inspection of medicinal products for parenteral use.


In his presentation he showed the number of recalls caused by visible particulate matter over the last 11 years. In his view, most of the recalls were justified when the types of particles found were taken into consideration. He also emphasized that something is possibly wrong in the visual inspection process if particles found on the market are bigger than 1000 µm.

The prevention of particles is very important to him: from his perspective, the best particle is the one that is not in the product. Also important to him are threshold studies, i.e. showing the minimum particle size that can still be detected (depending on product and type of container). These threshold studies are crucial for the set-up of the test sets and the qualification of the inspectors for manual inspection. He also mentioned the semi-automated inspection process. In his view, semi-automated inspection is good for detecting container-closure issues, such as missing stoppers, but he questioned whether an inspection time of about one second is sufficient to detect particles with a size of, for example, 200 µm. In the discussion he was asked about the FDA's opinion on USP chapter <790>. In his opinion, USP <790> can be an effective tool for ensuring that the manufacturing process and the 100% inspection process are adequate to limit visible particle contamination. However, cGMPs must be followed during the manufacturing and visual inspection process; meeting the requirements of USP <790> does not excuse failing to meet cGMPs.

You will find the complete presentation in the members area of the ECA webpage.


///////////FDA presentation, ECA Conference, Particles in Parenterals

Oct 27, 2016


The FDA draft guidance for combination products has a substantial impact on the development of Oral Inhalation and Nasal Drug Products (OINDPs), as it requires manufacturers to comply not only with the CGMPs for drugs (21 CFR Parts 210 and 211) but also with the quality system (QS) regulation for devices (21 CFR Part 820). Find out more about the FDA Draft Guidance for Combination Products.

Based on the CGMP requirements for single-entity and co-packaged combination products (21 CFR Part 4), manufacturers of Oral Inhalation and Nasal Drug Products (OINDPs) have to comply with the CGMPs for the drug constituent part(s) (21 CFR Parts 210 and 211) and the quality system (QS) regulation for the device constituent part(s) (21 CFR Part 820).

This can be achieved either by a drug CGMP-based streamlined approach (21 CFR 4.4(a)) or a QS regulation-based streamlined approach (21 CFR 4.4(b)). Following the first approach, combination product manufacturers comply with the full drug CGMPs plus the following provisions of the device QS regulation:

– 21 CFR 820.20 – Management responsibility
– 21 CFR 820.30 – Design controls
– 21 CFR 820.50 – Purchasing controls
– 21 CFR 820.100 – Corrective and preventive actions
– 21 CFR 820.170 – Installation
– 21 CFR 820.200 – Servicing

OINDP manufacturers have to state clearly in their submission and at the initiation of a pre-approval inspection (PAI) whether they are operating under the drug CGMP-based or the QS regulation-based approach.

Here you can see the complete FDA Draft Guidance on Combination Products including the requirements for Oral Inhalation and Nasal Drug Products.
////// FDA Combination Products Guidance, Nasal and Oral Inhalation, Drug Products

Oct 27, 2016



Counterfeit medicine is an increasing problem for public health and the economy. It is no longer a problem only of certain regions such as Asia and Africa; it has now also become an issue in the EU and the US. The European Union Intellectual Property Office (EUIPO) published a press release on 29 September 2016 in which it states that fake medicines cost the EU pharmaceutical sector 10.2 billion Euro every year. Read more about the latest figures on counterfeit medicines.



In the past, counterfeit medicines did not enter the legal supply chain in the EU and US, but a number of cases of counterfeit medicines in western countries have been detected recently. In order to cope with this growing problem, the EU has introduced a regulation requiring that, as of 9 February 2019, certain medicinal products may only enter the EU market if a 2D barcode is used as a safety feature. This code must be applied on the packaging in readable form.

The European Union Intellectual Property Office (EUIPO) published a press release on 29 September 2016 in which it states that fake medicines cost the EU pharmaceutical sector 10.2 billion Euro every year. The counterfeit products cause a loss of 4.4% of legitimate pharmaceutical sales, which according to the report means "37,700 jobs directly lost across the pharmaceutical sector in the EU". For Germany alone, an annual loss of 1 billion Euro has been calculated, corresponding to 6,951 direct job losses. For other countries the annual figures are: Italy 1.59 billion, France 1 billion, Spain 1.17 billion and the UK 605 million Euro.

Source: Press Release EUIPO, September 29, 2016


//////////Counterfeit of medicines, 37,700 job losses, EU Pharma Industry

Oct 23, 2016




Compound 3aa was obtained as a pale yellow oil (163 mg, 92% yield). MS (ESI): mass calcd. for C12H16O3, 208.1099; m/z found, 209.1102 [M+H]+.

1H NMR (CHLOROFORM-d, 400 MHz): δ = 7.45 (d, J = 7.7 Hz, 2H), 7.33 (t, J = 7.5 Hz, 2H), 7.21-7.27 (m, 1H), 4.37 (s, 1H), 4.00-4.18 (m, 2H), 2.97 (d, J = 15.9 Hz, 1H), 2.79 (d, J = 15.9 Hz, 1H), 1.55 (s, 3H), 1.08-1.18 ppm (m, 3H).

13C NMR (CHLOROFORM-d, 101 MHz): δ = 173.1, 147.3, 128.6, 127.3, 124.9, 73.2, 61.4, 46.9, 31.1, 14.4 ppm.




The application of Reformatsky and Blaise reactions for the preparation of a diverse set of valuable intermediates and heterocycles in a one-pot protocol is described. To achieve this goal, a novel green activation protocol for zinc under flow conditions has been developed to insert this metal efficiently into α-bromoacetates. The organozinc compounds were added to a diverse set of ketones and nitriles to obtain a wide range of functional groups and heterocyclic systems in a one-pot procedure.

Reformatsky and Blaise Reactions in Flow as a Tool for Drug Discovery. One Pot Diversity Oriented Synthesis of Valuable Intermediates and Heterocycles.

Green Chem., 2016, Accepted Manuscript

DOI: 10.1039/C6GC02619B

////////////Reformatsky, Blaise Reactions, Flow Chemistry, Drug Discovery, One Pot, Diversity Oriented Synthesis, Valuable Intermediates, Heterocycles

Oct 21, 2016


QbD in Pharma Development World Congress 2017 Registration

3 for 2 Offer

SELECTBIO are offering 3 for the price of 2 on all delegate passes. To take advantage of this offer contact us by email, phone or click the Contact Us button below. Looking for more than 3 passes? Contact us for more information on our special rates for large groups.

Radisson Hyderabad HITEC City





The refined Radisson Hyderabad Hitec City features the prompt services, such as a concierge, and comfortable, air-conditioned rooms you need for a satisfying visit. You can stay fit with laps in the swimming pool and fitness centre, or relax in your suite with 24-hour room service and free Wi-Fi access. The amenities include satellite TV, a work desk, Wi-Fi access, a tea and coffee maker, bottles of mineral water, large bathrooms with separate rain showers, a large wardrobe and a mini bar. Wrap up a day of meetings with authentic Indian cuisine at Cascade or exotic Asian specialties at The Oriental Blossom. Treat your colleagues to drinks at the Zyng lounge bar, or if the weather is nice, gather outside at the Poolside Grill for a barbecued meal.
A business centre is also available to help you complete work while staying at this Hitec City hotel in the heart of Gachibowli, a new-age IT suburb of Hyderabad, near the Hyderabad International Convention Centre.
SELECTBIO has negotiated special rates (see below) that include buffet breakfast and Wi-Fi: Standard Room Single/Double – INR 4500/5000 + tax.
To make your reservation at these discounted rates, please contact Sakshi Modgil. We recommend early booking to avoid disappointment.

Sakshi Modgil

Visa Requirements
International visitors travelling from outside India will require a Business visa.
International visitors will require an invitation letter to obtain their Business visa. We will only provide invitation letters to customers that are fully registered for the event. In the event of an unsuccessful visa application we will refund the full delegate fee paid.

Visa Invitation Requirement
Please ensure that the visa invitation form is duly completed, as this will expedite the preparation of an invitation letter. Also state clearly to whom the invitation letter should be addressed, as per the requirements of the country of origin.
For more information on Indian visa applications for international visitors, please contact your local Indian embassy.
Please plan sufficiently in advance, because processing of an Indian visa application may take 4-6 weeks.

Copyright © 2016 SELECTBIO, All rights reserved.


SELECTBIO Ltd, Woodview, Bull Lane, Sudbury, CO10 0FD, United Kingdom.

//////////QbD,  Pharma Development,  World Congress,  2017, SelectBio, Radisson Hyderabad HITEC City, India

Oct 20, 2016



The design, development, and scale up of a continuous iridium-catalyzed homogeneous high pressure reductive amination reaction to produce 6, the penultimate intermediate in Lilly’s CETP inhibitor evacetrapib, is described. The scope of this report involves initial batch chemistry screening at milligram scale through the development process leading to full-scale production in manufacturing under GMP conditions. Key aspects in this process include a description of drivers for developing a continuous process over existing well-defined batch approaches, manufacturing setup, and approaches toward key quality and regulatory questions such as batch definition, the use of process analytics, start up and shutdown waste, “in control” versus “at steady state”, lot genealogy and deviation boundaries, fluctuations, and diverting. The fully developed continuous reaction operated for 24 days during a primary stability campaign and produced over 2 MT of the penultimate intermediate in 95% yield after batch workup, crystallization, and isolation.


Development and Manufacturing GMP Scale-Up of a Continuous Ir-Catalyzed Homogeneous Reductive Amination Reaction

Lilly Research Laboratories, Eli Lilly and Company, Indianapolis, Indiana 46285, United States
Eli Lilly SA, Dunderrow, Kinsale, Cork, Ireland
D&M Continuous Solutions, LLC, Greenwood, Indiana 46113, United States
Org. Process Res. Dev., Article ASAP
DOI: 10.1021/acs.oprd.6b00148
Publication Date (Web): October 19, 2016
Copyright © 2016 American Chemical Society
*Corresponding authors: Scott A. May, Martin D. Johnson, Declan D. Hurley

ACS Editors’ Choice – This is an open access article published under an ACS AuthorChoice License, which permits copying and redistribution of the article or any adaptations for non-commercial purposes.


Oct 18, 2016

We investigated how many cases of the same chemical sold as different products (at possibly different prices) occurred in a prototypical large aggregated database and simultaneously tested the tautomerism definitions in the chemoinformatics toolkit CACTVS. We applied the standard CACTVS tautomeric transforms plus a set of recently developed ring–chain transforms to the Aldrich Market Select (AMS) database of 6 million screening samples and building blocks. In 30 000 cases, two or more AMS products were found to be just different tautomeric forms of the same compound. We purchased and analyzed 166 such tautomer pairs and triplets by 1H and 13C NMR to determine whether the CACTVS transforms accurately predicted what is the same "stuff in the bottle". Essentially all prototropic transforms with examples in the AMS were confirmed. Some of the ring–chain transforms were found to be too "aggressive", i.e. to equate structures that are in fact different compounds.

Experimental and Chemoinformatics Study of Tautomerism in a Database of Commercially Available Screening Samples

Chemical Biology Laboratory, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Frederick, Maryland 21702, United States
§ Basic Science Program, Chemical Biology Laboratory, Leidos Biomedical Inc., Frederick National Laboratory for Cancer Research, Frederick, Maryland 21702, United States
J. Chem. Inf. Model., Article ASAP
Publication Date (Web): September 26, 2016
Copyright © 2016 American Chemical Society
Laura Guasch

Chemoinformatics Data Scientist

National Institutes of Health
Bethesda, MD, United States

A highly motivated computational chemist and cheminformatician who employs, analyzes, and develops computer-based methods to aid in drug discovery. I enjoy working with experimentalists from the fields of biology, pharmacy, medicine and chemistry on answering relevant questions in drug discovery.

Specialties: CADD; Virtual screening; Pharmacophores; Docking; Homology modeling; Quantum chemistry; SAR/QSAR; ADME/Tox modeling; Drug metabolism; Chemoinformatics; Data mining; Molecular informatics.

Research experience

  • Feb 2012–present PostDoc Position
    National Institutes of Health · National Cancer Institute (NCI) · Chemical Biology Laboratory
    United States · Frederick
    Development of novel approaches for tautomerism analysis. Structure-based and ligand-based identification and design of anti-cancer and anti-viral agents.
  • Jan 2009–Jun 2009 Research Intern
    University of Innsbruck · Institute of General, Inorganic and Theoretical Chemistry · Theoretical Chemistry
    Austria · Innsbruck
    Discovery of Natural Product PPAR-gamma Partial Agonists by a Pharmacophore-Based Virtual Screening Workflow.
  • Sep 2007–Dec 2011 PhD Student
    Universitat Rovira i Virgili · Department of Biochemistry and Biotechnology · Nutrigenomics Research Group
    Spain · Tarragona
    Identification of natural products as antidiabetic agents using computer-aided drug design methods.
  • Jan 2007–Jun 2007 Research Fellow
    Universitat Rovira i Virgili · Department of Physical and Inorganic Chemistry · Quantum Chemistry Group
    Spain · Tarragona
    Prediction of enantiomeric excesses in asymmetric catalysis using a new QSSR approach based on three-dimensional DFT molecular descriptors
  • Jul 2006–Aug 2006 Summer Internship
    Barcelona Science Park · Quantum Simulation of Biological Processes
    Spain · Barcelona
    Structure and electronic configuration of compound I intermediates of Penicillium vitale catalases using molecular dynamics simulation techniques


Education

  • Sep 2007–Jun 2008 Universitat Rovira i Virgili
    Nutrition and Metabolism · Master of Science
    Spain · Tarragona
  • Sep 2004–Jun 2007 Universitat Rovira i Virgili
    Biochemistry · BSc
    Spain · Tarragona
  • Sep 2002–Jun 2007 Universitat Rovira i Virgili
    Chemistry · Bachelor of Science
    Spain · Tarragona

Awards & achievements

  • Dec 2011 Award: European Doctorate Mention
  • Dec 2011 Award: PhD Extraordinary Award

Marc C. Nicklaus, Ph.D.
Senior Scientist
Head, Computer-Aided Drug Design (CADD) Group
Dr. Nicklaus received his Ph.D. in applied physics from the Eberhard Karls University of Tübingen, Germany, and then served as a postdoctoral fellow in the Molecular Modeling Section of the then-named Laboratory of Medicinal Chemistry, NCI. He became a staff fellow in 1998 and a Senior Scientist in 2002. In 2000 he founded, and has since been heading, the Computer-Aided Drug Design (CADD) Group.

Dr. Nicklaus pioneered work on making large small-molecule databases and related chemoinformatics tools available to the scientific public on the CADD Group’s web server. He also pioneered the analysis of conformational energies of small molecule ligands bound to proteins. As Head of the CADD Group, he oversees the group’s research program in chemoinformatics, fundamentals of protein-ligand interactions, and in silico screening for targets of high interest to NCI. He makes the latter resources available in collaborative projects to improve NCI’s efforts in hit identification and drug design.

Link to additional information about Dr. Nicklaus’ research.

Areas of Expertise

1) chemoinformatics, 2) small-molecule databases, 3) protein-ligand interactions, 4) (quantitative) structure-activity relationships, 5) computer-aided drug design, 6) computational chemistry


Marc C. Nicklaus, Ph.D.
Center for Cancer Research
National Cancer Institute
Building 376, Room 207
Frederick, MD 21702-1201
Ph: 301-846-5903

Computer-Aided Drug Design. The Computer-Aided Drug Design (CADD) Group is a research unit within the Chemical Biology Laboratory (CBL) that employs, analyzes, and develops computer-based methods to aid in the drug discovery, design, and development projects of the CBL and other researchers at the NIH. We split our efforts about evenly between support-type projects and research projects initiated and conducted by CADD staff members. We implement many projects, and make the resources developed by the CADD Group available, in a Web-based manner. This offers three advantages: (1) it frees all users, including the group members themselves, from platform restraints and the concomitant expenses for specific software/hardware; (2) it makes resources and results immediately available for sharing among all collaborators regardless of their location; and (3) it helps, without additional effort, to further the mission of the NCI as a publicly funded institution by providing data and services directly to the (scientific) public.

Chemical Identifier Resolver (CIR). CIR works as a resolver for many different chemical structure identifiers (e.g. chemical names, InChI, SMILES etc.) and allows one to convert the given structure identifier into a full structure representation or another structure identifier including references to particular databases in which the corresponding structure or structure identifier occurs. CIR offers a simple to use, programmatic application programming interface (API) based on URLs requested by HTTP. This allows easy linking of CIR and its content to other scientific web services and program packages. CIR currently provides access to 120 million structure records.
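Because CIR is addressed via plain HTTP URLs, requests can be composed programmatically in a few lines. A minimal sketch is below; it assumes the service's documented URL pattern `https://cactus.nci.nih.gov/chemical/structure/<identifier>/<representation>`, and actually resolving an identifier of course requires network access:

```python
from urllib.parse import quote
from urllib.request import urlopen

CIR_BASE = "https://cactus.nci.nih.gov/chemical/structure"

def cir_url(identifier: str, representation: str = "smiles") -> str:
    """Build a CIR request URL for a chemical identifier.

    The identifier may be a chemical name, InChI, SMILES, etc.; the
    representation names the desired output (e.g. "smiles", "inchi",
    "names"). Special characters are percent-encoded for the URL path.
    """
    return f"{CIR_BASE}/{quote(identifier)}/{representation}"

def resolve(identifier: str, representation: str = "smiles") -> str:
    """Resolve an identifier via CIR (requires network access)."""
    with urlopen(cir_url(identifier, representation)) as resp:
        return resp.read().decode().strip()

# e.g. resolve("aspirin") would return the SMILES string for aspirin
```

This URL-based design is what makes it easy to link CIR into other scientific web services and program packages, as noted above.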

Enhanced NCI Database Browser. The Enhanced NCI Database Browser can be used to search the 250,000-compound Open NCI Database. This dataset is the publicly available part of the half-million structure collection assembled by the NCI’s Developmental Therapeutics Program during the program’s 50+ years of screening compounds against cancer and, more recently, AIDS. Visit the CADD Group’s home page or the Enhanced NCI Database Browser service for more information.

Fundamentals of Protein-Ligand Interactions. The non-covalent binding of a drug to the binding site of an enzyme (or other biomacromolecule) is the fundamental process of most drug actions. In spite of a vast body of experimental data available on protein-ligand complexes, mostly obtained by X-ray crystallography, there are still open questions of how this binding process occurs at the atomic and quantitative energetic level. One of the issues is the range of conformational energies one can expect to find for the small-molecule ligand bound to proteins, which we found to be higher than generally assumed. This has led us to broader questions regarding x-ray crystallographic methodologies, such as whether quantum-mechanical refinement (or re-refinement) of protein ligand structures may improve structural quality in various ways.

HIV Integrase. A long-standing interest of our group has been HIV integrase (IN) as a drug development target. This enzyme catalyzes the integration of the viral DNA into the human DNA, which is an essential step in the viral replication cycle. Only a handful of approved drugs so far are based on IN inhibition. We have been utilizing all available experimental results, be they structural, mechanistic, or biochemical, to model and better understand inhibition of IN by small molecules. A recent expansion of these efforts is our work aimed at developing HIV microbicides for the prevention of infection with HIV by topical application such as vaginal gels.

Among our main collaborators are Stephen Hughes and Yves Pommier, NCI; Wolf-Dietrich Ihlenfeldt, Xemistry, Germany; Vladimir Poroikov, Russian Academy of Medical Sciences, Moscow; and Raul Cachau, Leidos, FNLCR.

Scientific Focus Areas:

Biomedical Engineering and Biophysics, Chemical Biology, Computational Biology, Structural Biology
/////////////Experimental, Chemoinformatics, Tautomerism,  Database,  Commercially Available,  Screening Samples
Oct 16, 2016

Process Validation of Existing Processes and Systems Using Statistically Significant Retrospective Analysis – Part One of Three


The FDA defined Process Validation in 1987 as follows: "Process Validation is establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specification and quality attributes."1 The purpose of this article is to discuss how to validate a process by introducing some basic statistical concepts to use when analyzing historical data from Batch Records and Quality Control release documents to establish specifications and quality attributes for an existing process.

In an ideal world, the qualification of processing equipment, utilities, facilities, and controls would commence at the start-up of a new plant or the implementation of a new system. This would be followed by validation of the process based on developmental data, which would be used to establish the product ranges for in-process and final release testing. However, the ideal case may not exist, and thus there are instances where commissioning of facilities or new systems occurs concurrently with qualification and process validation, or where the facility and equipment are "existing" and no such documentation is available. Some facilities, equipment, or processes pre-date the above definition by many years and therefore have never been validated on qualified equipment, utilities, facilities, and controls. Additionally, in some cases, no developmental data exist to establish the product ranges for in-process and release testing.

Basics of Process Validation

Before examining existing processes, it is important to first understand the basic concepts of Process Validation. Figure 1 is a flow chart defining Process Validation from the developmental stage to the plant floor. As a simple example, a process begins with the raw materials being released; the raw materials are then mixed, the pH is adjusted, purification occurs by gel chromatography, excipients are added for final formulation, and the product is filled and terminally sterilized. Each of these steps has a defined function and therefore a design goal. For example, purification would not begin until the desired pH is reached in the previous step. The desired pH is therefore an in-process attribute of the pH adjustment stage, and the amount of buffer used to adjust the pH is a processing parameter. Each step has attributes that one would want to monitor to determine that the product is being produced acceptably at that step, such that the next process step can start. The ranges for these attributes are generally determined from process development data, so that if the process attributes are met there is a high degree of confidence that the final container is filled with a product of acceptable attributes as determined by the developmental data.


During Process Validation, approved Standard Operating Procedures (SOPs) need to be in place, and Plant Operators must have been trained in them. Analytical testing should be performed according to SOPs, and the Quality Control (QC) analysts should be trained in these SOPs as well. The analytical tests should have been previously validated by normal analytical methods validation and documented as such. Also, the equipment used to prepare the product must be documented as qualified for its installation, operation, and performance, commonly referred to as IQ, OQ, and PQ. There are methods for performing such qualification retrospectively, but for the purpose of this discussion it is only important to note that the equipment must be qualified.

Relating Process Validation to Existing Processes

Existing processes may lack developmental data for in-process ranges and release testing. If a retrospective analysis of existing data is used to establish process ranges, including input, output, and in-process testing parameters, then the process can be treated like a new process by following the basics of Prospective Process Validation. The difference between traditional Prospective Process Validation and Prospective Process Validation based on retrospective analysis is that, in place of developmental data to establish ranges, the retrospective analysis reviews data from past Batch Production Records, QC test reports, product specifications, etc. Thus, the items that need to be in place are: approved SOPs and Batch Records with personnel training; equipment qualification (IQ, OQ, PQ); validated QC methods; and approved SOPs and training for QC personnel. With these in place, all that is missing is a Process Validation Protocol with defined ranges. The best sources of this information are approved completed batch records, process deviation reports, QC release data, and small-scale studies. From these, the following items must be completed:

  • Critical parameters and input and output parameters must be defined.
  • A statistically valid time frame or number of batches must be determined.
  • The data used to establish the parameters must be extracted from controlled documents.
  • The data extracted from the controlled documents will be analyzed to establish ranges.
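The last two bullet points, establishing ranges from extracted historical data, typically amount to computing a mean and standard deviation per parameter and proposing a range of mean ± 3 standard deviations. A minimal sketch with hypothetical pH readings follows; the multiplier of 3 and any outlier handling are choices the validation team must justify, and the data shown are invented for illustration:

```python
from statistics import mean, stdev

def retrospective_range(values, k=3.0):
    """Propose a validation range as mean +/- k standard deviations.

    `values` are historical in-process results extracted from controlled
    documents (batch records, QC release data). k=3 covers ~99.7% of a
    normally distributed population.
    """
    m, s = mean(values), stdev(values)
    return (m - k * s, m + k * s)

# Hypothetical pH readings from 12 consecutive batch records
ph_history = [6.9, 7.0, 7.1, 7.0, 6.8, 7.2, 7.0, 6.9, 7.1, 7.0, 7.0, 6.9]
low, high = retrospective_range(ph_history)
```

The resulting (low, high) pair would become the in-process acceptance range written into the Process Validation Protocol, subject to review against product requirements.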

Each of these steps is examined in further detail in the following sections.

Critical parameters and input and output parameters defined.

The Guidelines on General Principles of Process Validation (15 May 1987) state that:

The validity of acceptance specifications should be verified through testing and challenge of the product on a sound scientific basis during the initial development and production phase.1

It is important to determine which parameters in your process are critical to the final product. When determining these parameters and attributes, a variety of personnel with different expertise should be involved. Assembling a team of professionals is the starting point, and this committee should be a multi-disciplinary team including Quality, Validation, Systems Engineering, Facility Engineering, Pharmaceutical Sciences (or R&D), and Manufacturing. When determining which parameters and attributes are critical, it is important to consider those which, if not controlled or achieved, would have an adverse effect on the product. A risk assessment should be performed to analyze what the risk is and what the consequences are if a specific parameter or attribute is not controlled or achieved (e.g., the resulting product would be flawed). Risk assessment is defined by the Ontario Ministry of Agriculture, Food and Rural Affairs as:

  1. the probability of the negative event occurring because of the identified hazard,
  2. the magnitude of the impact of the negative event, and
  3. consideration of the uncertainty of the data used to assess the probability and the impact of the components. 2
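The three components above lend themselves to a simple numeric screen. The sketch below is illustrative only: the 1-5 scales, the `data_confidence` discount, and the function name are assumptions, not part of any regulatory definition.

```python
# Hypothetical risk screen combining the three components listed above:
# probability of the negative event, magnitude of its impact, and the
# uncertainty of the data behind both estimates.

def risk_score(probability: int, impact: int, data_confidence: float) -> float:
    """probability and impact on an assumed 1-5 scale; data_confidence
    in (0, 1], where lower values mean less trustworthy data."""
    if not (1 <= probability <= 5 and 1 <= impact <= 5):
        raise ValueError("probability and impact must be on the 1-5 scale")
    if not (0 < data_confidence <= 1):
        raise ValueError("data_confidence must be in (0, 1]")
    # Poorly characterized data inflates the score (component 3), pushing
    # the parameter toward "treat as critical until proven otherwise".
    return probability * impact / data_confidence

# A moderately likely (3), high-impact (5) failure with well-characterized
# data (0.9) outranks a rare, low-impact one.
print(round(risk_score(3, 5, 0.9), 2))  # 16.67
print(risk_score(2, 1, 1.0))            # 2.0
```

Any parameter scoring above a committee-agreed threshold would then be carried into the process validation as critical.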


Figure 2 lists general questions to consider when assessing risk, while Figure 3 shows an example of Fault Tree Analysis, a formal approach to evaluating risk in which a Top Level Event is observed and, through questions and observations, the cause of the event is determined.

1. National Center for Drugs and Biologics and National Center for Devices and Radiological Health, “Guidelines on General Principles of Process Validation,” Rockville, MD, 15 MAY 1987.
2. Ontario Ministry of Agriculture, Food and Rural Affairs (2000), Queen’s Printer for Ontario, Last Updated March 22, 2000; WEB:

Process Validation of Existing Processes and Systems Using Statistically Significant Retrospective Analysis – Part Two of Three


A Statistically Valid Time Frame or Number of Batches

How large a sample set of previously recorded data is needed to determine ranges that are truly representative of the process, so that the ranges will be useful in the validation effort rather than set it up for failure? This is a difficult question to answer, and it is important that the batches selected have no changes between them, i.e., that they were produced under the same processing conditions. The draft FDA Guidance for Industry, Manufacturing, Processing, or Holding Active Pharmaceutical Ingredients, from March of 1998, suggests that 10-30 consecutive batches be examined to assess process consistency. 1

This is a good target statistically, because when selecting a sample size or population the concept of Normality, or Degree of Normality, becomes important. As a general rule of thumb, the population should be large enough that the distribution of the samples approaches a Normal or Gaussian distribution (one defined by a bell-shaped curve). In theory, if the samples are normally distributed, then roughly 99.7% of them (commonly rounded to 99%) will fall within the +/- 3 Standard Deviation (S.D.) range. 2 One should also think in terms of statistical significance (the P-Value), which reflects how likely it is that a given observation arose by chance from the assumed population. At the 99% level (a P-Value of 0.01), a given sample has a 1% chance of falling outside the +/- 3 S.D. range, or (assuming no relationship with other variables) a 99% chance of being representative of the population. Finally, the central limit theorem holds that as the sample size grows, the distribution of sample means approaches normality; so, in general, a sample size of 10-30, as the FDA suggests, has a good chance of behaving approximately normally.
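The ±3 S.D. coverage claim is easy to check empirically. A minimal sketch using only the standard library (the simulated pH values and random seed are illustrative):

```python
# For normally distributed data, roughly 99.7% of individual values fall
# within mean +/- 3 S.D. (often rounded to "99%" in validation texts).
import random
import statistics

random.seed(42)  # reproducible illustration
samples = [random.gauss(6.2, 0.15) for _ in range(10_000)]  # simulated pH data

mean = statistics.mean(samples)
sd = statistics.stdev(samples)
within = sum(mean - 3 * sd <= x <= mean + 3 * sd for x in samples)

print(f"fraction within +/- 3 S.D.: {within / len(samples):.4f}")  # ~0.997
```

With only 10-30 batches rather than 10,000 points, the observed fraction will be noisier, which is why the normality checks below matter.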

The data used to establish the parameters must be extracted from controlled documents.
When the number of batches to review has been selected, the next step is to determine from which documents the processing data will be extracted. Typically, the range-establishing data must be taken from approved and controlled documents (see the examples below).

Examples of Controlled Documents:

  • Batch Production Records (BPRs)
  • Quality Control (QC) Lab Reports
  • Limits established by licensure
  • Product License Application (PLA)
    – Biologics License Application (BLA)
    – New Drug Application (NDA) or Abbreviated New Drug Application (ANDA)
  • Product Release Specifications
  • Small-scale bench studies simulating the plant environment

The data extracted from the controlled documents will be analyzed to establish ranges.
Having established where the data will be selected from, the data must then be analyzed for specific trends so as to define ranges for the Process Validation Protocol’s acceptance criteria. These acceptance criteria are what the “actual” process data collected during execution of the Protocol will be compared against in order to verify acceptance. This is where much thought needs to be applied, so that the acceptance criteria are neither so tight that failure is imminent nor so broad that meeting them proves nothing. Listed below are general steps that can be incorporated into the analysis.

  1. Draw “X-Charts” (Trend Charts) and analyze for outliers
  2. Apply a model to determine Normality
  3. Determine the +/- 3 S.D. range and plot on the Trend Chart
  4. Determine the Confidence Interval as compared to +/- 3 S.D.
  5. Calculate Process Capability Indices
  6. Record the Maximum and Minimum
  7. Assign the Acceptance Criteria Range and justify it

Drawing Trend Charts

Trend Charts, also referred to as X-Charts, are a good way of plotting data points from a data set where the target is the same metric (for example, pH as measured at a specific point in the process). It is a matter of defining the X-axis by the number of samples and the Y-axis by the metric being measured. For example, the X-axis could list the batches by batch number and the Y-axis could be pH. Figure 4 is an example of this type of trend chart. Presented graphically, the data can be appreciated with respect to setting a range.

With the data plotted, one can quickly assess any visible trends and then begin applying statistics to the data. It is important to determine whether there are outliers in the data. Outliers may exist and can usually be rationalized by adverse events in processing, as long as those events are reported appropriately. Outliers can also exist as samples that are “statistically insignificant”: since the P-Value reflects how likely a given sample is under the assumed distribution, outliers will have a very low P-Value. One method for determining outliers is to use a box-plot, where a box is drawn from a lower point (typically the 25th percentile) to an upper point (typically the 75th percentile). The “H-spread” is defined as the distance between the upper and lower points. 3 Outliers are then any data points that fall outside a predetermined multiple of the H-spread. For example, if the lower and upper outlier limits are defined as 4 times the H-spread beyond the box, anything above or below these limits is statistically insignificant and is an outlier.
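The H-spread screen described above can be sketched as follows. The pH values are illustrative, and the multiplier `k` is a choice: the conventional box-plot value is 1.5, while the 4× multiplier mentioned above is far more permissive.

```python
import statistics

def outlier_limits(data, k=1.5):
    """Box-plot outlier screen: the H-spread is Q3 - Q1, and any point
    more than k H-spreads outside the box is flagged as an outlier."""
    q1, _, q3 = statistics.quantiles(data, n=4)
    h_spread = q3 - q1
    return q1 - k * h_spread, q3 + k * h_spread

batch_ph = [6.2, 6.3, 6.1, 6.25, 6.2, 6.15, 6.3, 6.2, 7.9]  # last reading suspect
low, high = outlier_limits(batch_ph)
outliers = [x for x in batch_ph if x < low or x > high]
print(outliers)  # the 7.9 reading is flagged for investigation
```

A flagged point would then be checked against deviation reports before being excluded from the range-setting data.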

Apply a model to determine Normality

With the accumulated data plotted, the Degree of Normality should be investigated so that the data can be analyzed by the appropriate method. There are several models for determining the Degree of Normality; common ones are the Kolmogorov-Smirnov test, the Chi-Square goodness-of-fit test, and the Shapiro-Wilks’ W test. 4 Once the Degree of Normality is determined, a more appropriate statistical method can be applied for setting ranges. If the data is determined to be non-Normal, there are two approaches to evaluating it. The first is to apply a nonparametric statistical model or a transformation toward normality (e.g. the Box-Cox Transformation 5); however, nonparametric tests are considered less powerful and less flexible in the conclusions they provide, so it is preferred to increase the sample size until a normal distribution is approached. If the data is determined to be Normal, or the sample size is increased so that the data is distributed more normally, then the data can be better analyzed for its range characteristics.
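Of the tests named above, the Kolmogorov-Smirnov distance is simple enough to sketch with the standard library alone. This is a rough screen, not a full test: fitting the mean and S.D. from the same sample biases the usual critical values (the Lilliefors correction addresses this), and the data below is illustrative.

```python
import math
import statistics

def normal_cdf(x, mu, sigma):
    """Cumulative distribution of a Normal(mu, sigma) at x."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def ks_statistic(data):
    """Kolmogorov-Smirnov distance between the empirical distribution
    and a normal distribution fitted to the sample."""
    xs = sorted(data)
    mu, sigma = statistics.mean(xs), statistics.stdev(xs)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        cdf = normal_cdf(x, mu, sigma)
        # Compare the fitted CDF to the empirical CDF just before
        # and just after each ordered observation.
        d = max(d, abs((i + 1) / n - cdf), abs(cdf - i / n))
    return d

near_normal = [6.1, 6.15, 6.18, 6.2, 6.2, 6.22, 6.25, 6.3]
skewed = [1, 1, 1, 1, 2, 2, 3, 15]
print(ks_statistic(near_normal) < ks_statistic(skewed))  # True
```

A large distance relative to the appropriate critical value would prompt either a transformation or a larger sample, as discussed above.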


Determine the +/– 3 SD range and plot on Trend Chart

The data, having now been displayed graphically, should be analyzed mathematically. This can be done with simple statistics: the mean (the average of the samples in the population) and the standard deviation (the measure of variation of the population about the mean) are determined. If the distribution proves to be normal, whether from the normality tests above or by selecting a large enough population that the central limit theorem predicts near-normality, then roughly 99.7% of the data will fall within the +/- 3 SD range. Using the example from Figure 4, the data is analyzed for its mean and standard deviation using the formulas displayed in Figure 5. The +/- 3 SD limits can then be drawn on the trend chart at their values, graphically displaying how the per-batch data fits within the statistical limit of +/- 3 SD (see Figure 6).
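The mean, S.D., and ±3 S.D. limits of Figures 5 and 6 can be computed directly; a minimal sketch, with illustrative pH values standing in for the figure's data:

```python
import statistics

# Batch results as they would appear on the trend chart (illustrative pH data).
batch_ph = [6.21, 6.25, 6.18, 6.30, 6.22, 6.19, 6.27, 6.24, 6.20, 6.26]

mean = statistics.mean(batch_ph)
sd = statistics.stdev(batch_ph)          # sample standard deviation
lower, upper = mean - 3 * sd, mean + 3 * sd

print(f"mean = {mean:.3f}, S.D. = {sd:.4f}")
print(f"+/- 3 S.D. limits: {lower:.3f} to {upper:.3f}")
```

The limits would then be drawn as horizontal lines on the trend chart, as in Figure 6.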

  1. U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research, Guidance for Industry, Manufacturing, Processing, or Holding Active Pharmaceutical Ingredients, March 1998, 36
  2. StatSoft, Inc. (1999). Electronic Statistics Textbook. Tulsa, OK: StatSoft. WEB:
  3. Rice Virtual Lab in Statistics (1993-2000), David M. Lane, HyperStat Online, Houston TX, WEB:
  4. StatSoft, Inc. (1999). Electronic Statistics Textbook. Tulsa, OK: StatSoft. WEB:
  5. Box, G.E.P., and Cox, D.R. (1964), “An Analysis of Transformations,” J. Roy. Stat. Soc., Ser. B., 26, 211.


Process Validation of Existing Processes and Systems Using Statistically Significant Retrospective Analysis – Parts Three of Three


Determine the Confidence Interval as compared to +/- 3 S.D.

With the +/- 3 S.D. ranges determined, it is worth evaluating what confidence there is that the next data point will fall within this range. The rationale is to justify that the +/- 3 S.D. range provides confidence that 99% of the data is within that range. Strictly speaking, a confidence interval bounds the estimate of the process mean (an interval intended to contain the next individual measurement is a prediction interval), and the confidence level chosen is typically 99% or greater. Thus a 99% confidence interval means there is 99% assurance that the true value lies within the range. To calculate a 99% confidence interval, one needs the area under the standard normal curve (the z value), the mean, the standard deviation, and the population size, using the formula in Figure 7. The confidence interval, added to our previous example, is displayed in Figure 8.

Figure 7: Definition of Confidence Interval Formulas.

Figure 8: Trend Chart with +/- 3 S.D. and Confidence Interval.

Following our example:

Confidence Interval (99.9%) = 6.23 ± z(99.9%) × (0.151202 / √30)

Confidence Interval (99.9%) = 6.02 to 6.95

The confidence interval at 99.9% is slightly narrower than the +/- 3 S.D. range, which is consistent with the +/- 3 S.D. range containing nearly all (about 99.7%) of the individual data points when the data is normally distributed.
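The z-based interval of Figure 7 can be sketched as follows. The data is the same illustrative pH series used earlier, not the article's figure data; the z values come from the standard normal table.

```python
import math
import statistics

def confidence_interval(data, z=2.576):
    """Two-sided z-based confidence interval for the process mean:
    mean +/- z * s / sqrt(n).  z = 2.576 gives ~99% confidence;
    z = 3.291 gives ~99.9%."""
    mean = statistics.mean(data)
    half_width = z * statistics.stdev(data) / math.sqrt(len(data))
    return mean - half_width, mean + half_width

batch_ph = [6.21, 6.25, 6.18, 6.30, 6.22, 6.19, 6.27, 6.24, 6.20, 6.26]
low, high = confidence_interval(batch_ph)
print(f"99% CI for the mean: {low:.3f} to {high:.3f}")
```

Because the half-width shrinks with √n, this interval for the mean is narrower than the ±3 S.D. band for individual values, matching the comparison above.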

As can be seen in the trend charts, the data fits well within the +/- 3 S.D. range, so the confidence is very high that the next data point collected will fall within this range. This range may therefore be appropriate to use as acceptance criteria based on the statistics. If the confidence interval were wider than the +/- 3 S.D. range, the data would have to be investigated for errors in calculating the degree of normality, the +/- 3 S.D. range, the confidence interval, or the outliers, or for errors in the sampling technique, to show that it was not a computational error.

Process Capability Indices

Another method of setting ranges to be used as acceptance criteria is the Process Capability Indices, defined by:

Cpu = (USL − µ) / 3σ

Cpl = (µ − LSL) / 3σ

where µ and σ are the process mean and standard deviation (3σ being the 3 S.D. used in our example), and:

USL = Upper Specification Limit

LSL = Lower Specification Limit

An industry-accepted standard Cp would be 1.33. 1 This would mean that only about 0.003% of the testing results would be out of specification, or 99.997% would be within specification: a similar concept to the confidence level.
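The indices can be computed directly from the formulas above; a minimal sketch, with a hypothetical pH specification of 6.0-6.5 applied to the mean and S.D. of the earlier illustrative data:

```python
def process_capability(mean, sd, lsl, usl):
    """Cpu, Cpl, and Cpk = min(Cpu, Cpl), with sigma estimated by the
    sample S.D. (so 3 * sd is the 3-sigma denominator used above)."""
    cpu = (usl - mean) / (3 * sd)
    cpl = (mean - lsl) / (3 * sd)
    return cpu, cpl, min(cpu, cpl)

# Hypothetical specification limits for the illustrative pH data.
cpu, cpl, cpk = process_capability(mean=6.232, sd=0.0385, lsl=6.0, usl=6.5)
print(f"Cpu = {cpu:.2f}, Cpl = {cpl:.2f}, Cpk = {cpk:.2f}")
print(cpk >= 1.33)  # meets the industry-accepted target
```

Taking the minimum of the two one-sided indices (Cpk) captures whichever specification limit the process sits closer to.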

Recording the Maximum and Minimum

Recording the maximum and minimum values in the data is important because it is a quick way to see whether all of the data is within the +/- 3 S.D. range. If the maximum and minimum are within the +/- 3 S.D. range, there is an additional level of confidence, since all of the data would be within the range. Lastly, if the data is determined to be non-normally distributed, the +/- 3 S.D. range may predict too high a possibility of failure; in the interim, the maximum and minimum values can be selected as the range until further data can be collected to refine it (this refers back to increasing the sample size in order to approach a more normal distribution).

Assign Acceptance Criteria Range and Justify

Using all of the above analysis techniques, knowledge of the process, and agreement by a cross-discipline committee, acceptance criteria ranges can be assigned for the critical parameters and attributes. A general course of action would be to record all the data for a given point in a spreadsheet; calculate the mean, S.D., population size, +/- 3 S.D., and 95% and 99% confidence intervals; plot the trend charts with the appropriate ranges; and then decide which range makes the best sense. When selecting the acceptance criteria, a cross-functional committee should be utilized, with backgrounds in QA, Manufacturing, Validation, R&D, and Engineering present. The ranges should be selected and justified by scientifically sound data and conclusions. The ranges should be within the PAR (Proven Acceptable Range) for the product, which means that if +/- 3 S.D. is selected, the range should be checked at the upper and lower limits to verify that acceptable product is produced. This should be done prior to final agreement on the range and its incorporation into the validation protocol. A report should be written to document the ranges, with the rationale for selecting them, the justification for determining the limits, and any determination that the ranges are within the PAR. Additionally, those ranges which are not to be included should be discussed within the report to justify why they are not recorded. A process validation protocol should then be prepared with these ranges as acceptance criteria, and the process should be run at a target within the acceptance criteria ranges at least three consecutive times, using identical procedures, to verify that the process is valid.


Since the ideal case of validating a process during its implementation does not always exist in the pharmaceutical, biopharmaceutical, biotechnology or medical device industries, it may be important to determine a way to validate these processes using historical data. The historical data can be found in a variety of places as long as it is approved (e.g. approved and completed BPRs or quality control release documents, etc). A cross-functional team should perform a risk assessment on the parameters and attributes to determine which ones would be included in the process validation. A range establishing study for the attributes and parameters should be performed to evaluate historical data and analyze the data set for the concepts of normality, variation (standard deviation), and confidence. With a high degree of confidence, acceptance criteria ranges should be set for each parameter and attribute and a process validation protocol should be written with the appropriate ranges. This protocol should be approved and executed at target settings within the acceptance criteria ranges, from the start of the manufacturing process to the finish using qualified equipment, approved SOPs, and trained operators. In a final report for the process validation, the degree to which the process is valid would be determined by the satisfaction of the approved acceptance criteria.


1. Box, G.E.P., and Cox, D.R. (1964), “An Analysis of Transformations,” J. Roy. Stat. Soc., Ser. B., 26, 211.

2. Kieffer, Robert and Torbeck, Lynn, (1998), Pharmaceutical Technology, (June), 66.

3. Lane, David M., Rice Virtual Lab in Statistics (1993-2000), HyperStat Online, Houston TX, WEB:

4. National Center for Drugs and Biologics and National Center for Devices and Radiological Health,(1987) “Guidelines on General Principles of Process Validation,” Rockville MD. 15 MAY.

5. Ontario Ministry of Agriculture, Food and Rural Affairs (2000), Queen’s Printer for Ontario, Last Updated March 22, 2000; Web:

6. StatSoft, Inc. (1999). Electronic Statistics Textbook. Tulsa, OK: StatSoft. WEB:

7. U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research, Center for Biologics Evaluation and Research, Center for Veterinary Medicine, “Guidance for Industry: Manufacturing, Processing, or Holding Active Pharmaceutical Ingredients,” Rockville MD. March 1998, 36.



Oct 14 2016



Inaugural ACS Industry Symposium, 11-12 November 2016 in Hyderabad, India

Recent Advances in Drug Development

Register Today for the ACS Symposium in India on Recent Advances in Drug Development


Register now for the inaugural ACS Industry Symposium, 11-12 November 2016 in Hyderabad, India. Be sure to secure your seat today as rates will increase on 27 October!
The theme of the Symposium is Recent Advances in Drug Development. The event will feature lectures by the world’s leading researchers and experts in the pharma industry, including:

  • Dr. Peter Senter of Seattle Genetics
  • Dr. Jagath Reddy Junutula of Cellerant Therapeutics, Inc.
  • Dr. Ming-Wei Wang of the Shanghai Institute of Materia Medica, Chinese Academy of Sciences

This is an exclusive event being organized in partnership with Dr. Reddy’s Laboratories for pharma professionals throughout India. Space is limited so register today!

Please visit our website to learn more about the speakers and the program.

Register today to ensure your access to the ACS Industry Symposium. We look forward to seeing you in Hyderabad in November.




