FDA has classically defined the requirements for validation under the 21 CFR 820 and 210/211 regulations as a comprehensive testing process in which every system receives the same thorough examination regardless of its risk, complete with an exhaustive evaluation. Recent guidance and initiatives by FDA (Process Validation: General Principles and Practices) and ICH (Q11: Development and Manufacture of Drug Substances) provide a streamlined, risk-based approach under an updated life cycle management methodology.
Under this scenario, a new definition of validation has emerged, best described by FDA as “the collection and evaluation of data, from the process design stage through production, which establishes scientific evidence that a process is capable of consistently delivering quality products.” This is in contrast to the classical definition as perhaps best emphasized in the device regulations under 21 CFR 820.75:
“Where the results of a process cannot be fully verified by subsequent inspection and test, the process shall be validated with a high degree of assurance and approved according to established procedures.”
What this means is that a risk-based life cycle management approach, supported by relevant scientific rationale and evidence, can be used in lieu of the traditional top-down comprehensive approach. Many of us remember the golden rule of validation: triplicate test runs, the standard output of this classic approach. It didn’t matter whether the system was complex or simple; the test was always applied in triplicate.
Essentially, what FDA and ICH are now saying is that if you can justify a different test plan with a risk-based approach, it is fine with them. The end result is that many validation processes can be streamlined and production delays eliminated. Below is a discussion of the “best practices” approach I have successfully used to meet these updated FDA and ICH guidances.
Whether the intended validation effort is for equipment, processes or software, it is highly recommended that a User Requirement Specification (URS) be written. This provides a starting point and traceability to ensure that basic functions are established. These basic functions will be used later for assessing risks. Software validation typically also has a Functional Requirement Specification (FRS) that follows the URS in a logical, traceable way. The FRS shows how the software, post-configuration, will meet the requirements of the URS.
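The URS-to-FRS traceability described above can be sketched as a simple data structure. This is a minimal illustration, not any standard tool; all item IDs and descriptions are hypothetical, and a real traceability matrix would live in a validated document management or requirements system.

```python
from dataclasses import dataclass


@dataclass
class URSItem:
    """A single user requirement; IDs and text are illustrative."""
    item_id: str
    description: str


@dataclass
class FRSItem:
    """A functional requirement traced back to one or more URS items."""
    item_id: str
    description: str
    traces_to: list  # URS item IDs this functional spec satisfies


def untraced_requirements(urs, frs):
    """Return URS item IDs not covered by any FRS item -- a traceability gap."""
    covered = {uid for f in frs for uid in f.traces_to}
    return [u.item_id for u in urs if u.item_id not in covered]


urs = [URSItem("URS-001", "System shall record batch temperature"),
       URSItem("URS-002", "System shall restrict access by role")]
frs = [FRSItem("FRS-001", "Temperature logged every 60 s to audit trail",
               ["URS-001"])]

print(untraced_requirements(urs, frs))  # ['URS-002'] -- no functional spec yet
```

The point of the check is that every URS function must surface in the FRS before it can be risk-assessed; anything the function returns is a gap to close before the risk assessment begins.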
A risk assessment follows the URS/FRS process. Utilizing the risk management methodologies of ISO 14971, it is imperative to establish the acceptance criteria and risk levels before applying a risk assessment to the functional processes developed in the URS/FRS.
The best practices approach developed below is a three-level system with low (green), medium (yellow) and high (red) risk categories. These can be characterized as follows:
From this definition, a standard risk matrix* has been built, as shown below:
* Sample matrix – your organization MUST develop (and justify) your own criteria.
Where the columns (horizontal axis) represent safety/severity/quality, and the rows (vertical axis) represent probability/frequency/detectability.
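A matrix of this shape can be expressed as a small lookup table. The scales and color assignments below are purely illustrative; as noted above, your organization must develop and justify its own criteria.

```python
# Hypothetical 3x3 risk matrix: rows = probability/frequency/detectability,
# columns = safety/severity/quality. The levels and color assignments here
# are sample values only -- each organization must define its own.
SEVERITY = ["low", "medium", "high"]      # column index 0..2
PROBABILITY = ["low", "medium", "high"]   # row index 0..2

RISK_MATRIX = [
    # severity: low     medium    high
    ["green",  "green",  "yellow"],   # probability: low
    ["green",  "yellow", "red"],      # probability: medium
    ["yellow", "red",    "red"],      # probability: high
]


def risk_level(probability: str, severity: str) -> str:
    """Look up the color-coded risk category for a URS functional item."""
    return RISK_MATRIX[PROBABILITY.index(probability)][SEVERITY.index(severity)]


print(risk_level("medium", "high"))  # red
print(risk_level("low", "low"))      # green
```

Encoding the matrix once, after the acceptance criteria are approved, ensures every functional item from the URS/FRS is categorized by the same agreed rules rather than assessor judgment on the day.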
After developing the acceptance criteria, the process proceeds as follows:
Once risk assessments for the individual functional items from the URS have been determined, a validation approach for each functional category can be assembled. The following best practice approach outlines three types of validations that can be utilized with a risk-based process.
These criteria are then applied to the table of functional items and the output is the validation test plan, as described in the section below.
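The step of applying the criteria to the table of functional items can be sketched as a simple mapping. The three rigor levels below are assumptions chosen for illustration (full challenge testing for red, a single documented test for yellow, documentation review for green); the actual test types must come from your organization's validation SOPs.

```python
# Hypothetical mapping from risk category to validation rigor; the actual
# approaches must be defined and justified in your own validation SOPs.
VALIDATION_APPROACH = {
    "red": "full challenge testing (e.g. triplicate runs, worst-case conditions)",
    "yellow": "single documented functional test",
    "green": "verification by inspection / vendor documentation review",
}


def build_test_plan(assessed_items):
    """assessed_items: list of (functional_item, risk_color) pairs produced
    by the risk assessment; returns item -> required validation approach."""
    return {item: VALIDATION_APPROACH[color] for item, color in assessed_items}


plan = build_test_plan([
    ("Batch record e-signature", "red"),
    ("Report print formatting", "green"),
])
for item, approach in plan.items():
    print(f"{item}: {approach}")
```

The output of this step is the risk-based test plan itself: high-risk functions get the traditional exhaustive treatment, while low-risk functions are dispositioned with a documented, lighter-weight verification.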
The new guidances define process validation as “the collection and evaluation of data, from the process design stage through production, which establishes scientific evidence that a process is capable of consistently delivering quality products.” This has resulted in validation being split into three stages:
Stage 1: Process Design – The commercial process based on experience gained from development and scale-up.
Stage 2: Process Qualification – Reproducibility at commercial scale is confirmed on the basis of the process design.
Stage 3: Continued Process Verification – To show that the process is in a state of control during routine production.
Manufacturers must prove with a high degree of assurance that the product can be manufactured according to its quality attributes before a batch is placed on the market. For this purpose, data from laboratory, scale-up and pilot-scale work are meant to be used. The data are explicitly meant to cover conditions involving a wide range of process variation.
The manufacturer must:
This stage is about building and capturing process knowledge and understanding; the manufacturing process is meant to be defined and tested, which will then be reflected in the manufacturing and testing documentation. Earlier development stages do not have to be conducted under cGMP. The basis should be sound scientific methods and principles, including Good Documentation Practice. There is no regulatory expectation that the process be developed and tested until it fails, but combinations of conditions involving a high process risk should be known. In order to achieve this level of process understanding, the implementation of Design of Experiments (DOE) in connection with risk analysis tools is recommended. Other methods, such as classical laboratory tests, are also considered acceptable. Adequate documentation of the process understanding, based on rationale, is essential. Establishing a strategy for process control and understanding is considered to be the basis for this stage.
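At its simplest, a DOE screening design enumerates every combination of factor levels so that main effects and interactions can be estimated. The sketch below generates a full-factorial design with the standard library; the factors and levels are hypothetical examples, and a real design would be driven by development knowledge and the risk analysis.

```python
from itertools import product

# Hypothetical process factors and levels for a two-level screening design;
# real factors and ranges come from development data and risk analysis.
factors = {
    "temperature_C": [60, 80],
    "mix_time_min": [10, 30],
    "pH": [6.5, 7.5],
}


def full_factorial(factors: dict):
    """Enumerate every combination of factor levels (2**3 = 8 runs here)."""
    names = list(factors)
    return [dict(zip(names, levels)) for levels in product(*factors.values())]


runs = full_factorial(factors)
print(len(runs))  # 8 experimental runs
print(runs[0])    # {'temperature_C': 60, 'mix_time_min': 10, 'pH': 6.5}
```

Because the run count doubles with each added factor, fractional-factorial designs are often used in practice; the risk assessment helps decide which factors justify a run of their own.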
This stage proves that the process design is suitable for reproducibly manufacturing commercial batches. This stage contains two steps: qualification activities regarding facilities and equipment, and performance qualification (PQ). This stage encompasses the activities that are currently summarized under process validation; qualified equipment is used to demonstrate the process can create a product in conformity with the specifications. The terms Design Qualification (DQ), Installation Qualification (IQ) and Operational Qualification (OQ) are no longer described as part of the qualification. They are, however, still conceptually used within the validation plan. The plan should cover the following items:
The final stage is to keep the validated state of the process current during routine production. The manufacturer is required to establish a system to detect unplanned process variations. Data are meant to be evaluated accordingly (in-process) so that the process does not get out of control. The data must be statistically trended, and the analysis done by a qualified person. These evaluations are meant to be reviewed by the quality unit in order to detect changes in the process (alert limits) at an early stage and to be able to implement process improvements. Unexpected process changes can also occur, even in a well-developed process. Here, the guidance recommends “that the manufacturer use quantitative, statistical methods whenever feasible” in order to identify and investigate for root cause. At the beginning of routine production, the guidance recommends that the same scope and frequency of monitoring and sampling be maintained as in the process qualification stage until enough data have been collected.
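One common quantitative approach to the statistical trending described above is to derive alert limits from historical in-control data (here mean ± 3 standard deviations, a conventional but not mandated choice) and flag new results that breach them. The assay values below are invented for illustration; a real program would define the limits, sample sizes and review cadence in an approved monitoring plan.

```python
import statistics


def alert_limits(history, k=3.0):
    """Compute mean +/- k*sigma alert limits from in-control batch data."""
    mean = statistics.mean(history)
    sigma = statistics.stdev(history)  # sample standard deviation
    return mean - k * sigma, mean + k * sigma


def out_of_trend(history, new_values, k=3.0):
    """Flag new batch results falling outside the alert limits."""
    lo, hi = alert_limits(history, k)
    return [v for v in new_values if v < lo or v > hi]


# Hypothetical assay results (% label claim) from 20 in-control batches
history = [99.8, 100.1, 99.9, 100.2, 100.0, 99.7, 100.3, 100.1, 99.9, 100.0,
           100.2, 99.8, 100.1, 99.9, 100.0, 100.2, 99.8, 100.1, 100.0, 99.9]

print(out_of_trend(history, [100.1, 98.2]))  # [98.2] breaches the lower limit
```

Flagged values would trigger the review and root-cause investigation described above; tightening or widening `k` trades early detection against false alarms and is itself a decision the quality unit should justify.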
Analysis from complaints, OOS results, deviations and non-conformances can also give data/trends regarding process variability. Employees on the production line and in quality assurance should be encouraged to give feedback on the process performance. Operator errors should also be tracked in order to check if training measures are appropriate.
Finally, these data sets can be used to develop process improvements. The changes, however, may only be implemented in a structured way, with final approval by quality assurance and potential re-validation in the process qualification stage.