Category Archives: Validation

Oral Solid Dose – Critical Properties

Hello good people of the world! Today’s post is the first in a series covering considerations around the commissioning, qualification, and validation of facilities, systems, and equipment involved in the manufacture of oral solid dose (OSD) products. OSD is a widespread method of pharmaceutical delivery, covering well-known medicines such as aspirin, Viagra, and many antibiotics. Solid doses can take the form of powders, tablets, capsules, pills, lozenges, granules, and more.

Here we’re going to cover the physical and chemical properties that should be considered in equipment design.

First, environmental factors:

  1. Temperature and Humidity: temperature and humidity should be controlled even if the product is not sensitive, as most processes are susceptible to flow issues at temperature and/or humidity extremes.
  2. Light: some OSD products are sensitive to light (especially UV) and must be protected from sunlight, and in some cases even indoor lighting.
  3. Oxygen: some products may also be sensitive to oxygen exposure.

Second, process factors:

  1. Particle size and size distribution: powders inevitably have some variation in particle size that must be understood and controlled.
  2. Particle shape: similarly to size, particles will have variation in shape.
  3. Surface properties: are the particles smooth or rough? Do they stick together? Do they readily absorb moisture? Surface properties must be understood.
  4. Particle strength: particles will break down under enough force. Particle strength must be understood and undue stress avoided in manufacturing processes.
  5. Density, porosity, and packing: how do particles pack? Values such as minimum bulk density, poured bulk density, and tapped bulk density should be understood.
  6. Cohesion in powders: related to surface properties, how do particles stick together? Magnetic, electrostatic, and intermolecular forces may be in play and should be understood.
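As a concrete illustration of the density and packing point, two standard powder flowability indices can be computed from the poured and tapped bulk densities. The formulas (Carr's Compressibility Index and the Hausner ratio) are standard; the rule-of-thumb thresholds in the comment are commonly cited guidance, not hard limits:

```python
def flowability_indices(poured_bulk_density: float, tapped_bulk_density: float) -> dict:
    """Compute standard flowability indices from bulk densities (same units, e.g. g/mL)."""
    if poured_bulk_density <= 0 or tapped_bulk_density <= 0:
        raise ValueError("densities must be positive")
    # Carr index: percent volume reduction on tapping; Hausner ratio: tapped/poured
    carr_index = 100.0 * (tapped_bulk_density - poured_bulk_density) / tapped_bulk_density
    hausner_ratio = tapped_bulk_density / poured_bulk_density
    return {"carr_index_pct": carr_index, "hausner_ratio": hausner_ratio}

# A Carr index below ~15% generally indicates good flow; above ~25%, poor flow.
print(flowability_indices(0.40, 0.50))
```

A powder with poured bulk density 0.40 g/mL and tapped bulk density 0.50 g/mL gives a Carr index of about 20% and a Hausner ratio of 1.25, i.e. fair-to-passable flow by the usual rules of thumb.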

What factors do you consider in your OSD manufacturing process?

Like this MWV (Mike Williamson Validation) blog post? Be sure to like, share, and subscribe!

Validation Program Tenets

Hello good people of the world! What are the overarching tenets that you go to when making decisions related to your validation program? The regulations and guidance from industry only go so far and you will be regularly tasked with situations unique to your program. How do you know what is the right way to go in the grey areas? I like to keep these tenets in mind:

  1. The manufacturing process should be the most complex process on the site. Reduce complexity everywhere else. Reduce the number of deliverables. Reduce the number of process steps.
  2. Requirements feed specifications feed test protocols. Remember that you should always be able to trace a test case to a requirement through the specifications.
  3. Compliance is not binary; you are accepting degrees of regulatory risk. Make sure you understand the risk and that you accept it.
  4. Good Manufacturing Practices are not just from the CFRs. World-wide best practices need to be considered and applied where applicable.
  5. It’s all about documentation. If it’s not documented it didn’t happen. Create a logical narrative, and you’re already mostly there.
  6. Our primary purpose is to create documentation for regulatory agencies. Take any kind of writing class, and one of the first things you’ll learn is: know who your audience is and write for them. While it’s great that validation documentation can be used for commissioning, process improvement, etc., that must not come at the cost of its primary purpose.
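Tenet 2 above can even be checked mechanically. Here's a minimal sketch of a traceability check (the document IDs and the dict-based trace structure are hypothetical) that flags any test case that cannot be traced back to a requirement through a specification:

```python
# Hypothetical trace data: each specification traces to a requirement,
# and each test case traces to a specification.
requirements = {"REQ-001", "REQ-002"}
specifications = {"SPEC-A": "REQ-001", "SPEC-B": "REQ-002"}   # spec -> requirement
test_cases = {"TC-01": "SPEC-A", "TC-02": "SPEC-B", "TC-03": "SPEC-X"}  # test -> spec

def orphan_tests(test_cases: dict, specifications: dict, requirements: set) -> list:
    """Return test cases that cannot be traced to a requirement via a specification."""
    return [tc for tc, spec in sorted(test_cases.items())
            if specifications.get(spec) not in requirements]

print(orphan_tests(test_cases, specifications, requirements))  # ['TC-03']
```

TC-03 is flagged because it points at a specification that doesn't trace to any requirement, which is exactly the gap a reviewer should catch.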

What are some of your go-to tenets?  Comment below.

Like this MWV (Mike Williamson Validation) blog post? Be sure to like, share, and subscribe!

PLC/HMI IOQ – What to Test?

Hello good people of the world! Today’s post is on the initial control system Installation and Operational Qualification (IOQ) of a simple system consisting of a Human-Machine Interface (HMI), a Programmable Logic Controller (PLC), and any number of end devices (valves, pumps, sensors, etc.). The question is: what should be tested?

Obviously there’s a ton of guidance out there (see e.g.: GAMP) that will have a lot more detail than this post. The purpose here is to list at a high level the tests that could be expected. So let’s get started!

Installation Qualification
IQ can be its own protocol or combined with OQ in an IOQ for cases without a ton of complexity. IQ is supposed to verify the installation of hardware, software, and any peripherals. You also want to check what documentation is available/applicable here. IQ tests may include:

  • Documentation Verification (e.g. SOPs, EREC/ESIG assessment, operating/maintenance manuals, panel and electrical drawings, etc.)
  • Hardware Verification: verify the make and model of major components at a minimum
  • Software Verification: verify/record software versions. You’ve got to know what you’ll be OQ’ing!
  • Configuration Verification: verify any hardware and/or software configuration. This could be two tests, one for hardware, one for software.
  • Loop Check Verification: verify loop checks are performed.
  • Alarm Configuration Verification: ideally alarms are set up in such a way that you don’t have to functionally test them all!
  • Any other critical installation items
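The software verification step in particular lends itself to a simple checklist comparison of recorded versus specified versions. A sketch (the component names and version strings are made up for illustration):

```python
def version_discrepancies(expected: dict, installed: dict) -> list:
    """Compare installed software versions against the specification.

    Returns (component, expected_version, installed_version) tuples for
    every mismatch or missing component; an empty list means IQ-clean.
    """
    return [(comp, ver, installed.get(comp, "<not installed>"))
            for comp, ver in expected.items()
            if installed.get(comp) != ver]

expected = {"HMI runtime": "12.0", "PLC firmware": "4.2.1"}
installed = {"HMI runtime": "12.0", "PLC firmware": "4.3.0"}
print(version_discrepancies(expected, installed))  # [('PLC firmware', '4.2.1', '4.3.0')]
```

Any non-empty result becomes a discrepancy to resolve before moving on to OQ; you've got to know what you'll be OQ'ing.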

Operational Qualification
OQ is the meat of your control system qualification. Here you want to test the critical functions that you have hopefully identified earlier (see here for one approach). OQ may test:

  • Interlock Verification including e-stops. A lot of interlocks are safety/business related, but they’re often included in OQ due to how critical they are.
  • Functional Alarm Verification – be sure to include data loss/communication alarms
  • HMI Navigation and Layout Verification
  • Restart/Recovery Verification
  • Sequence of Operations Verification

What kinds of testing are you sure to cover in your control system IOQ protocols? Comment below.

Like this MWV (Mike Williamson Validation) blog post? Be sure to like, share, and subscribe!

ISPE’s Commissioning and Qualification Guide Second Edition

Hello good people of the world! Today’s post covers ISPE’s release of the second edition of their commissioning and qualification guide. This is volume 5 of the baseline guides. The first edition was first released way back in March 2001, so we should expect this to be a significant revision. Please note this guide is not available for free.

One very nice thing about this second edition is that it not only updates the first edition of the volume 5 guide, but incorporates scope from two other now-outdated guides as well: “Science and Risk-Based Approach for the Delivery of Facilities, Systems, and Equipment” and “Applied Risk Management for Commissioning and Qualification.” So if you have these in your library, you can safely archive them.

It’s always important to note that industry guides such as this one do not constitute regulations and are not required to be followed. It is often the case, however, that best practices documented in guides become industry standard and then set expectations for regulators. Per the guide, it is intended to comply with EU GMP Annex 15, FDA Guidance on Process Validation, and ICH Q9.

The table of contents shows the following sections:

  1. Introduction
  2. User Requirements Specification
  3. System Classification
  4. System Risk Assessment
  5. Design Review and Design Qualification
  6. C&Q Planning
  7. C&Q Testing and Documentation
  8. Acceptance and Release
  9. Periodic Review
  10. Vendor Assessment for C&Q Documentation Purposes
  11. Engineering Quality Process
  12. Change Management
  13. Good Documentation Practice for C&Q
  14. Strategies for Implementation of Science and Risk-Based C&Q Process

And the following appendices:

  1. Regulatory Basis
  2. User Requirements Specification Example
  3. System Classification Form Example
  4. Direct Impact System Examples
  5. System Risk Assessment Example
  6. Design Review/Design Qualification Examples
  7. Supporting Plans
  8. System Start-Up Examples
  9. Discrepancy Form Example
  10. Qualification Summary Report Examples
  11. Periodic Review Example
  12. Periodic Review for Controlled Temperature Chambers
  13. Vendor Assessment Tool Example
  14. Organizational Maturity Assessment Example
  15. Approach to Qualifying Legacy Systems or Systems with Inadequate Qualification
  16. References
  17. Glossary

You’ll have to purchase the guide to get all the details, but below are some highlights that stuck out to me:

  • This second edition introduces the term Critical Design Elements (CDEs). CDEs are defined as “design functions or features of an engineered system that are necessary to consistently manufacture products with the desired quality attributes.”
  • Concepts that were removed from this edition of the guide include Component Criticality Assessment, Enhanced Commissioning, Enhanced Design Review, Enhanced Document, Indirect Impact (systems are either direct impact or not direct impact now), and the V-Model.
  • A Direct Impact system is defined as a system that directly impacts product CQAs, or directly impacts the quality of the product delivered by a critical utility system. All other systems are considered not direct impact. An example in section 3 demonstrates that the previously categorized “indirect impact” systems would become not direct impact systems and would be commissioned only, although the commissioning for these systems may be more robust than for a purely “no impact” system. The guide provides an eight (8) question process for determining whether a system is direct impact.
  • System boundaries should be marked on design drawings.
  • Inputs to the URS should include: CQAs, CPPs, regulatory, organization quality, business, data integrity and storage, alarm, automation, and health, safety, and environmental requirements, and engineering specifications and industry standards. The example URS template does include a classification of each requirement (e.g. business, safety, quality).
  • A system risk assessment is performed to identify CDEs and the controls required to mitigate risks. Standard off-the-shelf systems typically do not require a risk assessment. Risk levels are defined as low, medium, and high, and the risk assessment approach is not a typical FMECA process. Instead, each CQA at each step gets one entry describing how the CQA can be impacted, what design controls exist around that CQA, and any alarm or procedural controls to mitigate risk. The residual risk post-controls is recorded as low, medium, or high.
  • Design Qualification looks somewhat informal: no DQ protocol, but a DQ report that summarizes other documents (URS, SIA) and design review meetings.
  • A C&Q plan should include clear scope, the execution strategy, documentation expected for each system (URS, FAT, SAT, IOQ, SOPs, etc.), and roles and responsibilities (e.g. approval matrix).
  • The discrepancy form has closure signatures only (no pre-implementation signatures)
  • For legacy systems without adequate C&Q documentation, focus should be on identifying product and process user requirements including Critical Quality Attributes (CQAs) and Critical Process Parameters (CPPs), and then the Critical Design Elements (CDEs) that affect them. It is necessary to confirm that accurate drawings exist, that maintenance files are up-to-date, and there is test evidence to support changes since commissioning. A risk-based approach can be used to qualify the system in the absence of typical C&Q documentation.
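The per-CQA risk assessment entry described above maps naturally onto a simple record. A sketch of what one entry might look like (the field names and example content are my own, not taken from the guide):

```python
from dataclasses import dataclass, field

RISK_LEVELS = ("low", "medium", "high")

@dataclass
class CQARiskEntry:
    """One system risk assessment entry: how a CQA can be impacted at a
    process step, the controls around it, and the residual risk."""
    process_step: str
    cqa: str
    impact: str
    design_controls: list = field(default_factory=list)
    procedural_controls: list = field(default_factory=list)  # incl. alarms
    residual_risk: str = "high"  # low/medium/high, assessed after controls

    def __post_init__(self):
        if self.residual_risk not in RISK_LEVELS:
            raise ValueError(f"residual_risk must be one of {RISK_LEVELS}")

entry = CQARiskEntry(
    process_step="Blending",
    cqa="Blend uniformity",
    impact="Over- or under-blending can cause content non-uniformity",
    design_controls=["Fixed blend time and speed enforced by the recipe"],
    procedural_controls=["Blend uniformity sampling per SOP"],
    residual_risk="low",
)
print(entry.residual_risk)
```

One entry per CQA per step, rather than a full FMECA, keeps the assessment readable while still documenting the controls and the residual risk.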

Do you use the ISPE guides for your C&Q approach? Comment below.

Like this MWV (Mike Williamson Validation) blog post? Be sure to like, share, and subscribe!

 

Validation Project Plans

Hello good people of the world! Today’s post is about Validation Project Plans, which is a specific type of project plan for projects in the pharmaceutical, biotechnology, and medical device regulated industries. This post covers Validation Project Plans for pharmaceutical/biotechnology industries in particular.

Often I’ve seen Validation Project Plans contain a lot of fluff but little meat, making them of less value to the project team. A good project plan clearly documents the following, at a minimum:

  1. What facilities, systems, and equipment are in scope of the plan
  2. What are the expected activities and deliverables
  3. Who is responsible for what
  4. What is the validation approach and rationale for that approach
  5. What happens after the validation scope covered in the plan is completed (i.e. ongoing requirements)

Note I do not include project cost or schedule in a project plan, because these often change rapidly and should be maintained in a less controlled, more flexible manner, e.g. with scheduling software for the schedule.

The plan itself should be controlled (i.e. approved and revision controlled) as early as possible in the project, but late enough that the scope will not change (too much).

Additional things to think about when drafting your plan:

  1. Commissioning versus Qualification versus Validation. If your project has multiple phases (and any decent-sized project should), be sure to clearly state responsibilities and deliverables at each stage.
  2. Include references to regulations, industry guidance, and site procedures that govern your plan. Make it clear to everyone who reads the plan what framework you are working inside.
  3. The purpose and scope of the document should be clear and up front.
  4. Get buy-in from all functional groups by having them approve the document.
  5. Like all controlled documents, the plan should have version/revision history.
  6. Use tables to clearly present information.

I put together a quick template here:

Validation Project Plan Template MWV

What do you feel is necessary in a Validation Project Plan? Comment below.

Like this MWV (Mike Williamson Validation) blog post? Be sure to like, share, and subscribe!

 

 

Basic Components of a Test Form

Hello good people of the world! Today’s post is a short one on what are the basic components of a test form. In Validation, you’re going to record a lot of data, and you want this data to be well organized and easily understood. Here are the basic components I think every test form should have:

  1. Numbering! Each step should have a unique number so that it is easily identifiable and easy to reference elsewhere.
  2. A Title! What is this test all about? A short description should be provided.
  3. Purpose! What is the purpose of the test? Make it clear.
  4. Verification Steps! Clearly define what steps need to be performed.
  5. Expected Results! Clearly define what the expected results are. Does every step need an expected result? Every step can have one, so include it.
  6. Actual Results! This is where the actual data is collected. The actual results can be recorded exactly as the expected results are stated to avoid any confusion.
  7. Pass/Fail! Did the step pass or fail? This will quickly tell you. Also a good place to reference any comments.
  8. Initials/Date! In order for the data to be attributable, initials/date uniquely identifying the test executor must be included for each step.
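Components 1 and 4 through 8 above map naturally onto a per-step record, with the title and purpose living at the form level. A sketch (the field names and example steps are my own, purely illustrative):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestStep:
    """One row of a test form, mirroring the components listed above."""
    number: str                    # 1. unique step number
    instruction: str               # 4. verification step to perform
    expected_result: str           # 5. expected result
    actual_result: str = ""        # 6. recorded during execution
    passed: Optional[bool] = None  # 7. pass/fail determination
    initials_date: str = ""        # 8. attributable initials/date

def incomplete_steps(steps: list) -> list:
    """Return step numbers still missing an actual result, a pass/fail
    determination, or initials/date."""
    return [s.number for s in steps
            if not s.actual_result or s.passed is None or not s.initials_date]

steps = [
    TestStep("1.1", "Verify pump P-101 starts", "Pump starts",
             "Pump started", True, "MW 01Jan2024"),
    TestStep("1.2", "Verify low-level alarm", "Alarm annunciates"),
]
print(incomplete_steps(steps))  # ['1.2']
```

A completeness check like this is also the kind of review a protocol reviewer performs by hand before a form is closed out.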

What basic components do you include on your test forms? Comment below.

Like this MWV (Mike Williamson Validation) blog post? Be sure to like, share, and subscribe!

What NOT to Re-qualify

Hello good people of the world! Today’s post is on what NOT to include in requalification/revalidation. I was recently on a site that had a five (5) year requalification requirement for sterilizers per a site SOP, which sounds reasonable (continuous monitoring would be better). But then I noted they included in their requalification requirements a re-execution of the entire initial control system IOQ! The requalification included verification of:

  • Hardware/software installation
  • E-stop, guarding, and door interlocks
  • Restart and recovery
  • Recipe management
  • Temperature, pressure, and time control
  • Communication
  • Security

And it was expected that this would be done every five years! It just so happened that in 2014 they paid a contractor to do the work, who sadly did not help the site out by letting them know the wastefulness of such an endeavor. This is an egregious example of resource misuse, of not understanding the expectations of a validation program, and of failing to take a risk-based approach. The point of requalification/revalidation is to look for drift in processes, not to blindly repeat testing already performed.

What misunderstanding-of-validation-expectation horror stories do you have? Comment below.

Like this MWV (Mike Williamson Validation) blog post? Be sure to like, share, and subscribe!