The Bottom Line on Medical Software Validation

Software validation for medical devices is governed by FDA regulations that are, unfortunately, anything but clear. While the need for software validation in medical equipment is self-evident, where a healthcare organization or device manufacturer goes from there is a bit more obscure. This is particularly evident when considering that these FDA regulations are nearly two dozen years old, dating back to a time when the newest IT innovation introduced that year was the 56Kbps modem.


What Does the FDA Require?

For guidance to be as applicable in 2021 as it was in 1997, it must be incredibly broad. In its most recent guidelines on Electronic Records and Electronic Signatures (last updated in 2003), the FDA states:

“We suggest that your decision to validate computerized systems, and the extent of the validation, take into account the impact the systems have on your ability to meet predicate rule requirements. You should also consider the impact those systems might have on the accuracy, reliability, integrity, availability, and authenticity of required records and signatures. Even if there is no predicate rule requirement to validate a system, in some instances it may still be important to validate the system.”

The assumption in this passage is that computerized systems and software should stand up to the risk involved in their operation. This requires accurate risk analysis, and appropriately identifying risk requires understanding (a) how the software is designed to be used and (b) how the software is actually used. This distinction is the difference between software verification and software validation.


Software Validation Versus Software Verification

In its Design Control Guidance, the FDA defines these two terms as follows:

  • “Verification means confirmation by examination and provision of objective evidence that specified requirements have been fulfilled.”
  • “Validation means confirmation by examination and provision of objective evidence that the particular requirements for specific intended use can be consistently fulfilled.”

The difference between these statements is razor-thin and hinges on two words: “intended use.” Verification confirms that software meets its design requirements and can properly and consistently execute the tasks it was programmed to perform. Validation ensures that the software meets the needs of its actual use, which is determined by the end-user and is not necessarily the use the manufacturer originally intended.

A manufacturer can perform software verification without coordinating with any particular client, but software validation can only be performed through effective communication with the end-user: how a client uses the software determines its risk profile, and proper risk analysis must identify that.
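The distinction can be made concrete with a small, hypothetical example. The dose calculator, its written requirement, and the pounds-versus-kilograms scenario below are illustrative assumptions, not taken from any real device specification:

```python
# Hypothetical example: verification vs. validation for a dose calculator.
# Names, requirements, and the usage scenario are illustrative assumptions.

def dose_mg(weight_kg: float, mg_per_kg: float) -> float:
    """Compute a dose in mg from a patient weight in kilograms."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return round(weight_kg * mg_per_kg, 2)

# Verification: the code fulfills its specified requirement
# ("dose = weight in kg x mg/kg, rounded to two decimals").
assert dose_mg(70, 0.5) == 35.0

# Validation asks whether the requirement matches actual use. If nurses
# on a ward chart weights in pounds, the fully "verified" function still
# produces a roughly 2.2x overdose -- a risk that only end-user input
# about real workflows would surface.
weight_lb = 154  # the same 70 kg patient, charted in pounds
assert dose_mg(weight_lb, 0.5) == 77.0  # verified behavior, invalid use
```

The point is not the arithmetic: the function passes every verification test yet still fails in its actual context of use, which is exactly the gap validation is meant to close.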


What Problems Does the Industry Face?

Two issues plague software validation in the medical device industry. The first is relying exclusively on a medical device manufacturer’s verification and mistaking it for validation, which can introduce more risk. Without end-user input, manufacturers can only verify that software works as it was programmed to work, not as it might actually be used. This leaves unidentified risks untested and unaddressed.

The second problem is erring on the side of caution and conducting far more validation than is necessary. Rather than risk leaving a hazard unaddressed, many see exhaustive testing of every conceivable risk factor as the better option.


How Can Device Manufacturers and Users Overcome These Issues?

Device manufacturers and software programmers should begin by heavily incorporating actual user input at the design stage and continue doing so through final delivery. The more closely software aligns with real use scenarios from live clients, the more closely verification and validation will mirror each other.

End-users should have an accurate grasp of how their devices and software are used before moving into the bidding stage. In larger healthcare systems, it isn’t uncommon for purchasing departments to rely more heavily on manufacturers’ sales descriptions (based on verification) than to dig into the nitty-gritty details of how their own healthcare professionals use a particular system (an absolute requirement for validation). Once these specific needs are identified, end-users can determine the level of risk a specific device faces based on its use, and that risk profile should drive final validation efforts.
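One way to picture risk driving validation effort is a simple scoring sketch. The factors, weights, and tiers below are illustrative assumptions, not FDA-defined categories:

```python
# Hypothetical sketch: a use-based risk profile driving validation depth.
# Factor names, the scoring formula, and the tiers are all assumptions
# made for illustration, not regulatory requirements.

def validation_tier(patient_impact: int, use_frequency: int,
                    workaround_available: bool) -> str:
    """Map use-based risk factors (each scored 1-3) to a validation tier."""
    score = patient_impact * use_frequency - (1 if workaround_available else 0)
    if score >= 6:
        return "full"      # scripted tests of every real-use scenario
    if score >= 3:
        return "targeted"  # test only the high-risk workflows
    return "minimal"       # vendor verification plus acceptance checks

# A bedside dosing module used constantly, with direct patient impact:
assert validation_tier(3, 3, workaround_available=False) == "full"
# A weekly reporting screen with a manual fallback:
assert validation_tier(1, 2, workaround_available=True) == "minimal"
```

The specific formula matters far less than the discipline it represents: validation effort scales with how a device is actually used, rather than defaulting to exhaustive testing of every conceivable scenario.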

When this process isn’t well understood, manufacturers and end-users alike err on the side of caution. Facilitated by vague FDA guidelines, validation processes often delve into every possible risk rather than only those that are realistic or plausible, driving delays, increasing costs, and requiring healthcare providers to operate with outdated software for longer than is necessary.

However, when both parties understand the distinctions between verification and validation, communicate effectively about actual use cases, and conduct validation based on realistic risk analyses, software can be appropriately created, accurately tested, and brought to market quickly.