In previous articles, we looked at reasons to rethink and simplify the approach to Computer System Validation (CSV). We highlighted a system-level 'macro risk' assessment that uses several key factors (including GAMP category and direct/indirect product impact) to determine an overall Risk Profile Score, followed by a GAMP functional risk assessment at the requirement and design levels to determine the testing approach for each requirement/design element.
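To make the 'macro risk' idea concrete, the sketch below combines two key factors into a Risk Profile Score. The factor scores and thresholds are illustrative assumptions for demonstration only, not a prescribed GAMP scoring scheme; your own procedure will define its own factors and weightings.

```python
# Illustrative sketch only: factor scores and thresholds are assumptions,
# not a prescribed GAMP scoring scheme.

# Example 'macro risk' factors for a system-level assessment.
FACTOR_SCORES = {
    "gamp_category": {"3": 1, "4": 2, "5": 3},   # standard -> configured -> custom
    "product_impact": {"indirect": 1, "direct": 3},
}

def risk_profile_score(gamp_category: str, product_impact: str) -> int:
    """Combine the key factors into a single Risk Profile Score."""
    return (FACTOR_SCORES["gamp_category"][gamp_category]
            + FACTOR_SCORES["product_impact"][product_impact])

def risk_profile(score: int) -> str:
    """Bucket the score into an overall profile (thresholds are illustrative)."""
    if score >= 5:
        return "High"
    if score >= 3:
        return "Medium"
    return "Low"

# A GAMP category 5 (custom) system with direct product impact
# scores highest and gets a 'High' profile.
print(risk_profile(risk_profile_score("5", "direct")))  # High
```

The overall profile can then drive the depth of downstream deliverables, such as whether a detailed design/configuration document is required.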
In this article, we take a more detailed look at the risk assessment process and at how working with your vendor supports critical thinking.
Take the example below of a requirement for a system to maintain all raw material master records.
A typical functional risk assessment would explicitly describe potential hazards, root cause of failure and mitigation controls.
Rather than explicitly documenting these attributes, use them during the process as aspects to test for when developing the requirement's test cases.
All this information can be combined and condensed into the single user/functional requirements document.
Note that:
This level of analysis on your requirements is best performed initially as part of requirements development / approval and can be reassessed closer to testing time to determine whether any risks have changed (see below).
Although the risk level is important (it can be used to justify a testing approach), the process of discussing the hazards, failure modes and controls is far more important, because it uncovers process/system knowledge for everyone involved.
The risk assessment itself is a ‘point in time’ assessment of the situation.
For example, take the test case in yellow above – "Ensure additions and changes to raw material definitions are approved by QA prior to release".
If you know the software solution does this as part of the base software, and access is enforced via a QA group (you just need to add names to that group), then the likelihood of failure is probably Low and the detectability is High – making the risk 'Low'.
If you know the software needs configuration to turn on the QA check, then the likelihood may be Medium – making the overall risk ‘Medium’.
If you know that the solution provider needs to write custom code to do this function, then the likelihood may be High – making the overall risk ‘High’.
As you are probably unsure of all this when approving the requirements, you would conservatively assess the risk as 'High'.
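The three scenarios above follow a likelihood × detectability matrix. A minimal sketch of that logic is below; the exact matrix cells are an assumption (your risk procedure defines its own), but the three vendor scenarios from the text map onto it directly.

```python
# Minimal sketch of a likelihood x detectability risk matrix.
# The exact cell values are assumptions; adjust to your own risk procedure.
RISK_MATRIX = {
    # (likelihood, detectability) -> risk level
    ("Low",    "High"):   "Low",
    ("Low",    "Medium"): "Low",
    ("Low",    "Low"):    "Medium",
    ("Medium", "High"):   "Medium",
    ("Medium", "Medium"): "Medium",
    ("Medium", "Low"):    "High",
    ("High",   "High"):   "High",
    ("High",   "Medium"): "High",
    ("High",   "Low"):    "High",
}

def requirement_risk(likelihood: str, detectability: str) -> str:
    """Look up the risk level for one requirement's test aspect."""
    return RISK_MATRIX[(likelihood, detectability)]

# The three vendor scenarios from the text:
print(requirement_risk("Low", "High"))     # base software feature  -> Low
print(requirement_risk("Medium", "High"))  # configuration required -> Medium
print(requirement_risk("High", "High"))    # custom code required   -> High
```

Because the assessment is 'point in time', re-running the same lookup with updated vendor knowledge is all that is needed to revise a risk level later.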
By working with your vendor (if known) while developing/approving the requirements, you can perform a more knowledgeable risk assessment; but as we can see below, vendor information coming to hand at any time can change previously assessed risk levels.
By vendor knowledge, I mean their documentation (manuals and software development/testing documentation) and their thoughts on using the software to meet your requirements.
Depending upon the overall Risk Profile Score, a detailed design/configuration document may be required. The purpose of this document is to map each functional requirement to how the software fulfils it.
This may consist of both existing vendor documents and configuration documents for your specific instance. As before, although these documents are valuable for ongoing support, the process of the customer learning more about the software and the vendor learning more about the actual requirements is far more valuable in the longer term.
To provide process traceability, both the requirement reference and the risk level can be inherited in the design documentation at the relevant sections.
No additional separate Requirements Traceability Matrix is required.
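As a sketch of this inheritance, each design section can carry its requirement reference and risk level directly, so traceability is recoverable by a simple filter rather than a separate matrix. The section names and requirement identifiers below (e.g. "URS-012") are hypothetical.

```python
# Hypothetical example: design sections inherit the requirement reference
# and risk level, so no separate traceability matrix is needed.
# Section names and requirement IDs are illustrative only.
design_sections = [
    {"section": "3.1 Material master setup", "req_ref": "URS-012", "risk": "High"},
    {"section": "3.2 QA approval workflow",  "req_ref": "URS-013", "risk": "High"},
    {"section": "4.1 Report layouts",        "req_ref": "URS-021", "risk": "Low"},
]

def trace(req_ref: str, sections: list) -> list:
    """Recover traceability by filtering design sections on the requirement reference."""
    return [s["section"] for s in sections if s["req_ref"] == req_ref]

print(trace("URS-012", design_sections))  # ['3.1 Material master setup']
```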
A similar set of design test cases can be developed within the design documentation or simply reviewed if already performed by the vendor.
As part of a design review, justification for changing the design risk level based on product knowledge, pre-existing testing or project testing can filter back to the original risk assessment – justifying less formal testing in the Performance Qualification / User Acceptance Testing.
In a previous article we suggested that only high risk functions required a test script and/or some level of documented evidence and medium/low requirements could be justified with a noted pass/fail only.
The aim is not to explicitly repeat tests at multiple levels of testing – although test actions will implicitly be repeated, as steps must be completed in order to perform other steps in a process.
In a way, Computer Software Assurance (CSA) is about trusting the qualification and expertise of key project personnel without having to justify every step they take in the computerised system development process.
This needs to be justified with valid reasoning ('critical thinking'), but when correctly implemented it can greatly reduce the burden that CSV has created for many companies.
Note that this is not a change to what GAMP has been saying within its guides – merely a refocus of effort. In fact, there is a GAMP CSA guide due for release later this year.
SeerPharma has successfully used CSA and is helping companies move towards this less burdensome assurance approach.
Contact us if you would like to know more about CSA, and how these proposed changes may affect your organisation.