Chapter 16

Computer Validation in a Regulatory Environment

Ann M. Speaker¹ and Sharon M. McKilligin²

¹Covance Clinical Research Unit, 309 West Washington, Madison, WI 53703
²Covance Laboratories, Inc., 3301 Kinsman Boulevard, Madison, WI 53704

© 2002 American Chemical Society

The US Food and Drug Administration (FDA) defines validation as follows: "Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specifications and quality attributes" (1).

Thus, computer system/software "validation" may be regarded as a series of quality management steps taken to determine whether computer systems, either developed 'in-house' or supplied by a vendor, are able to meet the demands placed on them in terms of functionality, and whether they will continue to operate consistently and reliably in a production environment. Computer systems requiring formal validation are divided into two broad categories, 'compliance-critical' and 'business-critical':

• Compliance-critical systems are those which generate or manipulate data incorporated in Clinical Study Reports, Case Report Forms, or any other form of 'raw' data. Such systems must be validated in order to fulfill the regulatory requirement for "Clinical trial data (which) are credible" (2). Failure to perform adequate validation may result in regulatory non-acceptance and a lack of confidence in the data generated. In a regulatory setting, effective validation ensures the integrity of data from source to report/submission.

• Business-critical systems are those which are essential for the continued smooth operation of a clinical research unit; they may include databases, network operating systems and/or specialized software applications which support business activities.

Computer systems that are considered not to require formal validation should still be subject to User Acceptance Testing. The user should test the functionality of the system to ensure that it meets the needs of the user and the user department. Acceptability of the system should be documented and the system released, as for formally validated systems.

The benefits of software validation include, but are not limited to, assuring product quality for software-automated operations, lessening risk to users, decreasing failure rates and reducing liability. Software validation can increase the usability and reliability of a system. Software validation includes all of the verification and testing activities conducted throughout the Software Development Life Cycle (SDLC). Proper validation of software includes the planning, execution, analysis and documentation of appropriate validation activities and tasks. It should begin when design planning and design input begin, and continue until the software product is no longer used. Activities in a typical software life cycle include (a code sketch of these phases follows the list):

• Management
• Requirements
• Design
• Implementation
• Integration and Test
• Installation
• Operation and Support
• Maintenance
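For illustration only (the chapter prescribes no tooling), these lifecycle activities could be represented in code as an ordered enumeration, for example to track a request through the SDLC; all names below are assumptions, not part of any cited standard:

```python
from enum import Enum, auto
from typing import Optional


class SDLCPhase(Enum):
    """The lifecycle activities listed above, in order."""
    MANAGEMENT = auto()
    REQUIREMENTS = auto()
    DESIGN = auto()
    IMPLEMENTATION = auto()
    INTEGRATION_AND_TEST = auto()
    INSTALLATION = auto()
    OPERATION_AND_SUPPORT = auto()
    MAINTENANCE = auto()


def next_phase(current: SDLCPhase) -> Optional[SDLCPhase]:
    """Assumed strictly sequential progression; real projects may iterate."""
    phases = list(SDLCPhase)
    i = phases.index(current)
    return phases[i + 1] if i + 1 < len(phases) else None


print(next_phase(SDLCPhase.DESIGN))  # SDLCPhase.IMPLEMENTATION
```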

There are several distinct phases of the SDLC, which will be explained in more detail below; however, the basic foundation upon which the entire validation process is based is the existence of pre-determined and documented software requirement specifications. The correctness and completeness of the system requirements should be addressed as part of the process, demonstrating in the end that all of the software requirements have been met, and that all software requirements are traceable to the system requirements.


Initiation Phase

The initiation phase of a project begins when a request is made for a new automated function, or when a software change is needed to correct a problem or to enhance a software function. The request must be documented by the end user and submitted to the Information Technologies (IT) Department. New software development or purchase, enhancements and major defect corrections are initiated through the use of a Software Service Request (SSR). The SSR should identify all basic functions required by the user (e.g., primary inputs, calculations and reports). The SSR initiates the tracking process and is used to monitor the phases of the SDLC. A risk analysis is prepared to identify the regulatory impact (e.g., data integrity, security), and an assessment of the business risk (e.g., reliance on the system, protection of assets) is conducted and documented (see the sketch below). This analysis shall take into account:

• Vendor history
• Purpose of the application
• Amount of data collected by the system
• Effect on current procedures, systems and applications
• Cross-departmental use

The SSR is the initial formal request for software development or software modification, and it initiates the concept phase. Management then weighs the business risks and decides whether or not to authorize development or procurement.
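For illustration, a minimal sketch of how such a risk analysis might be captured as a structured record. The field names, the three-level scale and the worst-factor rule are assumptions made for the example, not a format mandated by the SSR procedure:

```python
from dataclasses import dataclass
from enum import Enum


class Risk(Enum):
    """Assumed three-level scale; a real program may use a finer one."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class RiskAnalysis:
    """Hypothetical record mirroring the factors listed above."""
    ssr_id: str               # identifier of the originating SSR
    vendor_history: Risk      # track record of the vendor
    application_purpose: str  # what the application is used for
    data_volume: Risk         # amount of data collected by the system
    procedural_impact: Risk   # effect on current procedures and systems
    cross_departmental: bool  # used by more than one department?

    def overall_risk(self) -> Risk:
        """Assumed rule: overall risk is the worst individual factor."""
        worst = max(self.vendor_history.value,
                    self.data_volume.value,
                    self.procedural_impact.value)
        return Risk(worst)


analysis = RiskAnalysis("SSR-0042", Risk.MEDIUM, "sample tracking",
                        Risk.HIGH, Risk.LOW, cross_departmental=True)
print(analysis.overall_risk())  # Risk.HIGH under the assumed rule
```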

Requirements Phase

The end-user requirements are documented in a Software Requirements Specification (SRS). This document should, when applicable, contain the following:

• An introduction section outlining the purpose, scope, definitions, acronyms and abbreviations, references, and an overview
• A general description of the product perspective and product functions, including but not limited to:
  • Regulatory Policies
  • Hardware Limitations
  • Interfaces to other Applications
  • Parallel Operation
  • Audit Function
  • Control Functions
  • Higher-Order Language Requirements
  • Communication Protocols
  • Criticality of the application
  • Safety and Security Considerations
• All inputs and outputs of the system
• Performance requirements (e.g., data throughput, reliability, timing)
• What constitutes an error and how errors should be handled

The SRS should be reviewed for adequacy, technical feasibility and completeness, then submitted to the Quality Assurance Unit (QAU) for review. A project management plan is developed in the Initiation Phase and should identify the specific tasks involved in the project and the timeline for completion.

Design Phase

In the Design Phase, software requirements must be translated into a logical and physical representation of the software to be implemented. The SRS is used by the programming staff to develop a Software Design Description (SDD) and, where applicable, a series of prototypes. The SDD describes data structures, information flow, control logic, parameters to be measured or recorded, error and alarm measures, security measures and predetermined criteria for acceptance. Adherence to internal programming standards ensures consistency, and technical adequacy is evaluated following formal interface, source code and database reviews.


At the end of design activity, a formal design review should be conducted to verify that the design is correct, consistent, complete, accurate and testable.


Development Phase

The next phase is the Development Phase, in which detailed design specifications are implemented as source code. Code comments should provide useful and descriptive information for a module, including expected inputs and outputs, variables referenced, expected data types and operations to be performed. Source code should be evaluated to verify its compliance with the corresponding detailed design specifications. Source code evaluations are often implemented as code inspections and code walkthroughs. Appropriate documentation of the performance of source code evaluations should be maintained as part of the validation information.

Documentation is crucial in the Development Phase and includes all end-user manuals, the unit testing summary report, a user acceptance test plan, the results of the database design, the results of the source code review, and a traceability analysis. A source code traceability analysis is used to verify that all code is linked to established specifications and established test procedures. A source code traceability analysis should be conducted and documented (a sketch of such a check follows this list) to:

• Verify that each element of the software design specification has been implemented in code;
• Verify that modules and functions implemented in code can be traced back to an element in the software design specification;
• Verify that tests for modules and functions can be traced back to an element in the software design specification; and
• Verify that tests for modules and functions can be traced back to source code for the same modules and functions.
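For illustration, the verifications above could be partially automated over simple design-to-code and test-to-code mappings. The identifiers, module names and data layout are invented for the sketch:

```python
# Hypothetical traceability data: design elements, the code modules that
# implement them, and the tests that exercise those modules.
design_elements = {"DE-1", "DE-2", "DE-3"}
code_to_design = {            # module -> design element it implements
    "calc_dose.py": "DE-1",
    "audit_trail.py": "DE-2",
    "report_gen.py": "DE-3",
}
test_to_code = {              # test -> module it exercises
    "test_calc_dose": "calc_dose.py",
    "test_audit_trail": "audit_trail.py",
}

# 1. Every design element is implemented in code.
unimplemented = design_elements - set(code_to_design.values())

# 2. Every module traces back to a design element.
untraced_code = {m for m, de in code_to_design.items()
                 if de not in design_elements}

# 3./4. Every module's tests trace to a design element and to source code;
#       here we flag modules with no test at all.
untested = set(code_to_design) - set(test_to_code.values())

for label, gaps in [("Unimplemented design elements", unimplemented),
                    ("Code without a design element", untraced_code),
                    ("Modules without tests", untested)]:
    print(f"{label}: {sorted(gaps) or 'none'}")
```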

Test Phase

The next, and very critical, phase is the Test Phase. Software testing objectives include demonstrating compliance with all software specifications, and producing evidence which provides confidence that defects which may lead to errors or problems have been identified and removed.


A software testing strategy designed to find software defects will produce far different results from a strategy designed to prove that the software works; a complete software testing program uses both strategies. Test plans are created during the prior Development Phase and should include a description of all tests to be run, the purpose of each test, the data sets to be used, identification of each input, and the expected output. The items included in the testing should provide a thorough method for evaluation of the following elements:

• System security
• Data integrity (storage, retrieval, audit trail)
• Measurement accuracy and reproducibility
• Calculation accuracy and reproducibility
• Stress testing by identifying factors that may cause a system failure (e.g., boundary limits, negative values, inappropriate characters)
• Completeness and utility of reporting formats
• Traceability

Test plans should identify the necessary levels and extent of testing, as well as clear, pre-determined acceptance criteria. The test plan should also include detailed instructions for testing, the test environment, the data to be used and the specific criteria for acceptance. Test results should be documented to allow objective pass/fail decisions to be reached (a minimal sketch of such a record follows below). Errors detected during testing should be logged, classified, reviewed and resolved prior to release of the software. The testing must be conducted within a simulated production environment, or an environment identical to that which will be used in production. A test log is used to record the actual results of the testing; a test report is then generated to present the results of the tests performed and described in the plan, with conclusions regarding the success of each test based on the criteria specified. The release of software for testing and for use in a production environment is done under configuration management by the system administrator; configuration management ensures strict version control of the software.
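For illustration, a minimal sketch of a test log entry driven by a pre-determined expected output, allowing an objective pass/fail decision. The record fields and values are assumptions, not a prescribed format:

```python
from dataclasses import dataclass


@dataclass
class TestCase:
    """Hypothetical test-plan entry with pre-determined acceptance criteria."""
    test_id: str
    purpose: str
    input_data: dict
    expected_output: float   # pre-determined before execution


def execute_and_log(case: TestCase, actual_output: float) -> dict:
    """Record actual vs. expected and reach an objective pass/fail decision."""
    passed = actual_output == case.expected_output
    return {
        "test_id": case.test_id,
        "expected": case.expected_output,
        "actual": actual_output,
        "result": "PASS" if passed else "FAIL",  # logged, reviewed, resolved
    }


case = TestCase("TC-007", "dose calculation accuracy",
                {"weight_kg": 70, "dose_mg_per_kg": 2}, expected_output=140.0)
print(execute_and_log(case, actual_output=70 * 2.0))
```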

Installation Phase

Prior to placing a system in production, the user acceptance test plan and test report must be completed, reviewed, signed off and archived. The corresponding SOPs and user manuals must be complete; staff must be trained in the use of the software, with that training documented; and a change control log must be created and maintained thereafter. A qualification plan is written, and the appropriate workstations are qualified, before the software is placed into production. When all users have been trained and notified of the production release date, IT staff place the system into production.


Operations Phase

All modifications, enhancements or additions to existing software or its operating environment are design changes and are subject to design control provisions. The validation activities associated with each software change should be documented as part of the record of that change. All ongoing changes must be tracked for the administration and control of validated systems; the documentation involved includes error logs, change control logs, qualification logs and activity logs (a sketch of one such log entry follows). The purpose of the Software Development Life Cycle is to maintain software in a validated state.
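For illustration, a change control log entry could be captured as a simple append-only record; the file name and fields below are assumptions rather than a prescribed format:

```python
import csv
from datetime import date

# Hypothetical change control log: one append-only row per design change,
# linking the change to its validation record.
FIELDS = ["change_id", "date", "system", "description",
          "validation_record", "approved_by"]


def log_change(path: str, entry: dict) -> None:
    """Append a change record; the file acts as the change control log."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:          # write the header once, on first use
            writer.writeheader()
        writer.writerow(entry)


log_change("change_control_log.csv", {
    "change_id": "CC-2002-015",
    "date": date(2002, 8, 1).isoformat(),
    "system": "LIMS v3.2",
    "description": "Enhanced audit trail reporting",
    "validation_record": "VAL-118",     # ties the change to its validation
    "approved_by": "system administrator",
})
```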

References

1. FDA Glossary, 1995.
2. ICH Good Clinical Practices, Section 1, 1997.
