Chapter 11

User Testing Strategy for Millennium32 Chromatography Software Validation


Timothy J. Stachoviak
Information Technology, Covance Laboratories, Inc., Mail Code 31, 3301 Kinsman Boulevard, Madison, WI 53704

The complexity of chromatography acquisition and processing software makes exhaustive testing impractical. Proving the system meets the user-specified requirements is a manageable alternative to testing every combination of features of the application. Necessary data collection, processing, and presentation features can be extensively tested while instrument control, spectra libraries, and other unused features can be omitted.

User acceptance testing of scientific software is most frequently performed for the purpose of satisfying a regulatory mandate. A more laudable and practical motive is assurance that the results produced by the system are valid. Moreover, the testing can identify situations where the software does not meet the needs of the user. The testing can take the form of a time-consuming, exhaustive trial of each software feature. A viable alternative is to selectively test the features that are needed in the installation. Selective testing requires the users and testers to correlate the software features with laboratory functions and requirements. Choices must also be made on how to test the selected features. One approach is to set up cases where the system would be expected to fail because a parameter is specified outside of the application limitations. This approach requires a significant amount of planning. In addition, it is redundant to the unit testing performed by the software developers.


A more reasonable approach is to use data and parameters that match the actual workings of the laboratory. The immediate benefit is that on the completion of testing the system will have been shown to work in the intended environment. The disadvantage that must be kept in mind is that as the environment changes one cannot assume that the system is going to work as expected. Additional testing may be required as new needs are identified.

User acceptance testing is only one of several types of testing in the system validation process. There is testing done by software and hardware developers, testing that software is installed correctly, testing that hardware is functioning, and testing that other computer components (e.g., printers, databases, networks, workstations, laboratory equipment) are performing as expected. User acceptance testing should attempt not to duplicate these efforts. Especially in the case of unit testing by the developers, incorporating the results of other testing will save time and effort.

Testing must be completed before the system can be used in a regulated environment, but additional benefits are available if the testing is started before a commitment is made to a particular system. The system requirements will necessarily be defined early in the process. The requirements will be refined further as systems are less formally tested. The result should be more objective information to justify the final system selection.
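
Correlating software features with laboratory functions, as described above, amounts to keeping a simple mapping from SRS requirements to the features that satisfy them; features with no requirement behind them drop out of the test scope. The sketch below illustrates the idea in Python with invented requirement and feature names; it is not drawn from the actual SRS.

REQUIREMENTS = {
    "SRS-SEC-01": "Restrict project access to authorized analysts",
    "SRS-ACQ-03": "Acquire chromatograms from existing detectors",
    "SRS-RPT-02": "Export result tables in ASCII format",
}

# Which SRS items, if any, each software feature supports in this laboratory.
FEATURE_TO_REQUIREMENTS = {
    "user_privileges":    ["SRS-SEC-01"],
    "data_acquisition":   ["SRS-ACQ-03"],
    "result_export":      ["SRS-RPT-02"],
    "spectra_libraries":  [],   # not required here, so not tested
    "instrument_control": [],   # not required here, so not tested
}

# Only features tied to at least one requirement enter the user acceptance
# test plan; the rest are documented as out of scope.
in_scope = [f for f, reqs in FEATURE_TO_REQUIREMENTS.items() if reqs]
out_of_scope = [f for f, reqs in FEATURE_TO_REQUIREMENTS.items() if not reqs]
print("In scope:", in_scope)
print("Out of scope:", out_of_scope)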

Documents

A chromatography data acquisition system should provide the basic functions of system security, acquisition configuration, chromatography acquisition, chromatography processing, and result reporting. The system requirement specification (SRS) is a listing of the details of these basic functions. The users of the software provide the requirements to the administrator of the system, who incorporates them into the SRS. The SRS is the key document in successful User Acceptance Testing. The SRS provides a framework on which to build the test plan. If developed early in the process, the SRS can be used as a system "shopping list." The SRS should be independent of the system being tested.

The user acceptance test plan is a specification of what will be done. It will specify necessary setup conditions and any required input data. The plan contains the test log, which gives instructions for the tester to follow, defines expected outcomes, and provides for the tester to record actual outcomes and observations.
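
One way to picture the test log described above is as a record with fields for setup, tester instructions, the expected outcome, and space for the actual outcome and observations, each entry tied back to the SRS items it verifies. The Python sketch below is only illustrative; the field names and identifiers are assumptions rather than the layout of any particular test plan.

from dataclasses import dataclass
from typing import List

@dataclass
class TestLogEntry:
    test_id: str
    srs_items: List[str]        # SRS requirements verified by this test
    setup: str                  # required setup conditions and input data
    instructions: str           # steps for the tester to follow
    expected_outcome: str
    actual_outcome: str = ""    # completed by the tester during execution
    observations: str = ""
    passed: bool = False

# Hypothetical entry for an export test.
entry = TestLogEntry(
    test_id="UAT-RPT-007",
    srs_items=["SRS-RPT-02"],
    setup="Validation project loaded with reference chromatogram set A",
    instructions="Export the result table for sample 12 in ASCII format",
    expected_outcome="Export completes with one row per peak and no stray line breaks",
)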


The plan must be rewritten to address the specific system being tested. The user acceptance test report addresses test failures and deviations from the plan. The administrator assesses the impact of each item, makes recommendations for use, and determines what future action is necessary. It is acceptable and not uncommon to have test failures; the system can still be used. The other key component of the report is a table listing every item in the SRS with a reference to where it was tested.
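
Because the report must list every SRS item with a reference to where it was tested, a small consistency check can confirm that no requirement was left untraced. The Python sketch below assumes invented identifiers; in practice the two collections would come from the SRS and the executed test log.

srs_items = {"SRS-SEC-01", "SRS-ACQ-03", "SRS-RPT-02"}

# Traceability table: each executed test and the SRS items it verified.
traceability = {
    "UAT-SEC-001": ["SRS-SEC-01"],
    "UAT-ACQ-004": ["SRS-ACQ-03"],
    "UAT-RPT-007": ["SRS-RPT-02"],
}

covered = {item for items in traceability.values() for item in items}
untested = srs_items - covered
if untested:
    print("SRS items with no test reference:", sorted(untested))
else:
    print("Every SRS item is traced to at least one test.")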


Actual Implementation

Environment: Four types of laboratories in a Contract Research Organization (CRO) regulated by FDA 21 CFR 58 (GLP) and 21 CFR 11 (Electronic Records and Signatures) and by ISO 9000. Existing Chromatography Systems: Microsoft DOS and 16-bit MS Windows applications, including Millennium 2020.

Laboratory, quality assurance, and information technology users provided requirements to the Chromatography Data Acquisition System (DAS) team. The team generated a System Requirement Specification (SRS) and solicited proposals from vendors. Millennium32 was chosen before User Acceptance Testing was begun. The Application Administrator (AA) performed a vendor audit and obtained unit test results. The AA prepared a test plan based on the requirements of the SRS, but omitting the features tested by the vendor. The test plan comprised modules that tested the basic functions of security, configuration, acquisition, and processing and reporting by simulation of laboratory operations. Additional modules were needed to specifically test security and privilege assignment because they are not encountered in day-to-day laboratory operation. A final module acquired data in parallel with the existing data acquisition systems to prove the results were the same. The traceability matrix was created to document that each item of the SRS was verified in a validation test. Information technology personnel executed the tests in a validation environment designed to simulate the production environment as accurately as possible. Execution of the testing was recorded by Lotus ScreenCam and saved as executable files that can be played back to review the workstation display of the actual test.
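
The parallel-acquisition module rests on a straightforward comparison: the same samples are run on the existing system and on the new one, and the reported results are checked against each other within an agreed tolerance. The sketch below is a hypothetical version of such a check; the CSV layout, column names, and 0.5% tolerance are assumptions, not details of the actual validation.

import csv

TOLERANCE = 0.005  # relative difference allowed between the two systems

def load_areas(path):
    """Read peak areas keyed by (sample, peak) from a CSV export."""
    with open(path, newline="") as handle:
        return {(row["sample"], row["peak"]): float(row["area"])
                for row in csv.DictReader(handle)}

def compare(existing_path, candidate_path):
    """Return peaks whose areas disagree beyond the tolerance, or are missing."""
    existing = load_areas(existing_path)
    candidate = load_areas(candidate_path)
    mismatches = []
    for key, ref in existing.items():
        new = candidate.get(key)
        if new is None or abs(new - ref) > TOLERANCE * abs(ref):
            mismatches.append((key, ref, new))
    return mismatches

# Example usage with illustrative file names:
# print(compare("existing_das_results.csv", "new_system_results.csv"))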


Evaluation

The first success of the plan was to identify a memory leak in Millennium32 version 3.0. The decision was made to delay implementation until the leak was fixed. Testing on Millennium32, version 3.05, detected additional program errors. ASCII-formatted result table exports contained unexpected line breaks. Some user privileges that appeared to be independently settable depended on the setting of other privileges. Peak-fitting algorithms projected a higher than actual peak height for square waves. The Copy-to-Project dialog allowed users to see project names to which they did not have privilege.

Some deficiencies were discovered after release to production. There was no way to assign a read-only privilege to view sample histories for auditors. It was possible to contrive to save a quantitated peak without saving the calibration that was used for the quantitation. When assigning users to groups, the system would incorrectly match to a longer name that had the same initial substring. In some cases, curve statistics were not replaced when curves were recalculated. In some cases, updating custom calculation values required reprocessing of the chromatograms. Manually identifying an internal standard peak could lock up the workstation. The converter for restoration of version 2 projects failed on a few projects.

The evaluation of the success of the validation approach must answer the question of whether another approach would have caught these bugs without missing others. The answer is most likely that no one would have had the insight to test for these cases. Most of them require a precise sequence of events to be followed to create the error. Others rely on the chance selection of the input data, so nothing short of infinite testing on all data would guarantee success. A more pragmatic measure of success is that several client audits of the Millennium32 validation have found it sound.
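
Several of these defects are the kind that a short, targeted regression test can pin down once the failing sequence is known. For example, the group-assignment error described above, where a name matched a longer name sharing the same initial substring, can be expressed as the difference between a prefix match and an exact match. The Python sketch below is purely illustrative and does not represent the vendor's code.

import unittest

USERS = ["jsmithson", "jsmith"]

def find_user_buggy(name):
    # Mimics the observed defect: returns the first user whose name merely
    # starts with the query, so "jsmith" can resolve to "jsmithson".
    return next((u for u in USERS if u.startswith(name)), None)

def find_user_fixed(name):
    # Exact match only, which is what group assignment requires.
    return next((u for u in USERS if u == name), None)

class GroupAssignmentTest(unittest.TestCase):
    def test_exact_name_is_resolved(self):
        self.assertEqual(find_user_fixed("jsmith"), "jsmith")

    def test_query_must_not_match_longer_name(self):
        # Fails against the buggy lookup, reproducing the reported error.
        self.assertNotEqual(find_user_buggy("jsmith"), "jsmithson")

if __name__ == "__main__":
    unittest.main()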


Garner et al.; Capturing and Reporting Electronic Data; ACS Symposium Series; American Chemical Society: Washington, DC, 2002.