Clinical Informatics

Steven C. Kazmierczak* and Paul G. Catrou

Department of Pathology and Laboratory Medicine, East Carolina University School of Medicine, Greenville, North Carolina 27858

The term clinical informatics covers a broad range of topics related to the management of information to support patient care, medical research, and education. More specific to clinical chemistry, the term can be used to express the acquisition, collation, evaluation, and interpretation of clinical chemistry data. In this issue, our intent is to expand on one of these topics: the evaluation of clinical laboratory tests in terms of their clinical diagnostic utility. Our focus is to introduce the reader to the various approaches that may be employed in evaluating the diagnostic utility of clinical laboratory data. For those individuals desiring a more in-depth treatment of this subject matter, we have cited literature to which the reader can refer.

For years, clinicians have sought laboratory tests that help classify patients into diagnostic disease categories, the purpose of which is to help guide therapy and estimate prognosis. But this classification process is not simple. Signs and symptoms are usually not disease specific, patients may have more than one disease, and results of laboratory tests from healthy and diseased individuals often overlap. Other confounding factors include disease stage or severity and changing disease definitions, the latter due to our increasing knowledge and understanding of disease processes. The reader is referred to Weinstein (J1) for a textbook treatment and to Linnet (J2) for an earlier review of our topic.

For purposes of this review, we divide methodologies into the following categories: probabilistic, regression and discriminant analysis, expert systems, and neural networks. Although all of these methods have been used to assess a test's diagnostic ability, it appears that no one method provides a standard index that can be adopted by all researchers. The current status of information theory as applied to the diagnostic process will also be covered, since the concept is very important in providing cost-effective, quality patient care.

PROBABILISTIC APPROACHES

For several years the 2 by 2 contingency table, along with the derived values for sensitivity, specificity, and predictive value, has been used to describe a test's diagnostic ability. However, these simple tools have not been found to provide a comprehensive and accurate measure of a test's diagnostic ability. These measures are affected by disease prevalence, decision criterion, and spectrum bias. The effect of disease prevalence has been addressed by several authors and recently in terms of the information content of a test (J3). The problem of selecting an appropriate decision criterion, or cutoff point, has been addressed by the application of the differential positive rate (J4) and receiver operator characteristic (ROC) curves (J5). A further refinement of this latter technique has been the addition of the calculation of confidence bounds for these curves (J6, J7). Other authors have described techniques to estimate an ROC curve based on only one pair of sensitivity and specificity values (J8). These authors also discuss the impact of "verification bias" on ROC curves. Additionally, ROC analysis has been integrated with information theory to allow the comparison of a specific test's performance with that of an optimal or perfect test (J9).
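To make the ROC construction concrete, the following sketch (ours, not drawn from the cited works; the analyte values and disease labels are invented) sweeps the decision criterion across a continuous test result, computing sensitivity and the false-positive rate at each cutoff and approximating the area under the resulting curve.

```python
# Illustrative sketch: tracing an ROC curve by sweeping the cutoff point.
import numpy as np

analyte = np.array([2.1, 3.4, 3.9, 4.2, 4.8, 5.1, 5.6, 6.0, 6.8, 7.5])
disease = np.array([0, 0, 0, 1, 0, 1, 0, 1, 1, 1])  # 1 = diseased (invented labels)

points = [(0.0, 0.0)]  # cutoff above the highest value: nothing called positive
for cutoff in np.unique(analyte):
    positive = analyte >= cutoff             # call the test positive at this cutoff
    tp = np.sum(positive & (disease == 1))
    fp = np.sum(positive & (disease == 0))
    sensitivity = tp / np.sum(disease == 1)  # true-positive rate
    fpr = fp / np.sum(disease == 0)          # false-positive rate = 1 - specificity
    points.append((fpr, sensitivity))

points.sort()                                # order by false-positive rate
fprs, tprs = zip(*points)
auc = np.trapz(tprs, fprs)                   # trapezoidal area under the curve
print(f"AUC ~ {auc:.2f}")
```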

Finally, variation in estimates of a test's sensitivity, specificity, and predictive value may be due to "spectrum bias". This phenomenon is related to the makeup of the sample population used to derive these indexes. The selected sample may not be truly representative of the parent population in terms of disease prevalence and severity. Likelihood ratios have been proposed as an index that can readily be applied in day-to-day clinical decision making (J10, J11). Recent work using stratified likelihood ratios in combination with ROC analysis may improve the utility of likelihood ratios by taking the magnitude of an analyte change into consideration (J12).
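The standard indexes, and the dependence of predictive value on prevalence, can be summarized in a few lines of code. The sketch below (with invented counts) derives sensitivity, specificity, and predictive value from a 2 by 2 table and then uses the likelihood ratio to carry a pretest probability to a posttest probability through the odds form of Bayes' theorem.

```python
# Illustrative sketch: indexes derived from a 2 x 2 contingency table.
# Sensitivity and specificity are prevalence-independent; predictive value is not.
def test_indices(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)            # predictive value of a positive test
    lr_pos = sens / (1 - spec)      # likelihood ratio for a positive result
    lr_neg = (1 - sens) / spec      # likelihood ratio for a negative result
    return sens, spec, ppv, lr_pos, lr_neg

def posttest_probability(pretest_p, lr):
    # Posttest odds = pretest odds x likelihood ratio (Bayes' theorem in odds form)
    pretest_odds = pretest_p / (1 - pretest_p)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

sens, spec, ppv, lr_pos, lr_neg = test_indices(tp=90, fp=20, fn=10, tn=180)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} LR+={lr_pos:.1f}")
# The same test applied where disease prevalence is only 1%:
print(f"posttest probability at 1% prevalence: {posttest_probability(0.01, lr_pos):.2f}")
```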

REGRESSION AND DISCRIMINANT ANALYSIS

The techniques of linear regression, logistic regression, and discriminant analysis, as well as their multivariate (multiple-test) counterparts, have been used to study tests. Textbook treatments of these topics are readily available (J13, J14). Linear regression describes the relationship between two variables expressed on ratio scales. But typically, patients are classified as having, or not having, the disease of interest. Hence the dependent, target, or outcome variable is expressed by use of a nominal scale. To apply the technique of linear regression, some researchers have chosen to assign a value of 0 for no disease and a value of 1 to indicate presence of disease. However, other researchers have not adopted this technique, and few reports currently use it. When multiple tests are evaluated in parallel, the multivariate linear regression approach has been used. Finally, when the multivariate case was studied, stepwise regression techniques were used (J15, J16). These stepwise techniques attempt to evaluate the relative importance of each of the multiple laboratory tests.

As mentioned above, often the classification of patients is done using a nominal scale. The techniques of logistic regression are becoming more popular (J17, J18). These techniques overcome the ratio vs nominal problem by having the dependent variable expressed as the logit, i.e., the logarithm of the odds of an event (disease). The independent variables, or test results, are still expressed in ratio terms and are combined in a linear equation. Multiple logistic regression is used to handle multiple tests (J19). As with linear regression, stepwise techniques can be employed to estimate the relative importance of each of the multiple variables (J20).

In discriminant analysis techniques, variables are combined in a set of equations called discriminant functions. The results of these equations, discriminant scores, are used to classify the subjects into one of two or more groups. A relatively simple, lucid description of this technique is available (J17). The more mathematically inclined reader is referred elsewhere (J21). This technique has been applied to a variety of clinical situations (J22-J26). As with regression techniques, stepwise approaches have been used (J27-J29). Modifications of linear discriminant analysis, such as quadratic and logistic discriminant analysis, may be employed depending on the particular laboratory test under study (J30). Finally, further adaptations (J31) and controversy (J32, J33) exist pertaining to these techniques.
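As a minimal illustration of the logit formulation described above (synthetic data; not an analysis from any of the cited studies), the sketch below fits a single-analyte logistic regression by gradient ascent on the log-likelihood, yielding a linear equation for the log odds of disease.

```python
# Minimal sketch: logistic regression expresses the log odds (logit) of disease
# as a linear function of the test result. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(4.0, 1.0, 50),    # analyte values, healthy group
                    rng.normal(6.0, 1.0, 50)])   # analyte values, diseased group
y = np.concatenate([np.zeros(50), np.ones(50)])  # nominal outcome: 0 = no disease, 1 = disease

b0, b1, step = 0.0, 0.0, 0.1
for _ in range(5000):                            # gradient ascent on the log-likelihood
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))     # inverse logit: probability of disease
    b0 += step * np.mean(y - p)
    b1 += step * np.mean((y - p) * x)

print(f"logit(p) = {b0:.2f} + {b1:.2f} * x")
```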


EXPERT SYSTEMS

Expert systems, often called knowledge-based systems, are computer algorithms designed to solve problems in a manner similar to human reasoning (J34-J36). The key component of the expert system is the knowledge base. The knowledge base consists of all the facts pertinent to the domain to which the expert system is applied. These facts may be common knowledge and readily available in textbooks, or they may consist of the cause-and-effect type relationships observed by experts within the field following years of experience, and which are not easily discerned by the novice. This latter component of the knowledge base, referred to as heuristic knowledge, is the experiential knowledge that a skilled expert acquires over years of working within a particular field. This aspect of the thought process is often the most difficult for an expert to make explicit (J37). Categorization of heuristic knowledge, along with the complexity of knowledge base maintenance, which requires specialized "knowledge engineering" expertise, has hindered widespread application of expert systems (J38, J39). As a result, the application of expert systems in the area of laboratory test interpretation has generally been confined to relatively narrow test subsets (J34, J40-J42).

The representation of knowledge in an expert system may take one of two approaches. The most common approach used to represent knowledge in expert systems is the use of production rules. A production rule takes the form of an IF-THEN statement, where IF represents the premise and THEN represents the conclusion drawn from the premise. Examples of expert systems based on the production rule format include the MYCIN system for evaluation of infectious disease (J43) and ESPRE for platelet request evaluations (J44). Each conclusion drawn from a particular premise contains a certainty factor ranging from -1 (complete disbelief), through 0 (nothing certain known), to 1 (complete certainty). The strength of a particular hypothesis is the sum of the certainty factors for or against the hypothesis. Uncertain or "fuzzy" data may be handled using other approaches which have been described (J45, J46). Use of the production rule format is adequate for relatively small, well-defined bodies of knowledge. However, as a body of knowledge increases in complexity, production rules tend to become unmanageable (J47). The ability of the end user of an expert system to modify or expand the system represents a significant challenge (J42). Recently, novel knowledge acquisition techniques, which enable rapid and simple knowledge acquisition by the domain expert without the need for a knowledge engineer or use of programming skills, have been developed and applied to a system for the interpretation of chemical pathology reports (J38, J48).

A second approach used for knowledge representation in expert systems utilizes semantic networks or frames to represent knowledge. With this approach, patterns of cause-and-effect type relationships are stored within the system and connected by links representing the various relations and dependencies (J42, J45). A number of expert systems employing this type of knowledge representation have been developed (J49-J51). Newer systems that use causal pathophysiological knowledge to produce a causal explanation as output have also been developed (J52). Other systems have been devised which automatically update their knowledge base by remembering solutions to complex cases and then storing these particular cases for future reference (J53).
These systems thus bypass one of the current limitations of expert systems, which is the inability to recognize similarities between a new problem and a previously encountered one.
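A production-rule system with certainty factors can be sketched in a few lines. The rules, analytes, and thresholds below are invented for illustration (they are not taken from MYCIN or ESPRE), and the combination scheme follows the simple description above: the strength of a hypothesis is the sum of the certainty factors for and against it, clamped to the interval [-1, 1].

```python
# Illustrative sketch of the production-rule (IF-THEN) format.
# Each rule: IF premise(findings) THEN (hypothesis, certainty factor in [-1, 1]).
rules = [
    (lambda f: f["amylase"] > 300,  ("acute pancreatitis", 0.6)),   # invented rule
    (lambda f: f["lipase"] > 200,   ("acute pancreatitis", 0.5)),   # invented rule
    (lambda f: f["amylase"] <= 100, ("acute pancreatitis", -0.7)),  # evidence against
]

def evaluate(findings):
    # Sum the certainty factors for and against each hypothesis whose premise
    # fires, then clamp the total to [-1, 1].
    strength = {}
    for premise, (hypothesis, cf) in rules:
        if premise(findings):
            strength[hypothesis] = strength.get(hypothesis, 0.0) + cf
    return {h: max(-1.0, min(1.0, s)) for h, s in strength.items()}

# First two rules fire: 0.6 + 0.5 = 1.1, clamped to 1.0 (complete certainty)
print(evaluate({"amylase": 450, "lipase": 250}))
```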


NEURAL NETWORKS

Interest in the field of neural network computing has undergone tremendous growth in the past 10 years. However, the basic concepts of neural networks date back to the early 1940s (J54). Increased interest in this area during the past decade has been primarily due to the ready availability of neural network software and powerful computers and the invention of the back-propagation algorithm used in training neural network systems (J55, J56). The use of neural networks in the field of pathology and laboratory medicine has been a fairly recent phenomenon. Applications within the area of anatomic pathology include image analysis for the interpretation of tissue aspirates, tissue sections, and serologic reactions (J57-J60). Uses of neural networks for the interpretation of biochemical data include diagnosis of acute myocardial infarction (J61-J63), interpretation of serum protein electrophoresis results (J64), and interpretation of laboratory data for cancer diagnosis (J65). The use of neural networks for the performance of standard statistical tasks such as regression analysis and discriminant analysis has also been described, although the networks tend to suffer from the dangers of chance effects (J66). The explosion in medical information will undoubtedly continue, making it even more difficult for the physician of the future to be facile with all the facts necessary for making an informed diagnosis. Data analysis with the help of neural networks may become of greater importance for interpretation of medical data.

Two classes of neural networks are used today: back-propagation networks and the Hopfield-Boltzmann machine models. This review concentrates on the back-propagation models, since these are the most common. Several excellent reviews describe the Hopfield-Boltzmann machine models (J67, J68). The basic structure of the neural network is the processing unit or "neuron". The neuron receives input signals through weighted links, sums the weighted inputs, and then passes the resulting sum through an output function. In many networks the processing units are arranged in layers. Input signals (i.e., data) are fed into the first layer, producing some output which in turn is fed to the next layer of processing units to eventually produce an output signal. The "knowledge" of a neural network resides in the values of the weighted links assigned to the processing units.

The process most commonly employed for training neural networks is the back-propagation learning method, which has proven to be one of the most useful approaches for training these systems (J69). The back-propagation learning method is iterative. The network typically requires many repetitions of each input data set, from a large collection of input data sets, before the weights are properly adjusted (J70). The manner in which neural networks adjust their weighted interconnections during training can vary between the different network models. At the level of the individual processing unit, three possible output functions may be used: linear, step linear, and sigmoid functions (J67). Neural network models typically employ the nonlinear output function in order to benefit from the use of additional layers that can be employed when nonlinear output functions are used (J56, J71).
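The processing unit and the iterative weight adjustment can be made concrete with a minimal sketch (all numbers invented): a single sigmoid unit is trained by repeatedly presenting one input pattern and propagating the output error back into its weighted links.

```python
# Sketch of one sigmoid "neuron" and repeated back-propagation weight updates.
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

w = np.array([0.2, -0.4, 0.1])    # weighted links into the unit (invented)
x = np.array([0.5, 0.9, 0.3])     # input signals, e.g., scaled test results (invented)
target, step = 1.0, 0.5

for _ in range(100):              # iterative presentation of one training pattern
    out = sigmoid(w @ x)          # sum the weighted inputs, pass through output fn
    delta = (target - out) * out * (1 - out)  # error times sigmoid derivative
    w += step * delta * x         # propagate the error back into the weights

print(f"output after training: {sigmoid(w @ x):.3f}")  # approaches the target of 1.0
```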
In contrast to the supervised learning previously described, some neural network systems have been designed for unsupervised

learning. These self-organizing neural network systems employ learning schemes whereby no training samples need to be provided; the network adjusts its internal weight factors autonomously, without reference to an external teacher. The training process is driven only by the presentation of the input data and by the described learning rule (J72). These self-organizing neural networks are capable of performing tasks similar to classical cluster analysis or principal component analysis (J73, J74).

Neural networks are a unique form of artificial intelligence and, therefore, are subject to some special caveats. One drawback often cited by users of these systems is the difficulty in discerning the internal logic used for classification (J75, J76). Virtually all understanding comes from observations of input data and outputs from the system rather than from an analysis of how the inputs are processed. The effect of a particular variable on the output of the network is highly dependent on the values of the other variables. Variables that might be unimportant when used alone might become significant when incorporated within the network (J77). Attempts to analyze the internal logic of neural networks include pattern analysis of neuronal activity produced by a range of input patterns as a means for interpreting the relative importance of individual nodes in a network (J78). Unfortunately, this type of analysis is difficult and frequently does not yield meaningful results. Thus, while the high accuracy of neural network diagnosis in comparison to human expert diagnosis merits attention, the evaluation of these systems is essentially limited to assessing the quality of the training data and performance of the system during cross-validation (J75, J79). The use of ROC curve analysis as an unbiased measure of the accuracy of neural networks has also been described (J80).

The accuracy of neural networks is strongly influenced by a variety of issues related to network design and network training factors. With respect to network design, the choice of input variables and their representation, the number of hidden neurons, and the assigned values for learning parameters can all influence network performance (J81, J82). Techniques that have been employed to overcome some of these weaknesses include the evaluation of modified learning methods (J83, J84) and investigations into the effect of manipulating network inputs (J85). Overtraining of the network is another frequently encountered problem. Overtraining results in overspecialization of the network for the data present in the training set. This occurs because some networks reach a minimum in the cross-validation error before reaching a minimum in the training data. This problem has been reviewed (J83), and strategies have been developed to help overcome some of these problems (J86).
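One common guard against overtraining is early stopping: training is halted when the cross-validation error stops improving, even though the training error is still falling. The sketch below assumes the caller supplies the network weights as an array along with train_one_epoch and validation_error routines; these names are ours, not drawn from the cited reports.

```python
# Sketch of early stopping to limit overspecialization on the training set.
# Assumed interface: net is a numpy array of weights; train_one_epoch(net)
# updates it in place; validation_error(net) returns the cross-validation error.
def train_with_early_stopping(net, train_one_epoch, validation_error,
                              max_epochs=1000, patience=10):
    best_err, best_weights, stale = float("inf"), net.copy(), 0
    for epoch in range(max_epochs):
        train_one_epoch(net)
        err = validation_error(net)
        if err < best_err:
            best_err, best_weights, stale = err, net.copy(), 0
        else:
            stale += 1            # cross-validation error no longer improving
            if stale >= patience:
                break             # stop before the network overspecializes
    return best_weights, best_err
```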
INFORMATION THEORY

To measure the informational value of a diagnostic test, one must quantitate the uncertainty. The theoretical basis for the measurement of uncertainty, or entropy, was originally described by Shannon and Weaver (J87). The quantification of this uncertainty forms the basis of information theory. The development and application of diagnostic tests that exhibit increased sensitivity and specificity compared to those previously used should substantially increase our ability to more accurately diagnose disease. However, even after the application of these "new and improved" methods for disease diagnosis, a great deal of uncertainty still exists. The reasons for this have been attributed to various causes, including biased comparisons of test

results to the "gold standard" and inappropriate selection of patients with respect to disease severity (J88), and sampling errors due to small sample size affecting estimates of test parameters such as sensitivity and specificity (J89). Others have asserted that the fundamental problem may simply be the inability of tests to provide adequate information to overcome a priori uncertainty about the true state of the patient (J90). Information theory attempts to address this last issue. The mathematical basis for quantitation of information in diagnostic testing has been reviewed (J9, J91, J92), and computer applications have been published (J93).

The underlying principle of information theory is that uncertainty, measured before and after the performance of a diagnostic test, is represented as logarithmic information functions. The reduction in uncertainty following the completion of a diagnostic test represents the information content of the test. The unit used to measure the informational value of diagnostic tests is the bit. One bit of new diagnostic information doubles the probability of a given diagnosis. Thus, the number of bits of information provided by a test result compares the pretest probability of a disease (prevalence) with the posttest probability of the disease (predictive value) (J89). When expressed as bits, test information is additive. Thus a sequence of two tests yielding 3 and 2 bits of diagnostic information provides as much information as a single test yielding 5 bits of information (J91). Graphical examples of the additive properties of bits of diagnostic information have been published (J88).

The information content provided by diagnostic tests varies considerably. The information content is dependent upon the characteristics of the diagnostic test, the cutoff threshold chosen, and the pretest probability of the disorder (J94). One review, which analyzed the information content of diagnostic tests gathered from a wide range of studies published in the medical literature, found that the median information content of the tests described provided only 55% of the information required for diagnostic certainty (J90). This study found that a hypothetical "average" test described in the medical literature, with a sensitivity of 88% and a specificity of 97%, provides only slightly more than 60% of the total information needed for making a definite diagnosis. Thus, the use of information theory may help, in part, to explain why significant uncertainty about the true disease status of the patient still persists, even after the application of a test exhibiting high sensitivity and specificity (J90). Information theory has also been combined with ROC curve analysis to evaluate and compare diagnostic tests at their optimum cutoffs using specified disease prevalences and test properties (J92). Plotting test information as a function of test cutoff for various diagnostic tests shows that some tests provide more information than others and that the information-maximizing cutoffs for different tests vary considerably (J95, J97).

In summary, information yielded by diagnostic tests is generally insufficient to overcome diagnostic uncertainty. Information theory provides a means for quantifying the uncertainty associated with diagnostic testing and can also help plan and evaluate strategies for reducing this uncertainty.
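Following the pretest-versus-posttest comparison described above, the information yield of a test can be computed directly (the prevalence and predictive value below are invented): one bit doubles the probability of the diagnosis, so the yield in bits is the base-2 logarithm of the ratio of posttest to pretest probability.

```python
# Sketch of the bit as a measure of diagnostic information (numbers invented).
from math import log2

def entropy(p):
    # Shannon uncertainty (in bits) of a binary disease state with probability p
    return -(p * log2(p) + (1 - p) * log2(1 - p))

def diagnostic_bits(pretest_p, posttest_p):
    # One bit of new diagnostic information doubles the probability of the
    # diagnosis, so the yield is the log2 ratio of posttest to pretest probability.
    return log2(posttest_p / pretest_p)

prevalence = 0.05        # pretest probability of disease (invented)
predictive_value = 0.40  # posttest probability after a positive result (invented)
print(f"pretest uncertainty: {entropy(prevalence):.2f} bits")
print(f"information gained:  {diagnostic_bits(prevalence, predictive_value):.1f} bits")  # 3.0
```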
Steven C. Kazmierczak is an Associate Professor in the Department of Pathology and Laboratory Medicine at East Carolina University School of Medicine and Scientific Director of the Clinical Chemistry Laboratories at Pitt County Memorial Hospital and the E.C.U. School of Medicine. He received his B.S. degree in biology from Youngstown State University in 1982 and his Ph.D. degree in clinical chemistry from The Ohio State University in 1986. From 1986 to 1988 he was employed as a Postdoctoral Fellow in biochemistry at The Cleveland Clinic Foundation. His research interests lie in the areas of enzymology, method development and automation, and information theory.

Paul G. Catrou is a professor at East Carolina University School of Medicine and Pathologist-in-Charge of Clinical Chemistry and Informatics at Pitt County Memorial Hospital in Greenville, NC. He received his M.D. from Tulane University and completed a clinical pathology residency at the University of Alabama at Birmingham. Prior to joining the faculty of East Carolina, he was on the faculty of Louisiana State University School of Medicine and was the Director of Chemical Pathology at Charity Hospital of Louisiana at New Orleans. Dr. Catrou's interests are in the application of informatics technology to patient care. He has authored and coauthored book chapters in clinical chemistry, original research papers, and software packages and has presented national workshops and seminars in laboratory medicine and informatics.

LITERATURE CITED

(J1) Weinstein, M. C.; Fineberg, H. V. Clinical Decision Analysis; W. B. Saunders Co.: Philadelphia, 1980.
(J2) Linnet, K. Clin. Chem. 1988, 34, 1379-1386.
(J3) Johnson, H. A. Ann. Clin. Lab. Sci. 1993, 23, 159-164.
(J4) Jensen, A. L.; Poulsen, J. S. D. J. Vet. Med. 1992, 39, 656-668.
(J5) Zweig, M. H.; Campbell, G. Clin. Chem. 1993, 39, 561-577.
(J6) Hilgers, R. A. Methods Inform. Med. 1991, 30, 96-101.
(J7) Schafer, H. Stat. Med. 1994, 13, 1551-1561.
(J8) Van Der Schouw, Y. T.; Straatman, H.; Verbeek, A. L. M. Med. Decision Making 1994, 14, 374-381.
(J9) Somoza, E.; Mossman, D. J. Neuropsychiatry Clin. Neurosci. 1992, 4, 214-219.
(J10) Jaeschke, R.; Guyatt, G. H.; Sackett, D. L. JAMA, J. Am. Med. Assoc. 1994, 271, 703-707.
(J11) Radack, K. L.; Rouan, G.; Hedges, J. Arch. Pathol. Lab. Med. 1986, 110, 689-693.
(J12) Peirce, J. C.; Cornell, R. G. Med. Decis. Making 1993, 13, 141-151.
(J13) Feinstein, A. R. Clinical Biostatistics; C. V. Mosby Co.: St. Louis, MO, 1977.
(J14) Kachigan, S. K. Statistical Analysis: An Interdisciplinary Introduction to Univariate & Multivariate Methods; Radius Press: New York, 1986.
(J15) Ojasoo, T.; Fiet, J.; Raynaud, J.; Dore, J. J. Steroid Biochem. Mol. Biol. 1993, 46, 183-193.
(J16) Csako, G.; Zweig, M. H.; Ruddel, M.; Glickman, J.; Kestner, J. Clin. Chem. 1990, 36, 645-650.
(J17) Shott, S. J. Am. Vet. Med. Assoc. 1991, 198, 1902-1905.
(J18) Davalos, A.; Fernandez-Real, J. M.; Ricart, W.; Soler, S.; Molins, A.; Planas, E.; Genis, D. Stroke 1994, 25, 1543-1546.
(J19) Kliegman, R. M.; Madura, D.; Kiwi, R.; Eisenberg, I.; Yamashita, T. J. Pediatrics 1994, 124, 751-756.
(J20) Contreras, A. M.; Ramirez, M.; Cueva, L.; Alvarez, S.; de Loza, R.; Gamba, G. Rev. Invest. Clin. 1994, 46, 37-43.
(J21) Solberg, H. E. CRC Crit. Rev. Clin. Lab. Sci. 1978, 9, 209-242.
(J22) Katz, W. T.; Snell, J. W.; Merickel, M. B. Methods Enzymol. 1992, 210, 610-636.
(J23) Lacher, D. A.; Baumann, R. R.; Boyd, J. C. Am. J. Clin. Pathol. 1988, 89, 753-759.
(J24) Volmer, M.; Muskiet, A. J.; Hindriks, F. R.; van der Slik, W. Ann. Clin. Biochem. 1991, 28, 379-385.
(J25) Corsetti, J. P.; Cox, C.; Schulz, T. J.; Arvan, D. A. Clin. Chem. 1993, 39, 2495-2499.
(J26) Ameglio, F.; Giannarelli, D.; Cordiali-Fei, P. Am. J. Clin. Pathol. 1994, 101, 714-775.
(J27) Viedma, J. A.; Perez-Mateo, M.; Agullo, J.; Dominguez, J. E.; Carballo, F. Gut 1994, 35, 82-88.
(J28) Sillanaukee, P. Arch. Pathol. Lab. Med. 1992, 116, 924-929.
(J29) Jimenez, C. V. Clin. Chem. 1993, 39, 2271-2275.
(J30) Bose, C. K.; Mukherjea, M. Cancer Lett. 1994, 77, 39-43.
(J31) Lacher, D. A. Clin. Chim. Acta 1991, 204, 199-208.
(J32) Odom-Maryon, T.; Langholz, B.; Niland, J.; Azen, S. Stat. Med. 1991, 10, 473-485.
(J33) O'Gorman, T. W.; Woolson, R. F. Stat. Med. 1993, 12, 143-151.
Pince, H.; Cobbaert, C.; van de Woestijne, M.; Lissens, W.; Willems, J. Int. J. Biomed. Comput. 1988, 23, 251-263.
Winkel, P. Clin. Chem. 1989, 35, 1595-1600.
(J43) Shortliffe, E. H. Computer-Based Medical Consultations: MYCIN; Elsevier: New York, 1976.
(J44) Sielaff, B. H.; Scott, E.; Connelly, D. P. ESPRE: Expert System for Platelet Request Evaluation. In Proceedings of the Eleventh Annual Symposium on Computer Applications in Medical Care; IEEE: New York, 1987; p 237.
Frenzel, L. E. Understanding Expert Systems; Howard W. Sams and Co.: Indianapolis, 1987.
Sandell, H. S. H.; Bourne, J. R. Crit. Rev. Biomed. Eng. 1985.
Bloom, K. J.; Weinstein, R. S. Hum. Pathol. 1985, 16, 1082-1089.
Compton, P.; Jansen, R. Knowledge in Context: A Strategy for Expert System Maintenance. In Lecture Notes in Artificial Intelligence; Barter, C., Brooks, M., Eds.; Springer-Verlag: New York, 1990; pp 292-306.
Miller, R. A.; Pople, H. E.; Myers, J. D. N. Engl. J. Med. 1982, 307, 468-476.
Weiss, S. M.; Kulikowski, C. A.; Amarel, S.; Safir, A. Artif. Intell. 1978, 11, 145-172.
Astion, M. L.; Wilding, P. Arch. Pathol. Lab. Med. 1992, 116, 995-1001.
Kulikowski, C. A. Artificial Intelligence Methods and Systems for Medical Consultation. In Readings in Medical Artificial Intelligence; Clancey, W. J., Shortliffe, E. H., Eds.; Addison-Wesley: Reading, MA, 1984; pp 72-97.
Weiss, S. M.; Kulikowski, C. A. A Practical Guide to Designing Expert Systems; Rowman and Allanheld: Totowa, NJ, 1984; pp 1-15.
Shortliffe, E. H.; Feigenbaum, E. A. SUMEX Stanford University Medical Experimental Computer Resources; Division of Research Resources Grant RR-00785, Annual Report Year 14; 1987.
Edwards, G.; Compton, P.; Malor, R.; Srinivasan, A.; Lazarus, L. Pathology 1993, 25, 27-34.
Spackman, K.; Connelly, D. Arch. Pathol. Lab. Med. 1987, 111, 116-119.
Hinton, G. E. Sci. Am. 1992, 145-151.
Wolberg, W. H.; Mangasarian, O. L. Anal. Quant. Cytol. Histol. 1990, 12, 314-320.
Dawson, A. E.; Austin, R. E.; Weinberg, D. S. Am. J. Clin. Pathol. 1991, 95, S29-S37.
De Cresce, R. P.; Lifshitz, M. S. PAPNET Cytological Screening System. Lab. Med. 1991, 22, 276-280.
Bartoof, G. T.; Les, J. S. J.; Bartels, P. H.; Kiviat, N. B.; Nelson, A. C. Lab. Invest. 1992, 66, 116-122.
Baxt, W. G. Ann. Intern. Med. 1991, 115, 843-848.
Goldman, L.; Cook, E. F.; Brand, D. A.; et al. N. Engl. J. Med. 1988, 318, 797-803.
Furlong, J. W.; Dupuy, M. E.; Heinsimer, J. A. Am. J. Clin. Pathol. 1991, 96, 134-141.
Kratzer, M. A. A.; Ivandic, B.; Fateh-Moghadam, A. J. Clin. Pathol. 1992, 45, 612-615.
Astion, M. L.; Wilding, P. Clin. Chem. 1992, 38, 34-38.
Livingstone, D. J.; Manallack, D. T. J. Med. Chem. 1993, 36, 1295-1299.
Won, A. J. Biol. Cybernetics 1988, 58, 361-372.
Caudill, M.; Butler, C. Naturally Intelligent Systems; MIT Press: Cambridge, MA, 1990.
Sittig, D. F.; Orr, J. A. Comput. Biomed. Res. 1992, 25, 547-561.
Hinton, G. E. Connectionist Learning Procedures. Artif. Intell. 1989, 40, 185-234.
Reibnegger, G.; Weiss, G.; Wachter, H. Eur. J. Clin. Biochem. 1993, 31, 311-316.
(J73) Baldi, P.; Hornik, K. Neural Networks and Principal Component Analysis. Neural Networks 1989, 2, 53-58.
(J74) Vogt, W.; Nagel, D. Clin. Chem. 1992, 38, 182-198.
(J75) Hart, A.; Wyatt, J. Med. Inform. 1990, 15, 229-236.
(J76) Astion, M. L.; Bloch, D. A.; Wener, M. H. J. Rheumatol. 1993, 20, 1465-1467 (editorial).
(J77) Baxt, W. G. Ann. Emerg. Med. 1992, 21, 1439-1444.
(J78) Gorman, R. P.; Sejnowski, T. J. Neural Networks 1988, 1, 75-89.
(J79) Tourassi, G. D.; Floyd, C. E.; Sostman, H. D.; Coleman, R. E. Radiology 1993, 189, 555-558.
(J80) Meistrell, M. L. Comput. Methods Prog. Biomed. 1990, 32, 73-80.
Kolen, J. F.; Pollack, J. B. Back Propagation is Sensitive to Initial Conditions. In Advances in Neural Information Processing Systems; Lippmann, R. P., Moody, J. E., Touretzky, D. S., Eds.; Morgan Kaufmann: San Mateo, CA, 1991; Vol. 3, pp 860-867.
Bello, A. L. Stat. Med. 1994, 13, 1793-1795.
Van Lente, F.; Castellani, W.; Chou, D.; Matzen, R. M.; Galen, R. S. Clin. Chem. 1986, 32, 1719-1725.
Ahmad, S.; Tesauro, G. Scaling and Generalization in Neural Networks: A Case Study. In Advances in Neural Information Processing Systems; Touretzky, D. S., Ed.; Morgan Kaufmann: San Mateo, CA, 1989; Vol. 1, pp 160-168.
Tishby, N.; Levin, E. Consistent Inference of Probabilities in Layered Networks: Predictions and Generalization. Proc. Int. Joint Conf. Neural Networks 1989, 2, 403-409.
Sietsma, J.; Dow, R. J. F. Neural Networks 1991, 4, 67-79.
Holmstrom, L.; Koistinen, P. IEEE Trans. Neural Networks 1992, 3, 24-37.
Astion, M. L.; Wener, M. H.; Thomas, R. G.; Hunder, G. G.; Bloch, D. A. Arthritis Rheum. 1992, 35, S166.
(J87) Shannon, C. E.; Weaver, W. The Mathematical Theory of Communication; University of Illinois Press: Urbana, IL, 1949.
(J88) Ransohoff, D. F.; Feinstein, A. R. N. Engl. J. Med. 1978, 299, 926-930.
(J89) Johnson, H. A. JAMA, J. Am. Med. Assoc. 1991, 265, 2229.
(J90) Johnson, H. A. Ann. Clin. Lab. Sci. 1989, 19, 323-331.
(J91) Heckerling, P. S. Methods Inform. Med. 1990, 29, 61-66.
(J92) Somoza, E.; Mossman, D. Med. Decis. Making 1992, 12, 179-188.
(J93) Somoza, E.; Soutullo-Esperon, L.; Mossman, D. Int. J. Biomed. Comput. 1989, 24, 153-189.
(J94) Somoza, E.; Mossman, D. J. Neuropsychiatry Clin. Neurosci. 1992, 4, 95-98.
(J95) Heckerling, P. S. J. Gen. Intern. Med. 1988, 3, 604-606.
(J96) Somoza, E.; Mossman, D. Biol. Psychiat. 1990, 27, 990-1006.
(J97) Mossman, D.; Somoza, E. J. Neuropsychiatry Clin. Neurosci. 1992, 4, 95-98.