Anal. Chem. 2010, 82, 4307–4313
Computational Neural Networks Driving Complex Analytical Problem Solving

Grady Hanrahan, California Lutheran University

Neural network computing demonstrates advanced analytical problem solving abilities to meet the demands of modern chemical research. (To listen to a podcast about this article, please go to the Analytical Chemistry multimedia page at pubs.acs.org/page/ancham/audio/index.html.)
Learning systems inspired by the brain's neural structure exhibit intelligent behavior without structured symbolic expressions.1 They have the ability to learn by example through highly interconnected processing elements, a key feature of the neural network computing paradigm.2 Foundational concepts can be traced back to seminal work by McCulloch and Pitts on a sequential logic model of the neuron, which introduced simplified diagrams representing the functional relationships between neurons conceived as binary elements.3 Subsequent developments by Rosenblatt,4 Widrow and Hoff,5 and Minsky and Papert6 introduced the scientific community to networks based on the perceptron, the basic framework from which modern-day neural networks are conceived and with which complex analytical problems are solved.

Although the broad range and applicability of neural networks are well established, the data-rich needs of modern chemical research demand new and more efficient networks. Fortunately, the recent development of novel learning algorithms and hybrid neural techniques, together with gains in raw computing power and speed, has opened several fundamental lines of analytical chemistry research. For example, neural networks have provided a powerful inference engine for regression analysis and analyte quantitation,7,8 enhanced the prediction of chemical and physical properties from mid- and near-IR spectroscopy,9,10 proved effective in quantitative structure-activity relationship (QSAR) studies,11,12 and improved pattern recognition and classification capabilities.13,14 Neural networks can be regarded as an extension of many conventional statistical tools used in everyday research, with superior performance dependent upon the interconnectivity between layers and the nature and level of pre-processing of the input data.

The intent of this article is to provide broad insight into the development and application of computational neural network tools that can increase the robustness and transparency of neural computing techniques as methods for routine chemical analysis and experimentation. (Table 1 provides definitions of important network terminology.)
THE BIOLOGICAL MODEL

Proper understanding of neuronal and synaptic physiology and knowledge of the complex interconnections between neurons in the brain are central to comprehending how computers can exhibit even minimal neural-network-like behavior.15 Four main regions comprise a prototypical neuron's structure (Figure 1): the cell body (soma), dendrites, axons, and synaptic knobs. The soma and dendrites represent the location of input reception, integration, and coordination of signals arising from pre-synaptic nerve terminals. The physical and neurochemical characteristics of the movement from the pre-synaptic to the post-synaptic membrane determine the strength and polarity of the electric input signals, or action potentials, that convey information in the brain. Signal propagation from the dendrites and soma occurs along the axon and down its length. Ostensibly, each neuron operates like a simple processor, with the connection structure between neurons being dynamic in nature. This adaptive connectivity gives the human brain its ability to learn.
Table 1. Key concepts and terminology.
Figure 1. Neurons organized in a connected network, both receiving and sending impulses.
Figure 2. A basic multiple-input computational neuron model. Individual scalar inputs are weighted by appropriate elements w1, w2, w3, ..., wR of the weight matrix W. The sum of the weighted inputs and the bias b (equal to 1) forms the net input n, which proceeds into a transfer function f and produces the scalar neuron output a. The terms x1, x2, x3, ..., xR are the individual inputs.
THE COMPUTATIONAL MODEL

A biological neuron represented in simplified computational form is the mathematical building block of neural network models. The basic operation of these models can be illustrated by examining the multiple-input artificial neuron shown in Figure 2 and considering how interconnected processing neurons work together to produce an output function. The output of a neural network relies on the functional cooperation of the individual neurons within the network, which process information in parallel. The individual inputs x1, x2, x3, ..., xR are each weighted with appropriate elements w1, w2, w3, ..., wR of the weight matrix W. The sum of the weighted inputs and the bias b forms the net input n, which proceeds into a transfer function f and produces the scalar neuron output a, written as:

a = f(Wx + b)
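To make this computation concrete, the following minimal sketch implements the multiple-input neuron of Figure 2 in Python with NumPy. The logistic (sigmoid) transfer function and the example inputs, weights, and bias are illustrative assumptions, since the article leaves f generic:

import numpy as np

def sigmoid(n):
    # Logistic transfer function f, squashing the net input into (0, 1);
    # an assumption here -- any suitable transfer function could be used.
    return 1.0 / (1.0 + np.exp(-n))

def neuron_output(W, x, b):
    # Compute a = f(Wx + b): weight the inputs, add the bias b,
    # and pass the net input n through the transfer function f.
    n = W @ x + b
    return float(sigmoid(n))

# Hypothetical example: three inputs x1, x2, x3 with weights w1, w2, w3
x = np.array([0.5, -1.2, 3.0])     # individual scalar inputs
W = np.array([0.8, 0.1, -0.4])     # corresponding row of the weight matrix W
print(neuron_output(W, x, b=1.0))  # scalar neuron output a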
Revisiting the biological neuron pictured above, the weight w corresponds to synapse strength, the summation and transfer function together represent the cell body, and a represents the axon signal. The output is binary and depends on whether the input meets a specified threshold T. If the total net input is greater than or equal to the threshold T, the neuron produces an output of 1; otherwise, the output is 0.
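As an illustrative sketch rather than the article's own code, this threshold behavior can be written in the same style; the AND-gate weights and threshold below are assumptions chosen for demonstration:

import numpy as np

def threshold_neuron(W, x, T):
    # Binary McCulloch-Pitts-style unit: output 1 if the total net
    # input meets the threshold T, and 0 otherwise.
    n = float(W @ x)   # total net input: sum of the weighted inputs
    return 1 if n >= T else 0

# Hypothetical example: a two-input unit acting as a logical AND gate
W = np.array([1.0, 1.0])
print(threshold_neuron(W, np.array([1.0, 1.0]), T=2.0))  # 1: fires
print(threshold_neuron(W, np.array([1.0, 0.0]), T=2.0))  # 0: below threshold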