J. Phys. Chem. B 2003, 107, 415-429


FEATURE ARTICLE

Understanding Complexity in Biophysical Chemistry

Raima Larter
Department of Chemistry, Indiana University-Purdue University at Indianapolis, 402 North Blackford Street, Indianapolis, Indiana 46202

Received: April 1, 2002; In Final Form: August 2, 2002

Nonlinear science is ideal for investigating complex problems that arise in biophysical chemistry. Here, we review the approach used in nonlinear science and illustrate the techniques for several problems studied in our group: the peroxidase-oxidase reaction, the influence of nonuniform fields on membrane transport, and finally, neuronal systems and calcium signaling. The mathematical tools that have been derived in the process of studying these systems are also reviewed. Finally, our contributions to some fundamental issues in nonlinear science involving the evolution of chaotic behavior via a torus attractor are reviewed.

Introduction

Much of my research is aimed at furthering our understanding of the origins of rhythmic behavior in living systems; the rhythmic beating of our heart is but the most obvious of the rhythms that sustain life. Other examples, although less familiar to most people, are no less important. β-cells in the pancreas, for example, produce insulin in squirts; these periodic pulses are themselves associated with, and perhaps driven by, rhythmic oscillations of calcium in the cytoplasm of these cells.1,2 Calcium oscillations, as it turns out, are widespread in the body: in addition to pancreatic β-cells, liver, muscle, egg, nerve, and many other cell types exhibit calcium oscillations.3-7 These oscillations are generally the result of a hormone or other signaling molecule binding to a cell-surface receptor and initiating a cascade of events that eventually results in one or more bursts of calcium in the cell cytoplasm. Because calcium ions serve as a chemical switch in living cells, initiating many important events including protein synthesis, the pattern of bursts in cytosolic calcium may carry important information from the original agonist (the hormone, for example) to the cellular apparatus regarding environmental changes, the organism's needs for products made by that particular cell, and so on. In addition to cellular-level oscillations such as these calcium signals, rhythmic behavior also arises at the metabolic network level; it is a little-known but very interesting fact that hormone levels in the human bloodstream peak five or six times a day,8 providing an oscillation in the key control species for many physiological processes. One of the first discoveries of oscillations arising at the metabolic network level involved the ubiquitous glycolysis process that many organisms use to extract energy from glucose.
The discovery in the early sixties of oscillatory behavior in this sequence of enzyme-catalyzed reactions9-11 paved the way for later studies of oscillatory phenomena in coupled networks of enzyme reactions, including the peroxidase-oxidase or PO reaction, which my group studied extensively during the last two decades.12,13 The approach that we have taken in exploring the question of biorhythmicity involves the tools and techniques of nonlinear

science.14,15 This field is concerned with the description of the dynamic behavior of systems that obey nonlinear laws, so the oscillatory phenomena described above are naturally suited to this approach.

Nonlinear Science: Tools for Understanding Complexity

One characteristic that distinguishes nonlinear science from other fields of study is that the objects of investigation are dynamic behaviors (such as rhythms or oscillations), which may arise in any number of systems spanning the whole series of organizational or hierarchical levels of increasing complexity that describe nature. Figure 1 shows the usual way that we think about this hierarchy; at the bottom are the fundamental sub-subatomic particles (quarks and the like), which make up protons, neutrons, and electrons. This is the world of high-energy physics. The particles, which are described by the laws that hold on this lowest of the organizational levels, come together according to these laws in such a way that atoms form. At the next level, the atomic level, we have a similar situation; certain laws (quantum-mechanical ones, generally, which fade into classical mechanical laws at larger masses and slower speeds) are found to constrain atoms to behave in a certain way. The organizational laws at this level allow the atoms to bond together, if the right conditions hold, forming molecules. This, of course, leads to the next organizational level, molecules, the traditional realm of chemistry. Whether we know all of the "laws" that hold at the molecular level is highly doubtful, although much, of course, is understood about the rules that molecules of various classes and categories obey when interacting and chemically reacting with one another. In other words, we largely understand what makes molecules tick, although expressing this knowledge in a set of laws is not something most chemists would attempt to do.
The next organizational level in this hierarchy (which, when complete, would describe the biosphere) is that of molecular biology, a very complex level indeed. Here, the molecules of life interact in many and varied ways, transcribing, translating, signaling, being transported in various ways, reacting in other

10.1021/jp020856l CCC: $25.00 © 2003 American Chemical Society Published on Web 12/17/2002


Figure 1. Hierarchy of levels in traditional science. The center box shows increasing levels of complexity in organization rising from the subatomic through the atomic, molecular, cellular, organismal, societal, and ecological system levels. The leftmost column shows schematic illustrations of the fundamental "particles" that correspond to each level. The rightmost column lists a few of the universal dynamic behaviors that have been observed at many of these levels and whose fundamental bases transcend those levels.

ways; the “laws” that describe this organizational level (if there are any) are also many and varied, and although there is a great deal of scientific work going on at this organizational level, it is still too early to discern what the overarching principles are that allow us to move to the next level, that of a fully functioning living cell. In other words, many processes that occur in living systems are being sorted out and elucidated in detail, but we still do not understand how these processes confer upon a collection of molecules the property that we call “life”. Continuing on up this hierarchy, we gradually move into the realm that is sometimes referred to as “soft science,” even though these higher levels of complexity are really much “harder” to understand and study using the tools and techniques of science. Figure 1 is a schematic description, of course, of the traditional reductionist approach to science; we hope that by breaking the world into smaller and smaller pieces we will eventually approach a level and depth of understanding that will allow us to put all of the knowledge back together again into a useful and workable description of the whole. However, as Philip Anderson so aptly put it in an essay in Science 30 years ago,16 “The reductionist hypothesis does not by any means imply a ‘constructionist’ one: the ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe.” So, what has caught and held my attention throughout my career is the question of how to move across the rather fuzzy boundaries between categories in this hierarchy, particularly, the intriguing one separating what we call “life” from simple, inert, nonliving matter. I agree with Anderson: even though we know how to break the world down into smaller and smaller pieces, we still do not know how to put it back together again

Figure 2. Examples of spiral wave phenomena: (a) spiral waves of oxidative activity in the Belousov-Zhabotinsky reaction (reproduced from Goldbeter, A. Biochemical oscillations and cellular rhythms; Cambridge University Press: Cambridge, U.K., 1996, p 169); (b) dark field waves showing aggregation patterns in D. discoideum (Weijer, C., University of Dundee. Personal communication); (c) spiral wave of calcium concentration in a slice of cultured rat hippocampal tissue (reproduced from Harris-White, M. E. et al. J. Neurophysiol. 1998, 79, 1045-1052).

into functioning wholes, particularly living wholes. Is it merely a jump (albeit a huge jump) in complexity that defines the boundary between life and nonlife? Or is it the existence of properties that life possesses (but that nonlife does not) that defines the boundary? If we begin to list these properties, we find that the answer to this question is very elusive. Some of the more commonly used definitions of life include the ability to reproduce, the ability to adapt to changing conditions, the ability to increase in complexity, and so on, but all of these indicators can, individually or sometimes even in combination, be observed in what are clearly nonliving systems, sometimes even software-based simulations of life! Some of the more striking examples of the similarity in behavior between living systems and what is clearly nonliving, inert matter have been discovered using the approaches of nonlinear science. Spiral waves of oxidative chemical activity in an inorganic reaction medium (the Belousov-Zhabotinsky or BZ reaction,17 see Figure 2a) bear not only a superficial resemblance in terms of shape and form to aggregation patterns in the slime mold Dictyostelium discoideum9 (see Figure 2b) and to spiral waves of calcium in a field of brain cells18 (see Figure 2c) but also a deeper similarity in terms of underlying mechanistic origin. In the case of D. discoideum (Figure 2b),

the spiral waves arise when a group of single-celled organisms aggregate into a lump of cells and begin the process of differentiation, eventually developing into a slug, a multicellular organism! The waves, then, are evidence of the first step in the transition from single-celled life to multicellular life. It has been shown quite definitively9,19 that the mechanism that gives rise to spirals in the aggregating slime mold is very similar to the one that organizes the chemical reaction (Figure 2a) into spiral waves of oxidized and reduced chemical activity. Although the similarity in pattern of the intracellular calcium concentration in a hippocampal culture from a rat brain (Figure 2c) is not enough evidence to state that it, too, is due to the same (or similar) mechanism as those spirals in Figure 2a,b, we do know that the dynamic properties of this piece of excitable brain tissue are expected to be very much like those of the BZ medium and D. discoideum. This expectation, then, can help narrow the search for the actual mechanism that produces the spiral wave activity in Figure 2c. An example of this sort of "guided search" technique helped to elucidate the mechanism for the formation of spiral waves of calcium, which were first observed several years ago in fertilized frog oocytes.20 This system was studied by looking for possible wave generation mechanisms similar to those that exist in the BZ system. In this case, the spiral waves of calcium in oocytes were found to be due to a calcium-induced calcium-release (or CICR) mechanism,5 which is similar dynamically to the autocatalysis that occurs in both the BZ and D. discoideum cases. The point of this example is that the study of mechanisms that give rise to complex temporal and spatial behavior in nonliving systems using the tools and techniques of nonlinear science has started to give us glimpses into the origin of complexity in living systems.
Another way to look at the hierarchy in Figure 1, then, is through the eyes of a nonlinear dynamicist. In the rightmost column of this figure, I have listed examples of dynamic phenomena (such as spiral waves) that have been observed across several organizational levels of the hierarchy. These behaviors are universal phenomena in the sense that they transcend the traditional hierarchy of levels of organization, not only in similarity of shape and form but also in matching mechanistic details. These universal dynamic phenomena are not restricted to one or even a few of the traditional organizational levels. The universal phenomena of interest to the nonlinear dynamicist (bistability, simple and complex oscillations, spiral waves, target patterns, and so on) have been observed in many or even all of the organizational levels, including, perhaps, even the "top" one of an entire global system,21-23 that is, the atmosphere. One recent suggestion of this latter possibility is that the global climate system may be bistable and that, perhaps, several times in our geological history a fluctuation has been large enough to cause a transition from one stable state to another, initiating or ending an ice age. While intriguing, this remains only a suggestion and deserves further study. The nonlinear dynamicist, then, is interested in finding "laws", if there are any, that transcend the traditional organizational levels (which, by the way, correspond to the usual academic departments of physics, chemistry, biology, etc.) and allow for the existence of these universal dynamic phenomena at many levels. In a sense, this stage of searching for transcendent laws is somewhat like the early days of the development of the atomic theory, in which certain scientists (not all) were convinced that there was something fundamentally the same about earth, air,

water, fire, and everything else. What this "something" was, as John Dalton insisted, was the atom! All four of the Aristotelian "elements" are, of course, actually composed of atoms of what we now consider to be the true elements (hydrogen, oxygen, carbon, and so on), but in the early days of the atomic theory, the argument that a concept as vague and abstract as "the atom" could explain phenomena as diverse as the surface tension of water or the burning of a candle was really quite controversial. The nonlinear dynamicist is, in a similar way, searching for "something" that connects diverse phenomena that might otherwise appear to be unrelated. The hope is that we will find fundamental dynamic features (akin to atoms) that give rise to universal dynamic behaviors (akin to atomic and molecular properties) such as bistability, oscillations and chaos, spiral waves, etc., regardless of whether the system of interest is an inorganic chemical reaction, a biochemical network, a whole cell, a complete organ in the body, or even a self-contained ecological system. So, what is our best guess, so far, about what these fundamental dynamic laws or principles might be? First of all, it is clear that feedback is of utmost importance in systems that exhibit the universal dynamic phenomena that nonlinear scientists study (bistability, oscillations, etc.). The feedback is sometimes positive in character, that is, providing a source of amplification, and sometimes negative in character, that is, providing a means by which fluctuations can be damped out. Often both types of feedback exist in one system. Systems (dissipative systems, in particular) that contain these feedback elements seem always to have dynamics that are governed by attractors; a system that exhibits oscillations, for example, does so because a stable limit cycle attractor exists for that system.
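The role of positive feedback can be made concrete with a minimal one-variable sketch (my own illustration, not a mechanism from any system discussed in this article): a species produced by saturating self-amplification plus a small basal input and removed linearly. For the illustrative parameters below, such a system is bistable, with two stable steady states separated by an unstable one.

```python
# Minimal bistability sketch (illustrative only, not a model from this
# article): dx/dt = a*x^2/(1 + x^2) + c - b*x, i.e., saturating positive
# feedback, a small basal input c, and linear removal.

def rate(x, a=2.0, b=1.0, c=0.05):
    """Net production rate of x for the toy positive-feedback system."""
    return a * x**2 / (1.0 + x**2) + c - b * x

def integrate(x0, t_end=100.0, h=0.01):
    """Forward-Euler integration; adequate for this smooth 1-D example."""
    x = x0
    for _ in range(int(t_end / h)):
        x += h * rate(x)
    return x

low = integrate(0.01)   # relaxes to the low steady state
high = integrate(2.0)   # relaxes to the high steady state
print(low, high)
```

Starting on either side of the unstable middle state, the trajectory is pulled to a different point attractor, which is the dynamical signature of bistability.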
Another common feature of systems that exhibit nonlinear phenomena is that transitions between dramatically different modes of behavior can occur without a change in the underlying mechanism. These transformational changes, known as bifurcations, happen when a key parameter passes through a critical value; the underlying mechanism remains exactly the same, but the attractor that governs the dynamics changes form, and the dynamics change dramatically. The bifurcation phenomenon is very much like a phase transition24,25 in which, for example, a change in temperature through just the right value (e.g., the melting point) causes a dramatic change in physical state (from solid to liquid in this example). A bifurcation happens when a change in a key system parameter causes a dramatic change in dynamic state, taking a system at steady state, for example, to an oscillatory state. In this example, the underlying attractor for the initial state (a point attractor) becomes unstable when the key parameter passes through its bifurcation value, giving rise to a new stable attractor, here a limit cycle. The dynamics of the system cease to be governed by the point attractor, which pulled the system toward steady, quiescent behavior, and begin to be governed by the newly stable limit cycle attractor, which pulls the system toward oscillatory behavior. The system thus develops stable oscillations with no change in underlying mechanism, just a shift in a key parameter through a critical value.
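The steady-state-to-oscillation transition just described is, in its simplest guise, a Hopf bifurcation. A minimal numerical sketch using the standard Hopf normal form (a generic mathematical example, not a chemical mechanism from this article) shows the point attractor giving way to a limit cycle as the parameter mu crosses its critical value of zero:

```python
# Hopf-bifurcation normal form (generic sketch, not a chemical model):
#   dx/dt = mu*x - omega*y - x*(x^2 + y^2)
#   dy/dt = omega*x + mu*y - y*(x^2 + y^2)
# For mu < 0 the origin is a stable point attractor; for mu > 0 it loses
# stability and a stable limit cycle of radius sqrt(mu) appears.
import math

def step(x, y, mu, omega=1.0, h=0.01):
    """One classical 4th-order Runge-Kutta step."""
    def f(x, y):
        r2 = x * x + y * y
        return (mu * x - omega * y - x * r2,
                omega * x + mu * y - y * r2)
    k1 = f(x, y)
    k2 = f(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = f(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = f(x + h * k3[0], y + h * k3[1])
    x += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    y += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return x, y

def final_radius(mu, x=0.5, y=0.0, n=5000):
    """Distance from the origin after integrating to t = n*h."""
    for _ in range(n):
        x, y = step(x, y, mu)
    return math.hypot(x, y)

print(final_radius(-0.5))  # spirals into the point attractor at the origin
print(final_radius(+1.0))  # settles onto the limit cycle of radius sqrt(1)
```

The same mechanism (the same equations) produces quiescence or sustained oscillation depending only on which side of the critical value mu sits, which is exactly the point made in the text.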
So, feedback, attractors, and bifurcations are just some of the universal dynamic features that nonlinear dynamicists have identified as fundamental to behaviors such as bistability, regular and chaotic oscillation, spiral waves, etc. They are universal in


TABLE 1: A Short Atlas of Cellular Oscillators^a

preparation                       event                                                                    period
purified horseradish peroxidase   periodic rate of catalyzed oxidation                                     1 min
heart muscle extract              oscillation in creatine kinase activity                                  3-10 min
Acetabularia                      oscillation in lactate dehydrogenase and malate dehydrogenase activity   1-3 min
cultured mouse fibroblast         oscillation in respiratory activity                                      1-3 min
rat liver mitochondria            oscillatory ion movement                                                 1-5 min
algae                             oscillation in dark cycle photosynthesis                                 80 min
E. coli                           periodic synthesis of β-galactosidase                                    50 min
rat, in vivo                      oscillation in heme biosynthesis                                         10 h
skinned muscle fibers             oscillatory contractions stimulated by caffeine                          70 s
frog neuromuscular junction       oscillation in transmitter release                                       2-14 s
mouse pancreatic islet cell       membrane potential oscillation                                           0.5-10 s
sea-urchin embryo                 cyclic protein synthesis                                                 0.5-1 h
locust                            intrinsic rhythm in jumping muscle                                       20-30 s
fireflies                         periodic flashing                                                        0.5-1 s

^a Adapted from Rapp, 1979 (ref 26).

Figure 3. Oscillatory activity in the glycolysis reaction. The oscillations in NADH are recorded by measuring its fluorescence. Oscillations in pH also occur. (Reproduced from Goldbeter, A. Biochemical oscillations and cellular rhythms; Cambridge University Press: Cambridge, U.K., 1996, p 32.)

the sense that they transcend categories in the hierarchy in Figure 1 and fundamental in that they determine which of several dynamic behaviors a nonlinear system will exhibit. The concepts briefly described here are best learned by consulting one of the newly available textbooks in the field, of which those by Epstein and Pojman14 and Nicolis15 are most closely focused on physical chemistry examples.

The Origins of Rhythm in Living Systems

Rhythmic behavior in biological systems is widespread and occurs at different levels of organization, yet as we have seen, many of the mechanisms that underlie these different types of oscillations are similar. The frequencies of biological rhythms range from very fast oscillations with periods in the subsecond range to slow ones in which the pulses might be hours apart. In 1979, Paul Rapp created an atlas of cellular oscillators,26 a complete and detailed listing of the different types of biological rhythms that had been observed in different cell types at the time of publication; a representative selection from this atlas is given in Table 1 and illustrates the range of cell types, frequencies, and organizational levels at which biological rhythms have been observed. Additional examples have been discovered in the intervening years, but Rapp's "Atlas" remains one of the best overviews of the range of cellular oscillators that exist. One of the most well-studied cellular oscillations at the time of Rapp's publication was the glycolytic oscillator.9-11 First observed in yeast suspensions, this chemically based oscillation arises from a small biochemical network of reactions that, taken together, break down glucose, the first stage in its eventual oxidation to carbon dioxide and water.
Oscillations in all chemical species involved in the reaction occur; one of the easiest to observe is NADH (nicotinamide adenine dinucleotide), which is easily detectable through its UV absorbance.27 Figure 3 shows an early example of the typical behavior observed when oscillations were discovered in this reaction. Glycolysis is one of the fundamental biochemical events associated with the extraction of energy from food, and it is now firmly established that this basic metabolic

process occurs in an oscillatory way. The oscillations first observed in suspensions of intact cells were shown, by Benoit Hess and co-workers who used yeast extracts, to be independent of the mitotic cycle of the yeast cells; in other words, the periodicity does not come from the underlying cell cycle but from the chemistry of the reaction network itself. The accepted mechanism for the glycolysis oscillations includes a number of feedback loops in the network of enzyme-catalyzed reactions, and an explanation based purely on chemical kinetics for this small metabolic network can satisfactorily describe the experimental observations. Ross has speculated that this and other periodicities in living systems evolved because the oscillatory mode leads to greater throughput and efficiency.28 However, the fundamental reason that living systems should exhibit such a plethora of rhythmic phenomena is still largely unknown. Since the time of the early work on glycolysis, other biochemically based oscillations have been discovered and studied. One that our group was extensively involved in investigating is the peroxidase-oxidase or PO reaction. This reaction is in some ways simpler than the glycolytic oscillator because it involves a single enzyme rather than several; the kinetics of the reaction, however, are just as complicated. While some controversy still remains regarding details of the mechanism, it has been established13 that the key feedback process involves a free radical formed when a hydrogen atom is abstracted from NADH. The PO reaction has become something of a prototype for enzyme-based chemical oscillators because it exhibits essentially all of the exotic dynamics observed in these systems: bistability between multiple steady states, simple oscillations, bistability between multiple periodic states, complex oscillations including those of the mixed-mode type, and chaotic behavior.
The similarity between the behavior of the PO reaction and other purely inorganic chemical oscillators, such as the BZ reaction, has allowed us to draw wide-ranging conclusions about the key features of oscillatory behavior that can be attributed purely to chemistry. Our work on the PO reaction will be reviewed in more detail in a later section. The research topics that I have pursued, both individually and in collaboration with others, fall into two broad categories: (1) experimental and computational investigations of the effects that nonlinear dynamic behavior (such as oscillations and spatial patterning) may have on fundamental processes (such as transport, catalysis, and signaling) that occur in biological systems and (2) the development of new theoretical and mathematical tools for the determination of dynamic features of a mechanism, particularly chemical mechanisms, which give rise to oscillatory behavior, both regular (periodic) and complex (aperiodic or chaotic). The motivation for research in both of these categories was, and is, a personal conviction that the methods and techniques of nonlinear science are the most promising tools yet developed for exploring fundamental issues involving the origin of complex behavior in living systems. In many ways, nonlinear science is a paradigm-shifting approach to scientific investigation in that it requires a willingness to


compare phenomena across a range of systems that may seem completely unrelated. The reward for this bold approach, however, has been a number of exciting insights into specific biological questions that would have been quite vexing without the ways of thinking provided by the techniques and tools of nonlinear science. The particular problems studied by our group are by no means more important than others in the field but are good examples of the types of questions that can be explored using the techniques of nonlinear science. The interested reader is invited to consult the growing number of books9,14,15,29-33 that summarize other research in this area.

Use of Nonlinear Dynamics Techniques to Explore Specific Biological Questions

The Peroxidase-Oxidase Reaction: A Prototypical Enzyme Reaction? The PO reaction has often provided the specific context for our studies in both categories mentioned above; therefore, while some of our results have general applicability to a range of nonlinear dynamic systems, we have also found out a great deal along the way about the PO reaction itself, especially in terms of the detailed chemical mechanism that leads to its rich dynamic behavior. This aspect of our work has been carried out in collaboration with and extended by several other groups around the world, most notably by Alex Scheeline, Lars Folke Olsen, Bill Schaffer, and their co-workers. Scheeline and I coauthored a Chemical Reviews article13 in 1997 summarizing our best guess at that time as to the mechanism of the PO reaction. This review built on an earlier one12 written by myself, Olsen, and our co-workers; these two review articles should be consulted for more detailed information regarding the work that will be summarized here. Olsen, Schaffer, Scheeline, and others have continued the quest to determine the last few mechanistic details of the PO reaction and other issues involving the dynamic behavior of this system.
The interested reader is referred to their more recent articles for further details on these matters.34-40 Although our search for a mechanism that could explain both the periodic and chaotic behavior of the PO reaction was intended to be just the first step in a broader program aimed at determining the categories of chemical kinetic mechanisms that make these behaviors possible, the example that we chose for our first study (the PO reaction) is of some importance in its own right. The PO reaction plays an important role in the physiology of woody plants41 and is the first step in a chain of reactions in this category of plants that eventually produces lignin, a polymer that makes wood hard. In addition to its role in lignin production,42-44 the PO reaction is also involved in the important processes of the photosynthetic dark reactions.45 The reaction is catalyzed by the peroxidase enzyme and is simply the oxidation of NADH; in our studies, we used peroxidase enzyme from horseradish, which turns out to be important in a number of ways outside of its role in horseradish plant physiology. Its major use is in developmental biology as a standard staining reagent46 used to trace out developing neural pathways in immature brain tissue. In addition to this practical role, peroxidases are important because they were among the earliest enzymes to be discovered;41 horseradish peroxidase itself is, furthermore, important because it figured so centrally in the early debates about the importance of metal ions in the active site of enzymes.47 Our group’s interest in the PO reaction was in exploring the possibilities for its use as a well-characterized prototype of a biochemical reaction that displayed a rich variety of nonlinear dynamic behaviors, including one of the earliest examples of

Figure 4. Diagram of the PO reaction experimental setup. The reaction vessel is a 20 mm × 20 mm quartz cuvette inserted into an aluminum holder containing channels for thermostating water and support for the oxygen electrode, which is fitted into the side of the cuvette. The vessel is mounted in a dual-wavelength spectrophotometer for simultaneous measurements of oxygen and NADH. NADH is added by a syringe pump to a solution containing enzyme and cofactors, and an O2/N2 gas mixture is blown across the top of the liquid. (Reproduced from Geest, T. et al. J. Phys. Chem. 1992, 96, 5678-5680.)

chemical chaos.48,49 Studying its mechanistic features could, thus, provide a window onto the allowed classes of chemical kinetic features that lead to such behavior and would provide a second chemical example for use in checking many of the conclusions which, at the time, were based on the study of a single reaction, the Belousov-Zhabotinsky (BZ) reaction. At the time that we began our in-depth investigation of the PO reaction, the BZ reaction was pretty much the only chemical system known to produce both periodic and chaotic oscillations (others were discovered later). Drawing general insights or more sweeping conclusions about the kinetic features necessary for such behavior was made difficult by having a sample of one, so studies on the PO reaction were crucial for achieving broad understanding of chemical oscillations. Our group was certainly not the only one interested in achieving such a generalized insight. Indeed, shortly after we began our studies of the PO reaction, Irv Epstein and co-workers developed a technique by which they could design inorganic chemical oscillators by combining parts of different redox reactions.50 Their efforts were remarkably successful, and in a very short period of time, they had discovered several new chemical oscillators that exhibited nonlinear behavior equal in complexity to those previously observed only in the BZ and PO reactions. The availability of many chemical oscillators has made it possible to gain deeper insights into the range of possible universal dynamic behaviors and the mechanisms by which these arise and evolve one into another and to determine a great deal about the underlying mathematical principles that seem to be so important in describing the dynamics of these very interesting systems. 
The general reaction that is termed "the PO reaction" involves the oxidation of organic electron donors by molecular oxygen; the catalyst is horseradish peroxidase (HRP) which, in vivo, also uses peroxide to oxidize substrates. Hence, this reaction is referred to as peroxidase-oxidase, that is, an oxidation with a peroxidase enzyme as the catalyst. The PO reaction is studied in vitro in a semibatch flow reactor (see Figure 4) with reduced nicotinamide adenine dinucleotide (NADH) as the reductant. Thus, the PO reaction as studied in vitro corresponds to the following overall reaction:

    2NADH + O2 + 2H+  --HRP-->  2NAD+ + 2H2O        (1)


Under a wide range of conditions, the concentrations of reactants (O2 and NADH), as well as some enzyme intermediates, have been found to oscillate with periods ranging from several minutes to about an hour, depending on the choice of experimental conditions. At this point, it is not known definitively whether the oscillations observed in the flow system have any bearing on behavior in vivo, although Olsen's group has recently reported51 damped oscillatory behavior in preparations made directly from horseradish root extractions. The elucidation of the mechanism of oscillatory behavior in the PO reaction proceeded along the parallel tracks of experiment and theory. In the theoretical/computational investigations, two general approaches were taken: (1) the development of small models52,53 (typically four variables) from existing biological models that exhibited generally the same type of behavior as that observed in experimental studies of the PO oscillations and (2) the development of detailed models based on rate studies of specific reaction steps. Included in the latter group are models A and C from our group,54,55 the Urbanalator from the Scheeline group,56 and the BFSO model from the Schaffer/Olsen collaboration.57 The first small model was originally proposed by Hans Degn, Lars Olsen, and John Perram52 and was based on the Lotka-Volterra model for predator-prey dynamics. This model has come to be known as the DOP model, and our group carried out extensive theoretical and computational studies of the bifurcations that carry this system from one interesting dynamic state to another. Some of these results are reviewed in a later section on the evolution of the torus attractor. Some time after the DOP model was proposed, Olsen suggested53 a slightly modified version that turns out to fit somewhat better with experimental observations. This model, given below, has come to be known as the Olsen 83 model, because he first proposed it in 1983.

    B + X      -->  2X    (k1)
    2X         -->  2Y    (k2)
    A + B + Y  -->  3X    (k3)
    X          -->  P     (k4)
    Y          -->  Q     (k5)
    X0         -->  X     (k6)
    A0        <-->  A     (k'7 forward, k-7 reverse)
    B0         -->  B     (k'8)                          (2)

In this model, A and B are the two reactants, molecular oxygen, O2(g), and NADH, respectively. A schematic drawing of the typical experimental apparatus used in flow studies of the PO reaction is given in Figure 4. As can be seen, NADH (or B in this model) is pumped into the reaction vessel, a UV-visible cuvette, in which a solution of enzyme with other cofactors and buffer has already been placed. An important cofactor used in the in vitro flow experiments is 2,4-dichlorophenol or DCP. Although its function in the oscillator mechanism is still unclear, this cofactor may play a role similar to that of the monophenols present in vivo, which are the monomers for lignin production. The second reactant, O2(g) (or A in this model), is added by blowing air across the headspace. The other two variables in this model, X and Y, were originally introduced as two (rather vague but key) intermediates, otherwise undefined. One of our investigations indicated58 that these two variables mimic the dynamics of NAD• (in the case of variable X) and a particular form of the enzyme with bound oxygen known as compound III (variable Y), but it is still not totally clear which two species (or combinations of species) best correspond to these two abstract variables in the actual mechanism. The Olsen model can be studied computationally by converting the mechanism above into a system of four coupled ordinary differential equations (ODEs) corresponding to the rates of reaction of each species. This is done by simply applying the usual laws of mass action kinetics, which results in the following system of equations:

dA/dt = k7 - k-7A - k3ABY
dB/dt = k8 - k1BX - k3ABY
dX/dt = k1BX - 2k2X² + 3k3ABY - k4X + k6
dY/dt = 2k2X² - k5Y - k3ABY    (3)

Here, k7 ≡ k′7A0 and k8 ≡ k′8B0 where A0 and B0 are the concentrations of A and B in the feed streams. To solve systems of equations such as this, one must, of course, first specify the initial values of each of the four variables, as well as values of all of the parameters (here, rate constants and feed stream concentrations). Standard numerical packages can then be used to integrate the equations; we typically use codes employing a Runge-Kutta algorithm59 to solve the equations for this particular model, because its equations do not involve the widely varying time scales found in some chemical oscillators. If different time scales are involved in the chemical oscillator mechanism, the system of ODEs is "stiff" and must be solved with a specialized technique; most investigators use the Gear algorithm60,61 in this case.

A typical result of integrating the Olsen model is shown in Figure 5 and illustrates several of the types of oscillations exhibited by this model. Here, we are varying the parameter k3 because it appears to play a role similar to that of the important cofactor DCP. Experiments carried out in collaboration with Olsen's group showed that variations in the concentration of DCP resulted in changes in oscillation pattern very similar to those observed in Figure 5 when k3 is varied.58 The sequence of oscillatory patterns can be summarized as shown in Figure 6 using a bifurcation diagram in which the maxima and minima for each oscillatory state are plotted as a function of the changing parameter. Figure 6 shows a typical period-doubling cascade from a simple oscillatory state at low k3 values through a period-two state, period-four state, and so on until chaos is reached. The broad band of black dots appearing between k3 = 0.033 and 0.038 corresponds to a number of chaotic solutions to eq 3. The chaotic band of states abruptly disappears at approximately k3 = 0.038 and is replaced by a period-three state.
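As a concrete illustration, eq 3 can be integrated with a few lines of Python. The rate constants and initial conditions below are rough, illustrative values chosen for this sketch (treat them as assumptions, not as quotations from the paper), and SciPy's non-stiff RK45 method stands in for the Runge-Kutta codes mentioned above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def olsen83(t, s, k1, k2, k3, k4, k5, k6, k7, km7, k8):
    """Right-hand side of eq 3: mass-action rate equations for A, B, X, Y."""
    A, B, X, Y = s
    dA = k7 - km7 * A - k3 * A * B * Y
    dB = k8 - k1 * B * X - k3 * A * B * Y
    dX = k1 * B * X - 2 * k2 * X**2 + 3 * k3 * A * B * Y - k4 * X + k6
    dY = 2 * k2 * X**2 - k5 * Y - k3 * A * B * Y
    return [dA, dB, dX, dY]

# Illustrative parameter values and initial conditions (assumptions for this sketch)
params = (0.35, 250.0, 0.035, 20.0, 5.35, 1e-5, 0.8, 0.1, 0.825)
sol = solve_ivp(olsen83, (0.0, 50.0), [8.0, 0.0, 0.0, 0.0],
                args=params, method="RK45", rtol=1e-8, atol=1e-10)
```

Plotting sol.y[0] against sol.t gives an oxygen time series of the kind shown in Figure 5; sweeping one rate constant and recording the oscillation maxima of A generates a bifurcation diagram of the type shown in Figure 6.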
This period-three state is considered to be one of the primary harbingers of chaotic behavior, as reflected by the title of a classic paper in the field: "Period Three Implies Chaos".62 The existence of chaotic behavior in the PO reaction stimulated a number of intriguing questions, for example: If chaos can occur in a single enzyme reaction (such as the PO


Figure 6. Bifurcation diagram for the Olsen model. As the parameter k3 is increased, the model goes through a cascade of period-doubling bifurcations leading to chaos and then to period three. Maxima of the variable A are plotted vs the corresponding k3 value. Other parameter values are as given in ref 58. (Reproduced from Steinmetz, C. G.; Geest, T.; Larter, R. J. Phys. Chem. 1993, 97, 5649-5653.)

Figure 5. Typical behavior of the Olsen 83 model. Here, the concentration of A (oxygen) is plotted vs time for three different values of the rate constant k1: (a) 0.41; (b) 0.35; (c) 0.16. See ref 53 for additional parameter values. (Reproduced from Olsen, L. F. Phys. Lett. A 1983, 94, 454-457.)

reaction), does the existence of networks of reactions involving, perhaps, multiple enzymes make chaos inevitable in vivo? (The answer is probably yes.) Is chaos a sign of health or disease? (Here the answer seems to be mixed: sometimes it is a sign of health and sometimes disease.) Is chaotic behavior inherently unstable? (The answer to this last question is a resounding no!) And, finally, does the existence and apparent ubiquitousness of chaotic behavior mean we will never be able to understand anything? In other words, if even simple models such as eq 3 produce solutions that are inherently unpredictable, why bother trying to model anything? Answering this final question has required us to develop a more precise definition of what it means to "understand" a system; it no longer means, as Newton had hoped, that when we finally understand a system well enough to write down, precisely, the equations governing its motion, its precise future state at any moment in time can be predicted. Even a deterministic system, one for which all interactions and forces governing its motion can be described exactly, might not be exactly integrable, sometimes for a large range of parameter values. The strange attractor, which underlies the resulting chaotic state in such a situation (see Figure 7 for an example from our experiments on the PO reaction), allows us to make certain general statements about where the chaotic trajectory is most likely to take the system in the future. However, small uncertainties in the initial conditions in any computation (or, certainly, any associated experiment) will always be amplified in a nonlinear system, rendering it impossible to predict its exact future trajectory. So, when we say we "understand" a nonlinear system, we mean that we are able to predict, for example, the shapes of attractors and sequences of dynamic states that produce them. An example is shown in the bifurcation sequence of Figure 6. An understanding of a nonlinear system also reveals how observables, such as oscillation shape and frequency, depend on parameters in the underlying mechanism. And, most importantly, if at the conclusion of a study of a nonlinear system we can identify the key dynamical features (i.e., key species or steps in a mechanism) that lead to the various dynamical states, we can claim that we now "understand" the system, even though it is inherently impossible to predict its future state in all cases. Only for static or periodic solutions to these systems do we have any hope of predicting future behavior precisely, and so far, it is always the case that any system that exhibits these regular, predictable solutions also exhibits, for closely related conditions, chaotic behavior that can never be predicted precisely.

Influence of Nonuniformities and Oscillations on Membrane Transport. One of my earliest research projects as a young faculty member grew out of my Ph.D. thesis, carried out under the direction of Prof. Peter J. Ortoleva at Indiana University.
Ortoleva had previously worked with John Ross, then at MIT, in the very early years of the development of nonlinear science before joining the faculty at Indiana, and I was one of his first students there. For my thesis work, we developed a theoretical model for “self-electrophoresis”,63,64 a term that we coined to describe the mechanism that we were proposing for the establishment of a transcellular electric field


Figure 7. Phase portraits for the PO reaction. Experimental data are plotted using a time delay reconstruction technique for measured oxygen concentrations. Plots shown are for [O2](t + 6 s) versus [O2](t). The different plots correspond to different concentrations of 2,4-dichlorophenol: (a) 20; (b) 25; (c) 30.8; (d) 32.2 µM. Other conditions are given in Geest, T.; Steinmetz, C. G.; Larter, R.; Olsen, L. F. J. Phys. Chem. 1992, 96, 5678.

across the fertilized egg of the alga Fucus. This transcellular field works to sort proteins in the membrane toward opposite poles of the cell, ensuring that the two resulting daughter cells are distinctly different after the first cell division. The initial development of this nonuniform electric field is, thus, the first step in the morphogenesis of Fucus and is, therefore, of key importance in the development of multicellular life from a single cell. Our approach treated the system as an example of the type of pattern-formation mechanism proposed by Alan Turing in his now classic 1952 paper entitled “On the Chemical Basis of Morphogenesis”.65 This personal note is included to point out two things: one, the extremely important influence of Turing’s ideas regarding the chemical basis of biological form and, two, an illustration of the way research questions evolve in the field of nonlinear science. In carrying out research on our project, Ortoleva and I discovered, often buried in obscure places in the literature, many other examples of nonuniform electric fields, which seemed to play key roles in developing and growing biological systems. The examples that we found ranged from transcellular fields in fertilized egg cells (such as Fucus and others66) to currents pouring from the tips of sprouting root hairs in beans, carrots, and other plants.67 Transcellular electric fields have also been observed near the tips of regenerating amphibian and rodent limbs,68-73 and there is even some indication of something similar occurring in human children.74,75 It is now standard medical practice, for example, to apply an external electric field to accelerate the knitting of broken bones.76 These and many other similar examples soon led to an intriguing question: Is there some inherent biological advantage to an organism that generates a nonuniform (as opposed to uniform) electric field near its growing or developing region? 
Does the nonuniform field (which results in an electrical current) enhance crucial biological processes in some way? One might speculate, for example, that an enhancement of transport of the raw materials needed for growth or wound healing might occur in the presence of a nonuniform field. To narrow this question to something more amenable to study, we focused on the possibility that nonuniform fields might enhance transport across a planar membrane. A purely theoretical analysis77 indicated that they should in a particular situation: when the flux associated with transport obeyed a nonlinear rate law. In the case of a linear flux equation, this analysis showed that spatial nonuniformities should have no effect on the average rate of transport; in the case of nonlinear flux laws, the transport rate can be either enhanced or reduced, depending on the specific pattern of nonuniformity. This theoretical analysis was followed by an experimental investigation78 designed to test the predictions of the model. This project was my first foray into the world of experimental science, a humbling but very important step for any theoretician or computational scientist to take, and I would suggest that every theoretician should, at least, collaborate closely with experimentalists. Experimental verification of a rather pure and clean theoretical model is the best (and maybe only) way to learn that our models are always only crude approximations to reality, and that many seemingly straightforward experimental tests of the predictions of theory are anything but! The experiment that we developed involved a membrane transport cell consisting of two compartments separated by an ion-exchange membrane; we used Nafion in our original experiments but later extended our studies to other types of membranes. Initially identical KCl solutions were placed in either compartment, and we used two arrangements (see Figure 8) of a Pt wire grid electrode on either side to drive a current through the membrane; OH- is believed to be the charge carrier here.
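The theoretical prediction behind this experiment, that spatial nonuniformity changes the average flux only when the flux law is nonlinear, can be illustrated with a toy calculation. Both flux laws below are hypothetical functions invented for this sketch (not the actual membrane kinetics), and the two imposed field profiles share the same spatial average:

```python
import numpy as np

# Two field profiles with identical spatial averages
x = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
E_uniform = np.full_like(x, 1.0)
E_nonuniform = 1.0 + 0.5 * np.sin(x)

# A linear flux law and a convex nonlinear one (both toy functions)
linear_flux = lambda E: 2.0 * E
nonlinear_flux = lambda E: E * np.exp(0.8 * E)

print(linear_flux(E_uniform).mean() - linear_flux(E_nonuniform).mean())       # ~0
print(nonlinear_flux(E_nonuniform).mean() - nonlinear_flux(E_uniform).mean()) # > 0
```

For the linear law the spatial pattern averages out exactly, while for the convex law the nonuniform field produces a larger average flux, a Jensen-inequality effect that captures the essence of the prediction tested in the experiments described next.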
The flux can, thus, be measured by following the change of pH with time in one of the compartments. Our experiments


Figure 8. Pt-wire grid electrodes used to impose uniform and nonuniform fields (reproduced from Kuntz, W. H.; Larter, R.; Uhegbu, C. E. J. Am. Chem. Soc. 1987, 109, 2582).

showed78 that the flux did, in fact, increase in the presence of a nonuniform applied electric field as compared to that measured in the presence of a uniform applied field; the increase was on the order of 8-10% depending on the magnitude of the applied potential (here about 2 V). To ensure that our analysis was valid, we imposed nonuniform fields the spatial average of which was equal to that of the uniform field with which we made the direct comparison. These intriguing experimental results encouraged us to develop a more detailed and specific theoretical model79 than the one we had initially studied to understand what causes this flux enhancement. The more detailed model involved a finite-difference solution of the Nernst-Planck equations80 for Fickian diffusion of ions:

∂ci/∂t = Di ∂²ci/∂z² + Mi ∂/∂z(ci ∂V/∂z),    i = 1, ..., N    (4)

where ci is the concentration of species i, Di is its diffusion coefficient in a one-dimensional (z) system, and Mi is its mobility defined as Mi = DiF/RT. Finally, V is the electrical potential, so the last term, involving its derivative, is proportional to the electric field. An analytically integrated form of the Nernst-Planck equations, known as the Goldman equation,81 has been used extensively in neurophysiology to describe the flux of ions in and through neuronal membranes and was used, as well, in this theoretical study. Our analysis via either the finite difference solution or the integrated Goldman form yielded the same general conclusion: the flux enhancement observed in our experiment was, indeed, due to the nonuniformity of the applied electric field. A key requirement is, again, that the system obey a nonlinear flux equation; in the Nernst-Planck formulation, the nonlinearity occurs in the second term of eq 4 involving the product of the electric field (-∂V/∂z) and the concentration profile (ci).

These early experiments and computational studies of systems involving membrane transport soon led our group to a series of experimental investigations of membrane oscillators. These studies were initially stimulated by a suggestion by John Ross that Nature selects for biochemical networks that exhibit stable oscillations or temporal patterns (such as the glycolysis system) because a higher efficiency or throughput in the network can

Figure 9. Examples of oscillatory membrane potentials observed for a Millipore filter doped with DOPH and alcohol. Different patterns are observed for variations in applied pressure: (a) 32; (b) 32.8; (c) 33.2; (d) 33.3 mmHg (reproduced from Kim, J. T.; Larter, R. J. Phys. Chem. 1991, 95, 7948).

be achieved for the oscillatory state.28 Our results on nonuniformities suggested a possibly similar role for spatial patterns (i.e., that Nature might select for these patterns because they enhance transport). Therefore, we initially set out to determine whether temporal patterns, that is, oscillations, might also enhance transport. The membrane oscillators that we studied experimentally82,83 were constructed by doping filter paper with a lipid-like substance, dioleyl phosphate or DOPH. Kenichi Yoshikawa's group (then at Nagoya University) had previously studied these and related systems84 and found interesting nonlinear behavior. We incorporated these membranes into a flow-cell arrangement, similar to that used in our earlier experiments, but now involving both an applied pressure gradient and an imposed current. Fast, often very complex, oscillations were observed in the transmembrane potential (see Figure 9 for examples of typical observed behavior). We carried out standard nonlinear dynamics analyses of these data,83 embedding the trajectories in phase space to observe the underlying attractor, constructing the next-return map, and computing the associated correlation dimension of what appeared to be chaotic oscillations in this system. We found this system to be very amenable to a nonlinear dynamics analysis and to provide ample evidence that what appears to be a simple membrane transport process can yield very exotic dynamics. Unfortunately, we never carried this research far enough to find an answer to the question which stimulated it: do temporal patterns, that is, oscillations, in membrane potential enhance transport? A theoretical investigation that I carried out at about this time in collaboration with Antonio Raudino of the University of Catania indicated85 that an oscillatory field should enhance the turnover rate for certain enzyme-catalyzed reactions. This intriguing possibility remains untested experimentally. More recent work on the origin and influence of spatial inhomogeneities on membrane transport has been carried out by several groups,86-91 and these papers should be consulted for more up-to-date information on this topic.

Calcium Signaling and Neuronal Dynamics. The reason that our group was never able to complete the experiments needed to determine whether transport is enhanced by oscillation (still an intriguing question) is that we got distracted by equally intriguing biological questions that arose in the course of our investigation. Some of the most interesting examples of oscillatory membrane potentials are those observed in neuronal systems. These electrically excitable cells exhibit action potentials (i.e., spikes of potential) and, under some conditions, sustained oscillatory behavior, that is, trains of action potentials. While neurons are perhaps the most intriguing cell type that exhibits membrane potential oscillations, other types of cells do as well: cardiac cells, for example, particularly those in the sinoatrial node or natural pacemaker center for the heart, also exhibit rhythmically pulsing membrane potentials.29 Another example is the so-called β-islet cells in the pancreas, which exhibit complex bursting oscillations1,2,92 of potential, which are concomitant with oscillatory calcium concentrations in the cell interior. This bursting behavior, furthermore, seems to be associated with pulsatile insulin secretion by the pancreas.
In the early 1990s, a large number of reports began to appear in the literature of interesting oscillatory calcium dynamics in many different cell types.5,6,9,93-95 In addition to oscillatory calcium signaling, it had also been observed at about this time that some cells (e.g., Xenopus oocyte20) develop spiral waves of calcium that are strikingly similar to those observed in well-studied chemical oscillators,93 such as the Belousov-Zhabotinsky reaction. The mechanisms that were being proposed and explored at the time (the calcium-induced calcium-release, CICR, model proposed by Berridge, for example) were determined by looking for possible mechanistic steps that might dynamically mimic the important mechanistic steps in an oscillatory reaction. The elucidation of these calcium signaling mechanisms was, then, positively influenced by the insights arrived at from extensive previous studies of chemical oscillators. Calcium signaling is a widespread phenomenon of extreme importance in biological function. Calcium bursts within the cell serve to transmit information from the cell surface, where receptors may detect the existence of hormones or other extracellular signals, to the cell interior. Calcium bursts turn on many important processes within the cell,3,96-98 not the least of which is protein synthesis, which is triggered when a calcium signal is detected by the nucleus. Because calcium signaling so often occurs in a pulsatile, that is, oscillatory, manner, it has been suggested that the temporal pattern may, somehow, carry information99 that allows calcium to go beyond serving as a mere switch. In other words, the frequency or burst pattern of the oscillations may contain a message.
Calcium signaling is extremely important in all cell types (muscle, liver, cardiac, egg, etc.) but plays a crucial role in neuronal systems, particularly in the formation and strengthening of synapses.100-102 While many of the molecular and cellular events involved in synapse formation are still not well-understood, it is generally agreed that the role of calcium pulses is central. Nevertheless, the origin of these calcium pulses in neurons has not often been looked at through the eyes of a nonlinear dynamicist. A nonlinear dynamicist would focus on the calcium oscillations so central to proper synaptic communication and consider them as indicative of a possible underlying chemical oscillator. Their source, then, would probably reside in a positive feedback process as part of a mechanism that is similar (dynamically) to that observed in other chemical oscillators. Furthermore, the role that this potential chemical oscillator plays in the network of feedback processes that govern the intricate dynamics of the brain should be critical, but the role of these chemical oscillators in the brain has not been widely investigated at this time.

Positive and negative feedback processes abound in our brains.103 Excitatory neuronal connections mediated by neurotransmitters such as glutamate are the most common positive feedback elements; inhibition exists when other types of neurotransmitters (such as γ-aminobutyric acid, GABA) or even other types of receptors are operative. Because each neuron can, in principle, form thousands of connections of both types with other neurons, the situation can be very complex indeed. One might despair at ever being able to sort out the multitude of interactions present in such a system. Nonlinear dynamics can really help in a situation like this, because it is a formalism particularly well-suited to sorting through a complex system and determining the important mechanistic features that give rise to complex behavior.

One of our group's first attempts to apply nonlinear dynamics techniques to the very complex system of the brain was done in collaboration with Dr. Robert Worth, a neurosurgeon and biophysicist at our medical school. Dr.
Worth’s clinical practice specializes in the surgical treatment of patients with a form of epilepsy that is associated with a small region of abnormal tissue (generally resulting from injury or oxygen deprivation), usually located in the right temporal lobe.104-106 The patients who end up in surgery have been treated with progressively stronger antiseizure medications, all of which have ceased to control their seizures.107 The impetus for our work was to better understand the mechanisms by which these types of seizures arise and spread to contribute to the development of possible new medical interventions that might avoid this drastic surgery (which is not without serious side effects108). Our research involved the development of a model109 that emphasized the balance of positive and negative feedback elements in a region of the temporal lobe. Clinicians had long believed that patients who exhibited complex partial epileptic seizures, as Dr. Worth’s patients do, suffered from an imbalance of positive and negative feedback,110 in short, the positive feedback is either too strong or the negative feedback (inhibition) too weak. This leads to a run-away effect causing the amplification of abnormal signaling in the damaged tissue. The process by which this amplified signal spreads through the otherwise normal tissue of the brain to eventually recruit all neurons in the entire brain to fire along with it is still very much unknown. 
The model that we explored was a variant of that proposed over 50 years ago by Hodgkin and Huxley.111,112 The original Hodgkin-Huxley model involved four variables, but a two-variable reduction retains many of its same features and is sufficient for the description of neuronal dynamics of the type that we were considering.113 We added a third variable to model a population of inhibitory neurons so that we could test the hypothesis regarding the need for balance between excitation and inhibition.114 The resulting set of ordinary differential equations is as follows:


dVi/dt = gCa m∞(Vi - 1) - gK Wi(Vi - ViK) - gL(Vi - VL) + I - αinh Zi
dWi/dt = φ(w∞ - Wi)/τw
dZi/dt = b[cI + αexc Vi]    (5)

The first two equations model the dynamics of a population of pyramidal neurons; the variable Vi is the average membrane potential at lattice site “i” in this model, while Wi is the fraction of open potassium channels in these neurons. The first two equations are essentially the two-variable Hodgkin-Huxley model; the third equation describes the membrane potential Zi of a group of inhibitory neurons connected synaptically to this first set. The parameters in this model are Nernst potentials, ion conductances, and synaptic strengths; their precise definitions can be found in the original paper.109 We were able to show that the hypothesis regarding a balance of excitation and inhibition generally held, although the model, like all models of this type, produces complex dynamics for a wide range of conditions. The more intriguing question involved the spread of an abnormal signal through healthy tissue. What sorts of processes, perhaps synaptic in nature perhaps not, might contribute to this spread? It is known clinically that the spread of a seizure is relatively slow, at least compared to the speed at which an action potential travels along an axon. Some investigators had suggested that a phenomenon known as spreading depression (essentially a slow-moving wave of K+, which tends to depolarize the neurons in its wake, reducing their tendency to fire) was similar to, but opposite, that of a spreading wave of excitation.115-118 We developed a spatially extended model to test the possibility that potassium ion diffusion might be involved.109 Equation 5 was defined on a lattice, as discussed above, and coupled through the Nernst potential for potassium (ViK), which is taken to be proportional to the time-average potential of the pyramidal cells located at the nearest-neighbor lattice sites:

V̄j = (1/tint) ∫t^(t+tint) Vj(t′) dt′    (6)
Here, the parameter tint defines the interval of time that the equations at each lattice site, eq 5, are allowed to run autonomously before a stepwise potassium diffusion event is allowed to occur. One of the advantages of computational science is that the investigator can test scenarios that would be impossible to realize experimentally, and what we did in this case was to allow the diffusion velocity (i.e., the inverse of the time tint) for potassium to vary over ranges of values that would be unrealistic in the brain at a relatively constant temperature of 37 °C. At a certain critical diffusion velocity, we found that our model developed “seizures”, that is, synchronized firing patterns that covered the entire “model brain”. Below this velocity, that is, at lower coupling strength values, we found normal, that is, nonsynchronized behavior. Interestingly, the switch-over point was a velocity very close to that of spreading depression, as earlier investigators had suggested. The lay press found these results intriguing119-122 because our results suggested a mechanism for an otherwise puzzling medical phenomenon: patients with temporal lobe epilepsy do not, of course, have seizures continuously but only at random
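The stepwise coupling procedure just described can be sketched for a single lattice site. Everything numerical here is an assumption made for illustration: the sigmoidal forms chosen for m∞ and w∞, every parameter value, and the proportionality constant linking a neighbor's potassium Nernst potential to the time-averaged potential; none are taken from ref 109.

```python
import numpy as np

# Assumed sigmoidal activation curves (illustrative, not from ref 109)
def m_inf(V):
    return 0.5 * (1.0 + np.tanh((V + 0.01) / 0.15))

def w_inf(V):
    return 0.5 * (1.0 + np.tanh((V - 0.05) / 0.10))

def step(V, W, Z, VK, dt, gCa=1.1, gK=2.0, gL=0.5, VL=-0.5,
         I=0.3, a_inh=0.5, a_exc=0.5, phi=0.7, tau_w=1.0, b=0.05, c=0.1):
    """One forward-Euler step of eq 5 at a single lattice site."""
    dV = gCa * m_inf(V) * (V - 1.0) - gK * W * (V - VK) \
         - gL * (V - VL) + I - a_inh * Z
    dW = phi * (w_inf(V) - W) / tau_w
    dZ = b * (c * I + a_exc * V)
    return V + dt * dV, W + dt * dW, Z + dt * dZ

def run_interval(V, W, Z, VK, t_int=1.0, dt=1e-3):
    """Run a site autonomously for t_int, returning the final state and
    the time-averaged potential V-bar of eq 6."""
    n = int(round(t_int / dt))
    Vs = np.empty(n)
    for i in range(n):
        V, W, Z = step(V, W, Z, VK, dt)
        Vs[i] = V
    return V, W, Z, Vs.mean()

# One coupling update: a neighbor's potassium Nernst potential is set
# proportional to this site's time-averaged potential
V, W, Z, Vbar = run_interval(-0.3, 0.1, 0.0, VK=-0.7)
neighbor_VK = 0.2 * Vbar
```

Iterating run_interval over all lattice sites and resetting each ViK from its neighbors' time-averaged potentials after every interval reproduces the stepwise potassium coupling scheme; shrinking tint corresponds to increasing the effective diffusion velocity.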

intervals. This is difficult to understand given the fact that the focal abnormality is always present. Our model implicated some sort of coupling mechanism involving a variable velocity, which would explain why, when the velocity passes through a critical value, a seizure may come on unexpectedly. What this variable velocity is, however, is still unknown. Also interesting is the fact that this critical velocity, while similar to that of spreading depression, is also close to that of calcium waves recently observed in brain tissue samples.18,123,124 These calcium waves actually travel through a tightly linked field of cells known as glia, which surround the neurons. Glia were traditionally thought to only provide nourishment and mop up excess neurotransmitter125 around neurons, that is, play a role that is merely supportive of the "active" cells. Very exciting new results indicate that these glia cells are actually active participants in communication processes within the brain.124,126-133 Application of neurotransmitter to glial cells elicits a response in the form of an oscillatory calcium signal and concomitant waves.134 Intriguingly, one type of glial cell, the astrocyte, can also secrete neurotransmitter,96,135-141 so a very interesting hypothesis has arisen in the literature: neurons (comprising approximately 20% of brain cells) and glia (the other 80%) are actually involved in bidirectional communication processes127,128,130,132,142 in normal brain function. Our modeling results suggested to us another possibility: when these communication processes go awry, neurological problems, such as seizures, might be the result. Of special interest to our research group is the possibility that a chemical oscillator involving calcium might be at the heart of all of these important neurological phenomena, both disease states and healthy functions of the brain.
Recent work in our group143 has shown that calcium oscillations in glial cells may be influenced by a second important feedback mechanism: a glutamate-induced glutamate-release (GIGR) process similar dynamically to Berridge's original CICR mechanism. Because the glutamate released by glial cells can be detected by receptors on nearby neurons, this process may play an important role in neuron-astrocyte communication. The proposed GIGR mechanism provides a communication link between neurons and cells long thought to be inactive in the central nervous system. Our results, further, show that the glutamate feedback process may serve to drive the internal calcium oscillator in glia into complex bursting modes, rendering the glial cells capable of encoding information.144 Bursting oscillations in cytoplasmic calcium have been implicated in an important step in synapse formation: the ratcheting up in activity of the CaMKII enzyme, which is an important component of the calcium signal decoding machinery.99 When calcium bursts occur, the enzyme is able to reach higher levels of activation (phosphorylation) with each subsequent high-frequency oscillation of calcium;102,145,146 thus, a burst with only two high-frequency oscillations would not lead to as high an activation level as a burst with three high-frequency oscillations. The role that this chemical oscillator plays in the brain, then, may be of central importance in synapse formation and, hence, fundamental to the way we think and learn.

Development of Mathematical Tools to Explore Rhythmic Behavior. Although the research projects in which our group has participated have been generally focused on specific biological questions (oscillatory behavior in a key reaction sequence in woody plants, the PO reaction, for example, or calcium signaling in glial cells in the brain, just two of the specific problems on

which we have worked), the end results of this research have often been mathematical tools or theoretical insights, or both, that have a wider applicability beyond the specific biological system that was used as a vehicle for our study. Some of our early work on mathematical tools in nonlinear science contributed to the development of sensitivity analysis147-151 and stoichiometric network analysis,152 particularly in the area of application to chemical kinetics that lead to oscillatory behavior. Space considerations prevent a detailed discussion of this older work, so I will concentrate on only one topic in this category: the role of a torus attractor in the development of chemical chaos.

The Quasiperiodic Route to Chaos. At the time that we were investigating the dynamics of the PO reaction both experimentally and theoretically, one of the main questions in the chemical oscillator community was this: what leads to chaos in a chemical system? The routes by which chaos arose in other systems did not always correspond, in detail, to the routes to chaos observed in chemical systems. For example, one of the simplest models that gives rise to chaos, the logistic map, shows chaos arising by a simple cascade of period-doubling bifurcations,153,154 first period-two, then period-four, and then period-eight, until eventually the period has doubled an infinite number of times and we have chaos. These period-doubling cascades were observed in both models and experimental studies of chemical oscillators, but the sequence of bifurcations was not as simple as that in the logistic map. Also observed along the way to chemical chaos were so-called quasiperiodic oscillations (i.e., those confined to the surface of a toroidal attractor) and mixed-mode oscillations. The latter are periodic states with an often very high degree of complexity; the repeating unit in one of these states can consist of dozens of alternating large and small peaks. 
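The period-doubling cascade of the logistic map is easy to reproduce numerically. The sketch below (standard textbook parameter values for this map, nothing specific to the PO reaction) iterates the map past its transient and then measures the period of the attractor:

```python
def logistic(x, r):
    """One iterate of the logistic map x -> r*x*(1-x)."""
    return r * x * (1.0 - x)

def attractor_period(r, n_transient=2000, n_test=256, tol=1e-6):
    """Iterate past the transient, then return the smallest period of the
    attractor (0 if no period shorter than n_test // 2 is found, e.g. chaos)."""
    x = 0.5
    for _ in range(n_transient):
        x = logistic(x, r)
    orbit = [x]
    for _ in range(n_test):
        orbit.append(logistic(orbit[-1], r))
    for p in range(1, n_test // 2):
        if abs(orbit[-1] - orbit[-1 - p]) < tol:
            return p
    return 0

# The cascade: period 1 -> 2 -> 4 -> 8 -> ... as r increases toward chaos
for r in (2.9, 3.2, 3.5, 3.55, 3.7):
    print(r, attractor_period(r))
```

The doubling parameter values crowd together geometrically (Feigenbaum's universal scaling), so the full cascade is completed, and chaos reached, at a finite value of r, near 3.57 for this map.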
It is truly amazing that the chemical reaction can, somehow, “remember” this long, complex pattern and repeat it perfectly many times, but this is precisely what happens, as has been demonstrated over and over both experimentally and in computational simulations.155-159 Even more intriguing, these mixed-mode states are often arranged in an interesting sequence known as a Farey sequence or the closely related devil’s staircase;160,161 these sequences have a basis deep within number theory, as illustrated by the fact that the steps within the staircase follow the Fibonacci sequence. One of the earliest proposed models of the PO reaction, the DOP model,52 provided some interesting insights into the relationship between mixed-mode oscillations and chaos in chemical oscillators and helped us, and others, understand a little more about the way chaos develops in chemical systems. This four-variable model, similar to the Olsen 83 model53 described earlier, reproduces a range of behaviors observed experimentally for this reaction, including simple (single-frequency) oscillations, bursting oscillations (in which high-frequency spikes are superimposed on a lower-frequency envelope), and mixed-mode oscillations comprised of complex periodic sequences of large- and small-amplitude peaks. In addition, chaos is observed over certain windows of parameter values, often found between the mixed-mode states, that is, between the steps in the devil’s staircase. A careful analysis of the bifurcations that arise in the DOP system was carried out by Curt Steinmetz, then a doctoral student in the group. He located the curves in parameter space at which Hopf bifurcations occur using a computer program known as AUTO,162 which, by then, was a standard analysis tool in the field. This code uses a technique known as the

continuation method163 to follow bifurcations through parameter space and, hence, to separate regions in which one sees various stable behaviors, for example, steady state from limit cycle from quasiperiodic behavior, etc. A primary Hopf bifurcation occurs when a steady state becomes unstable and gives rise to a simple limit cycle; we found, within the primary Hopf curve, a curve corresponding to a secondary Hopf bifurcation.164 When this secondary Hopf occurs, a second frequency is stabilized corresponding to a limit cycle that is perpendicular to the first in phase space; the result is a torus attractor. Trajectories confined to the surface of the torus can be either quasiperiodic (if the two frequencies are incommensurate) or periodic (if they are commensurate). Chaotic behavior was always observed to arise after the bifurcation to the torus; hence, the route to chaos observed in this and other similar systems has come to be known as the quasiperiodic route to chaos. Earlier theories had suggested32 that chaos might arise from a torus attractor if a third Hopf were to occur, forming a hypertorus; however, no evidence for such a transition has been observed, at least in chemical systems exhibiting chaos. Rather, we confirmed a transition to chaos involving a wrinkling, breaking, and eventual fractalization of the torus attractor;164 this sequence must now be considered a universal one, because it has been observed in several experimental settings as well as in quite a few models.157-159,165-168 The sequence is easily visualized by taking a slice, called a Poincaré surface of section, through one arm of the torus; the sequence found for the DOP system is shown in Figure 10. The parameter that is varied in this sequence is k1, the rate constant for the first reaction step, which is considered to be proportional to the amount of catalyst (peroxidase enzyme). 
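The commensurate/incommensurate distinction is easy to demonstrate numerically. In the sketch below (a generic two-frequency flow, not a DOP simulation), the phase of the second oscillation is sampled once per cycle of the first, a stroboscopic stand-in for the Poincaré surface of section. A rational winding number (the ratio of the two frequencies) leaves finitely many points on the section, while an irrational one fills the circle densely:

```python
import numpy as np

def section_points(winding, n=500):
    """Phase of the second oscillation each time the first completes a cycle.
    For motion on a torus this is a rigid rotation by the winding number,
    so the n-th crossing sits at (n * winding) mod 1 on the section."""
    return np.mod(winding * np.arange(n), 1.0)

def distinct_points(points, tol=1e-9):
    """Cluster-count the section: how many distinct points appear."""
    pts = np.sort(points)
    return 1 + int(np.sum(np.diff(pts) > tol))

locked = distinct_points(section_points(2.0 / 5.0))            # commensurate
quasi = distinct_points(section_points((np.sqrt(5) - 1) / 2))  # golden mean

print(locked)  # 5: a periodic (mode-locked) state on the torus
print(quasi)   # 500: quasiperiodic; the section fills in densely
```

With 500 crossings the golden-mean case produces 500 distinct points, and longer runs would keep adding new ones: the quasiperiodic orbit never closes, which is exactly what the smooth, filled-in cross sections of the torus represent.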
The four stages of the torus attractor correspond to four different stages of behavior. In the first stage, quasiperiodic behavior is observed, with trajectories traveling over a smooth torus. In the second, the torus becomes wrinkled; this wrinkling is not a true bifurcation, because the same type of oscillation (i.e., quasiperiodic) occurs on the wrinkled torus as on the smooth one. However, the wrinkling corresponds to a striking transition in the circle map extracted from the cross section in Figure 10. The circle map is easy to construct: an origin is placed at the center of the Poincaré section so that an angle can be assigned to every point along the section. The circle map is just the plot of the angle of the (n + 1)th crossing against that of the nth. When the torus wrinkles, the circle map develops a bump. In the next stage (the fractal torus stage), this bump turns into an inflection point, indicating a region of noninvertibility in the map and, hence, the possibility of chaos. So, the third stage of the torus attractor, the so-called fractal torus, in which the wrinkles have become so severe that the surface is no longer smooth and hence no longer defined as a two-dimensional object, corresponds to the transition to chaos. In the fourth stage, the broken torus, the chaotic states are now found interspersed between a large number of mixed-mode states; chaotic behavior also takes up a larger amount of the parameter space in the broken torus stage. We found that the mixed-mode states were arranged in incomplete Farey sequences156 and that chaos arose when one (often very complex) state went through a period-doubling cascade into chaos. The existence of the Farey sequence had, by this point, been observed in other systems,158,159,165-168 but it was not easy to see the connection between chaos and the sequence. 
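The standard sine circle map provides a minimal caricature of this loss of invertibility; the sketch below uses that textbook map, not the circle map actually extracted from the DOP attractor. Its slope is 1 + K cos(2πθ): for K < 1 the map is invertible, at K = 1 an inflection point of zero slope appears, and for K > 1 the slope goes negative, so the map is noninvertible and chaos becomes possible. Mode-locked states, whose rational winding numbers organize into the Farey/devil's-staircase structure discussed above, are easy to find as well:

```python
import math

def circle_map_step(theta, omega, k):
    """Lift of the sine circle map:
    theta -> theta + omega + (K / 2 pi) * sin(2 pi theta)."""
    return theta + omega + (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)

def min_slope(k, samples=10_000):
    """Minimum of d(theta')/d(theta) = 1 + K cos(2 pi theta) over the circle."""
    return min(1 + k * math.cos(2 * math.pi * i / samples) for i in range(samples))

def winding_number(omega, k, n=20_000):
    """Average rotation per iterate; a rational value signals mode locking."""
    theta = 0.0
    for _ in range(n):
        theta = circle_map_step(theta, omega, k)
    return theta / n

print(min_slope(0.5))            # positive: invertible, no chaos possible
print(min_slope(1.5))            # negative: noninvertible, chaos allowed
print(winding_number(0.5, 0.9))  # ~0.5: locked on the 1/2 step of the staircase
```

Below K = 1 the locked (rational) steps and quasiperiodic (irrational) gaps coexist without chaos; at and beyond the critical line the locked steps overlap, mirroring the wrinkled-to-fractal transition of the torus described above.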
The behavior exhibited by the DOP torus attractor in its fractal and broken stages is strikingly similar to that found in many other systems: the BZ159,165,166,168 and other chemical oscillators169


Figure 10. Four stages of the torus attractor for simulations with the DOP model. The torus is shown in cross section as the rate constant k1 is varied: (a) 0.001; (b) 0.036; (c) 0.0426; (d) 0.0882. The torus in stage a is a normal, smooth torus but becomes wrinkled in stage b, fractal in stage c, and broken in stage d. Chaotic behavior is observed in stages c and d but not in stages a and b (reproduced from Steinmetz, C. G.; Larter, R. J. Chem. Phys. 1991, 94, 1388).

and several electrochemical systems.157,167 Groups involved in the experimental study of these systems included those of Harry Swinney, Jerzy Maselko, and Irv Epstein, among others. Mark Schell, Dwight Barkley, John Ringland, and others contributed, as well, to the theoretical understanding of this important transition to chaotic behavior.

Conclusions

My interest in understanding the origins and efficacy of biological rhythms grew out of an earlier, and broader, curiosity about the incredible diversity of form and dynamic behavior observable in living systems. As a graduate student in physical chemistry, I initially indulged this curiosity by following the traditional route of study (downward in Figure 1), delving deeper and deeper into the fundamentals of molecular and atomic structure. It was my hope, of course, that by doing so I might get a glimpse of the origin of this incredible diversity, the most

striking characteristic of life in the world around us. However, I soon realized that this hope was not to be satisfied and, rather, found myself awed, and quickly overwhelmed, by the complexity of nature even at the near-bedrock level, that of atoms and molecules. Even at this “low” level of organization, the complexity of nature is astounding. As a young student interested in exploring the complexity of life, I was discouraged by what seemed to be the necessity to narrow my focus, to zero in on smaller and smaller slices of the universe, before I could even begin to consider the question of how the diversity of atomic structure and molecular form gets assembled into a single living cell, not to mention an entire organism. I also wondered what my chosen field of study, physical chemistry, had to say about the existence of the overwhelming complexity seen in living systems; even the basic concepts of thermodynamics seemed to argue against the evidence presented by my own eyes: that living things definitely exist and are very, very complex, both spatially and temporally. Until Ilya Prigogine and Gregoire Nicolis, along with many other early workers in the field of nonlinear science, finally explained why the Second Law of Thermodynamics is not inconsistent with the development of pattern and form170,171 in thermodynamically open systems (such as living cells or even whole organisms), the complexity of life was considered to be something that was outside our realm of understanding as physical scientists. 
Prigogine and Nicolis’ work (for which Prigogine was awarded the Nobel Prize in 1977) showed that the Second Law did not forbid the existence of the kind of structural and dynamic diversity that we see in biology, but they also cautioned that thermodynamics alone could not explain where all this diversity came from, either.171 The timing of this particular prize, awarded during my second year in graduate school, caught my interest and directed my attention toward a brand new field of study that spanned the areas of physics, physical chemistry, and several other disciplines; we would eventually refer to this field of study as nonlinear science or, more recently, as complexity theory.172,173 Research in this field typically covers a broad range of disciplines, as reflected by the diverse backgrounds of participants at conferences in the area. It is not uncommon to find physical chemists working alongside engineers, mathematicians, biologists, medical doctors, and even social scientists in this field. Nevertheless, many of the early fundamental studies of the physicochemical mechanisms that lead to spatial and temporal pattern formation in systems far from thermodynamic equilibrium were, in fact, carried out by physical chemistry groups. These early fundamental studies have broadened to now include applications to many different fields in chemistry (catalysis and atmospheric chemistry, for example) and beyond (biological development, neuroscience, economics, and so forth). It is, of course, impossible to do justice to such a broad area of investigation in one short article, but I hope I have whetted the reader’s appetite to learn more about these topics. Many important papers in the field are published in this very journal and in similar journals devoted to research in physical chemistry, but several newer specialty journals also exist (such as Chaos, the International Journal of Bifurcation and Chaos, Advances in Complex Systems, etc.). 
Since the early work of Prigogine, Nicolis, and others, which laid the thermodynamic foundation for this field, many additional theoretical and conceptual tools for understanding the origin and evolution of complexity, both temporal and spatial, have been developed. In many aspects, though, much mystery still remains; we are only beginning to scratch the surface in

our search for explanations of the origins of what, in one way of looking at it, is really the secret of life itself. To that end, we can only hope to make our best attempt and enjoy the process of discovery, but we will probably never find the full answer to the intriguing question that Schrödinger174 once posed: “What is life?”

Acknowledgment. Many people must be acknowledged for contributing to the work described here. First and foremost are my mentors, Profs. Peter Ortoleva and Herschel Rabitz, who introduced me to this area and gave me guidance when it was most needed. My students have contributed not only in terms of hard work but also by asking good questions and having creative ideas at crucial times; these individuals include B. Aguda, C. Steinmetz, J. Kim, W. Kuntz, M. Klein, T. Geest, B. Speelman, J. Wu, P. Shen, D. Thompson, S. Hemkin, N. van Riesenbeck, L. Doepken, M. Glendening, R. Tuggle, T. Lonis, J. Patel, P. Dempsey, J. Ambern, A. Tomlinson, C. Uhegbu, M. Albrecht, A. Hyre, M. Coward, L. Seagraves, P. Oatts, C. Bush, D. Schwomeyer, L. Cook, P. Eyster, and R. Tinsley. Finally, I gratefully acknowledge financial support over the years from Research Corporation, the Petroleum Research Fund, the National Science Foundation, and the IUPUI Research Investment Fund.

References and Notes

(1) Gilon, P.; Shepherd, R.; Henquin, J.-C. J. Biol. Chem. 1993, 268, 22265.
(2) Longo, E. A.; Tornheim, K.; Deeney, J. T.; Varnum, B. A.; Tillotson, D.; Prentki, M.; Corkey, B. E. J. Biol. Chem. 1991, 266, 9314.
(3) Berridge, M. J.; Cobbold, P. H.; Cuthbertson, K. S. R. Philos. Trans. R. Soc. London, Ser. B 1988, 320, 325.
(4) Berridge, M. J.; Galione, A. FASEB J. 1988, 2, 3074.
(5) Berridge, M. J. J. Biol. Chem. 1990, 265, 9583.
(6) Cobbold, P. H.; Sanchez-Bueno, A.; Dixon, C. J. Cell Calcium 1991, 12, 87.
(7) Cuthbertson, K. S. R.; Chay, T. R. Cell Calcium 1991, 12, 97.
(8) Modelling the dynamics of biological systems: Nonlinear phenomena and pattern formation; Mosekilde, E., Mouritsen, O. G., Eds.; Springer-Verlag: Berlin, 1995; Vol. 65, p 294.
(9) Goldbeter, A. Biochemical oscillations and cellular rhythms; Cambridge University Press: Cambridge, U.K., 1996.
(10) Hess, B.; Boiteux, A. Annu. Rev. Biochem. 1971, 40, 237.
(11) Hess, B. Q. Rev. Biophys. 1997, 30, 121.
(12) Larter, R.; Olsen, L. F.; Steinmetz, C. G.; Geest, T. Chaos in biochemical systems: the peroxidase reaction as a case study. In Chaos in Chemical and Biochemical Systems; Field, R. J., Györgyi, L., Eds.; World Scientific Press: Singapore, 1993; p 175.
(13) Scheeline, A.; Olson, D. L.; Williksen, E. P.; Horras, G. A.; Klein, M. L.; Larter, R. Chem. Rev. 1997, 97, 739.
(14) Epstein, I. R.; Pojman, J. A. An Introduction to Nonlinear Chemical Dynamics: Oscillations, Waves, Patterns and Chaos; Oxford University Press: Oxford, U.K., 1998.
(15) Nicolis, G. Introduction to Nonlinear Science; Cambridge University Press: Cambridge, U.K., 1995.
(16) Anderson, P. W. Science 1972, 177, 393.
(17) Field, R. J.; Burger, M. Oscillations and traveling waves in chemical systems; John Wiley & Sons: New York, 1985.
(18) Harris-White, M. E.; Zanotti, S. A.; Frautschy, S. A.; Charles, A. C. J. Neurophysiol. 1998, 79, 1045.
(19) Tyson, J. J.; Murray, J. D. Development 1989, 106, 421.
(20) Lechleiter, J.; Girard, S.; Peralta, E.; Clapham, D. Science 1991, 252, 123.
(21) Feigin, A. M.; Konovalov, I. B. J. Geophys. Res. 1996, 101, 26023.
(22) Field, R. J.; Hess, P. G.; Kalachev, L. V.; Madronich, S. J. Geophys. Res., [Atmos.] 2001, 106, 7553.
(23) Johnson, B. R.; Scott, S. K.; Tinsley, M. R. J. Chem. Soc., Faraday Trans. 1998, 94, 2709.
(24) Nitzan, A.; Ortoleva, P.; Deutch, J.; Ross, J. J. Chem. Phys. 1974, 61, 1056.
(25) Nitzan, A.; Ortoleva, P.; Ross, J. Proc. Faraday Symp., Chem. Soc. 1974, 9, 241.
(26) Rapp, P. E. J. Exp. Biol. 1979, 81, 281.
(27) Mair, T.; Müller, S. C. J. Biol. Chem. 1996, 271, 627.
(28) Richter, P. H.; Ross, J. Science 1981, 211, 715.

(29) Glass, L.; Mackey, M. From Clocks to Chaos: The Rhythms of Life; Princeton University Press: Princeton, NJ, 1988.
(30) Chemical Waves and Patterns; Kapral, R., Showalter, K., Eds.; Kluwer Academic Publishers: Amsterdam, 1995.
(31) Murray, J. D. Mathematical Biology, corrected second printing ed.; Springer-Verlag: New York, 1990.
(32) Ott, E. Chaos in Dynamical Systems; Cambridge University Press: Cambridge, U.K., 1993.
(33) Scott, S. K. Chemical Chaos; Oxford University Press: Oxford, U.K., 1991.
(34) Bronnikova, T. V.; Schaffer, W. M.; Olsen, L. F. J. Phys. Chem. B 2001, 105, 310.
(35) Hauser, M. J. B.; Kummer, U.; Larsen, A. Z.; Olsen, L. F. Faraday Discuss. 2001, 120, 215.
(36) Kirkor, E. S.; Scheeline, A. Eur. J. Biochem. 2000, 267, 5014.
(37) Kirkor, E. S.; Scheeline, A.; Hauser, M. J. B. Anal. Chem. 2000, 72, 1381.
(38) Kirkor, E. S.; Scheeline, A. J. Phys. Chem. B 2001, 105, 6278.
(39) Olsen, L. F.; Lunding, A.; Lauritsen, F. R.; Allegra, M. Biochem. Biophys. Res. Commun. 2001, 284, 1071.
(40) Schaffer, W. M.; Bronnikova, T. V.; Olsen, L. F. J. Phys. Chem. B 2001, 105, 5331.
(41) Peroxidases in Chemistry and Biology; Everse, J., Grisham, M. B., Everse, K. D., Eds.; CRC Press: Cleveland, OH, 1991.
(42) Halliwell, B. Planta 1978, 140, 81.
(43) Mäder, M.; Füssl, R. Plant Physiol. 1982, 70, 1132.
(44) Mäder, M.; Amberg-Fisher, V. Plant Physiol. 1982, 70, 1128.
(45) Pantoja, O.; Willmer, C. M. Planta 1988, 174, 44.
(46) Peters, A.; Palay, S. L.; Webster, H. d. F. The Fine Structure of the Nervous System: Neurons and Their Supporting Cells, 3rd ed.; Oxford University Press: New York, 1991.
(47) Dunford, H. B. Horseradish peroxidase: structure and kinetic properties. In Peroxidases in Chemistry and Biology; Everse, J., Everse, K. E., Grisham, M. B., Eds.; CRC Press: Boca Raton, FL, 1991; Vol. 2, p 1.
(48) Olsen, L. F.; Degn, H. Nature (London) 1977, 267, 177.
(49) Olsen, L. F. Z. Naturforsch. 1979, 34A, 1544.
(50) Orbán, M.; DeKepper, P.; Epstein, I. R.; Kustin, K. Nature 1981, 292, 816.
(51) Moller, A. C.; Hauser, M. J. B.; Olsen, L. F. Biophys. Chem. 1998, 72, 63.
(52) Degn, H.; Olsen, L. F.; Perram, J. W. Ann. N. Y. Acad. Sci. 1979, 316, 623.
(53) Olsen, L. F. Phys. Lett. A 1983, 94, 454.
(54) Aguda, B. D.; Larter, R. J. Am. Chem. Soc. 1990, 112, 2167.
(55) Aguda, B. D.; Larter, R. J. Am. Chem. Soc. 1991, 113, 7913.
(56) Olson, D. L.; Williksen, E. P.; Scheeline, A. J. Am. Chem. Soc. 1995, 117, 2.
(57) Bronnikova, T. V.; Fed’kina, V. R.; Schaffer, W. M.; Olsen, L. F. J. Phys. Chem. 1995, 99, 9309.
(58) Steinmetz, C. G.; Geest, T.; Larter, R. J. Phys. Chem. 1993, 97, 5649.
(59) Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. Numerical Recipes: The Art of Scientific Computing, 2nd ed.; Cambridge University Press: Cambridge, U.K., 1988.
(60) Hindmarsh, A. C. ACM Signum Newsl. 1980, 15, 10.
(61) Shampine, L. F.; Gear, C. W. SIAM Rev. 1979, 21, 1.
(62) Li, T. Y.; Yorke, J. A. Am. Math. Mon. 1975, 82, 985.
(63) Larter, R.; Ortoleva, P. J. Theor. Biol. 1981, 88, 599.
(64) Larter, R.; Ortoleva, P. J. Theor. Biol. 1982, 96, 175.
(65) Turing, A. Philos. Trans. R. Soc. 1952, 237, 32.
(66) Jaffe, L. F. Control of development by ionic currents. In Membrane Transduction Mechanisms; Cone, R. A., Dowling, J. E., Eds.; Raven Press: New York, 1979; Vol. 33, p 199.
(67) Scott, B. I. H. Ann. N. Y. Acad. Sci. 1962, 98, 890.
(68) Bodemer, C. W. Anat. Rec. 1964, 148, 441.
(69) Borgens, R. B.; Vanable, J. W.; Jaffe, L. F. Proc. Natl. Acad. Sci. U.S.A. 1977, 74, 4528.
(70) Libbin, R. M.; Person, P.; Papierman, S.; Shah, P.; Nevid, D.; Grob, H. J. Morphol. 1979, 159, 427.
(71) Person, P.; Libbin, R. M.; Shah, D.; Papierman, S. J. Morphol. 1979, 159, 427.
(72) Smith, S. D. Anat. Rec. 1967, 158, 89.
(73) Smith, S. D. Ann. N. Y. Acad. Sci. 1974, 238, 500.
(74) Illingworth, C. M. J. Pediatr. Surg. 1974, 9, 853.
(75) Douglas, B. S. Aust. Paediatr. J. 1972, 8, 86.
(76) Lavine, L. S.; Lustrin, I.; Shamos, M. H.; Rinaldi, R. A.; Liboff, A. R. Science 1972, 175, 1118.
(77) Larter, R. J. Membr. Sci. 1986, 28, 165.
(78) Kuntz, W. H.; Larter, R.; Uhegbu, C. E. J. Am. Chem. Soc. 1987, 109, 2582.
(79) Steinmetz, C. G.; Larter, R. J. Phys. Chem. 1988, 92, 6113.

(80) Lakshminarayanaiah, N. Transport Phenomena in Membranes; Academic Press: New York, 1969.
(81) Goldman, D. E. J. Gen. Physiol. 1943, 26, 37.
(82) Kim, J. T.; Larter, R. J. Phys. Chem. 1991, 95, 7948.
(83) Shen, P.; Kim, J. T.; Larter, R.; Lipkowitz, K. J. Phys. Chem. 1993, 97, 1571.
(84) Yoshikawa, K.; Sakabe, K.; Matsubara, Y.; Ota, T. Biophys. Chem. 1984, 20, 107.
(85) Raudino, A.; Larter, R. J. Chem. Phys. 1993, 98, 3422.
(86) Fromherz, P.; Zimmerman, W. Phys. Rev. E 1995, 51, 1659.
(87) Hsu, J. P.; Ting, K. C.; Shieh, Y. H. J. Phys. Chem. 2000, 104, 3492.
(88) Leonetti, M.; Dubois-Violette, E. Phys. Rev. E 1997, 56, 4521.
(89) Leonetti, M.; Renversez, G.; Dubois-Violette, E. Europhys. Lett. 1999, 46, 107.
(90) Raudino, A. Adv. Colloid Interface Sci. 1995, 57, 229.
(91) Sokirko, A. V.; Manzanares, J. A.; Pellicer, J. J. Colloid Interface Sci. 1994, 168, 32.
(92) Atwater, I.; Sherman, A. Biophys. J. 1993, 65, 565.
(93) Epstein, I. R. Science 1991, 252, 67.
(94) Jaffe, L. F. Proc. Natl. Acad. Sci. U.S.A. 1991, 88, 9883.
(95) Jaffe, L. F. Cell Calcium 1993, 14, 736.
(96) Araque, A.; Sanzgiri, R. P.; Parpura, V.; Haydon, P. G. J. Neurosci. 1998, 18, 6822.
(97) Chakravarthy, B.; Morley, P.; Whitfield, J. Trends Neurosci. 1999, 22, 12.
(98) Charles, A. C.; Dirksen, E. R.; Merrill, J. E.; Sanderson, M. J. Glia 1993, 7, 134.
(99) Putney, J. W., Jr. Calcium signaling: Up, down, up, down....What is the point? Science 1998, 279, 191.
(100) Malenka, R. C.; Nicoll, R. A. Science 1999, 285, 1870.
(101) Malenka, R. C.; Nicoll, R. A. Trends Neurosci. 1993, 16, 521.
(102) Soderling, T. R.; Derkach, V. A. Trends Neurosci. 2000, 23, 75.
(103) Kandel, E. R.; Schwartz, J. H.; Jessell, T. M. Principles of Neural Science, 4th ed.; McGraw-Hill: New York, 2000.
(104) Dichter, M. Overview: the neurobiology of epilepsy. In Epilepsy: a comprehensive textbook; Engel, J., Pedley, T. A., Eds.; Lippincott-Raven: Philadelphia, PA, 1998.
(105) Sloviter, R. S. Ann. Neurol. 1994, 35, 640.
(106) Sutula, T. P. Epilepsia 1990, 31, S45.
(107) Wyllie, E. The treatment of epilepsy: principles and practice; Lea & Febiger: Philadelphia, PA, 1993.
(108) Salanova, V.; Markand, O.; Worth, R. M.; Smith, R.; Wellman, H.; Hutchins, G.; Park, H.; Ghetti, B.; Azzarelli, B. Acta Neurol. Scand. 1998, 97, 146.
(109) Larter, R.; Speelman, B.; Worth, R. M. Chaos 1999, 9, 795.
(110) Sloviter, R. S. Science 1987, 235, 73.
(111) Hodgkin, A. L.; Huxley, A. F. J. Physiol. 1952, 117, 500.
(112) Hodgkin, A. L.; Huxley, A. F. J. Physiol. 1952, 116, 473.
(113) Morris, C.; Lecar, H. Biophys. J. 1981, 35, 193.
(114) Speelman, B. A dynamical systems approach to the modeling of epileptic seizures. Ph.D. Thesis, Indiana University-Purdue University at Indianapolis, Indianapolis, IN, 1997.
(115) Herreras, O.; Largo, C.; Ibarz, J. M.; Somjen, G. G.; del Rio, R. M. J. Neurosci. 1994, 14, 7087.
(116) Ichijo, M.; Ochs, S. Brain Res. 1970, 23, 41.
(117) Leibowitz, D. H. Proc. R. Soc. London, Ser. B 1992, 250, 287.
(118) Somjen, G. G.; Aitken, P. G.; Czeh, O.; Herreras, O.; Jing, J.; Young, J. N. Can. J. Physiol. Pharmacol. 1992, 70, S248.
(119) Physics Update: A New Model of Epilepsy. Phys. Today 1999, 52, 9.
(120) Just Slow Down: There’s Trouble in Store when Neurons get Too Speedy. New Sci. 1999, 163, 16.
(121) Better Living Through Chaos. The Economist 1999, 352, 89.
(122) Top Three Medical Physics Stories of 1999: A New Model of Epilepsy. Am. Phys. Soc. News 2000, 9 (4), 4.
(123) Cornell-Bell, A. H.; Finkbeiner, S. M. Cell Calcium 1991, 12, 185.
(124) Dani, J. W.; Chernjavsky, A.; Smith, S. J. Neuron 1992, 8, 429.
(125) Neuroglia; Kettenmann, H., Ransom, B. R., Eds.; Oxford University Press: Oxford, U.K., 1995; p 1079.
(126) Araque, A.; Parpura, V.; Sanzgiri, R. P.; Haydon, P. G. Trends Neurosci. 1999, 22, 208.
(127) Araque, A.; Carmignoto, G.; Haydon, P. Annu. Rev. Physiol. 2001, 63, 795.
(128) Attwell, D. Glia and neurons in dialogue. Nature 1994, 369, 707.
(129) Nakanishi, K.; Okouchi, Y.; Ueki, T.; Asai, K.; Isobe, I.; Eksioglu, Y. Z.; Kato, T.; Hasegawa, H.; Kuroda, Y. Brain Res. 1994, 659, 169.

(130) Nedergaard, M. Science 1994, 263, 1768.
(131) Newman, E. A.; Zahs, K. R. J. Neurosci. 1998, 18, 4022.
(132) Parpura, V.; Basarsky, T. A.; Liu, F.; Jeftinija, K.; Jeftinija, S.; Haydon, P. G. Nature 1994, 369, 744.
(133) Pasti, L.; Pozzan, T.; Carmignoto, G. J. Biol. Chem. 1995, 270, 15203.
(134) Cornell-Bell, A. H.; Finkbeiner, S. M.; Cooper, M. S.; Smith, S. J. Science 1990, 247, 470.
(135) Araque, A.; Parpura, V.; Sanzgiri, R. P.; Haydon, P. G. Eur. J. Neurosci. 1998, 10, 2129.
(136) Bezzi, P.; Carmignoto, G.; Pasti, L.; Vesce, S.; Rossi, D.; Rizzini, B. L.; Pozzan, T.; Volterra, A. Nature 1998, 391, 281.
(137) Hassinger, T. D.; Atkinson, P. B.; Strecker, G. J.; Whalen, L. R.; Dudek, F. E.; Kossel, A. H.; Kater, S. B. J. Neurobiol. 1995, 28, 159.
(138) Kimelberg, H. K.; Goderie, S. K.; Higman, S.; Pang, S.; Waniewski, R. A. J. Neurosci. 1990, 10, 1583.
(139) Parpura, V.; Liu, F.; Brethorst, S.; Jeftinija, K.; Jeftinija, S.; Haydon, P. G. FEBS Lett. 1995, 360, 266.
(140) Szatkowski, M.; Barbour, B.; Attwell, D. Nature 1990, 348, 443.
(141) Ye, Z.-C.; Sontheimer, H. Glia 1999, 25, 270.
(142) Vernadakis, A. Prog. Neurobiol. 1996, 49, 185.
(143) Glendening, M. A new model for calcium waves in astrocytes: glutamate-induced-glutamate-release. M.S. Thesis, Indiana University-Purdue University at Indianapolis, Indianapolis, IN, 2001.
(144) Larter, R.; Glendening, M.; Tinsley, R., manuscript in preparation.
(145) De Koninck, P.; Schulman, H. Science 1998, 279, 227.
(146) Dosemeci, A.; Albers, R. W. Biophys. J. 1996, 70, 2493.
(147) Larter, R. J. Phys. Chem. 1983, 87, 3114.
(148) Larter, R. The use of sensitivity analysis in determining the structural stability of multi-parameter oscillators. In Chemical Applications of Topology and Graph Theory: a collection of papers from a symposium held at the University of Georgia, Athens, Georgia, 18-22 April 1983; King, R. B., Ed.; Studies in Physical and Theoretical Chemistry, Vol. 28; Elsevier: Amsterdam, 1983.
(149) Larter, R.; Rabitz, H.; Kramer, M. J. Chem. Phys. 1984, 80, 4120.
(150) Larter, R. Sensitivity Analysis: A numerical tool for the study of parameter variations in oscillating reaction models. In Chemical Instabilities; Nicolis, G., Baras, F., Eds.; D. Reidel: Amsterdam, 1984; p 59.
(151) Larter, R. J. Chem. Phys. 1986, 85, 7127.
(152) Larter, R.; Clarke, B. L. J. Chem. Phys. 1985, 83, 108.
(153) Feigenbaum, M. J. Los Alamos Sci. 1980, 1, 4.
(154) May, R. M. Nature 1976, 261, 459.
(155) Larter, R.; Hemkin, S. J. Phys. Chem. 1996, 100, 18924.
(156) Larter, R.; Steinmetz, C. G. Philos. Trans. R. Soc. London, Ser. A 1991, 337, 291.
(157) Albahadily, F. N.; Ringland, J.; Schell, M. J. Chem. Phys. 1989, 90, 813.
(158) Barkley, D. J. Chem. Phys. 1988, 89, 5547.
(159) Maselko, J.; Swinney, H. L. J. Chem. Phys. 1986, 85, 6430.
(160) Jensen, M. H.; Bak, P.; Bohr, T. Phys. Rev. Lett. 1983, 50, 1637.
(161) Larter, R.; Bush, C. L.; Lonis, T. R.; Aguda, B. D. J. Chem. Phys. 1987, 87, 5765.
(162) Doedel, E. AUTO: Software for continuation and bifurcation problems in ordinary differential equations; CalTech: Pasadena, CA, 1986.
(163) Larter, R.; Showalter, K. Computational Studies in Nonlinear Dynamics. In Reviews in Computational Chemistry; Lipkowitz, K. B., Boyd, D. B., Eds.; VCH Press: New York, 1996; Vol. 10, p 177.
(164) Steinmetz, C. G.; Larter, R. J. Chem. Phys. 1991, 94, 1388.
(165) Argoul, F.; Arneodo, A.; Richetti, P.; Roux, J. C. J. Chem. Phys. 1987, 86, 3325.
(166) Barkley, D.; Ringland, J.; Turner, J. S. J. Chem. Phys. 1987, 87, 3812.
(167) Bassett, M. R.; Hudson, J. L. Physica D 1989, 35, 289.
(168) Richetti, P.; Roux, J. C.; Argoul, F.; Arneodo, A. J. Chem. Phys. 1987, 86, 3339.
(169) Orbán, M.; Epstein, I. R. J. Phys. Chem. 1982, 86, 3907.
(170) Glansdorff, P.; Prigogine, I. Thermodynamics of Structure, Stability and Fluctuations; Wiley-Interscience: New York, 1971.
(171) Nicolis, G.; Prigogine, I. Self-Organization in Nonequilibrium Systems; Wiley: New York, 1977.
(172) Kauffman, S. A. The origins of order: self-organization and selection in evolution; Oxford University Press: New York, 1993.
(173) Waldrop, M. M. Complexity: The emerging science at the edge of order and chaos; Simon & Schuster: New York, 1992.
(174) Schrödinger, E. What is Life?; Cambridge University Press: Cambridge, U.K., 1945.