Ind. Eng. Chem. Res. 1992, 31, 641-643

Reflections on Rates

Robert L. Kabel

Department of Chemical Engineering, The Pennsylvania State University, University Park, Pennsylvania 16802

In 1980, on the occasion of Stuart Churchill's 60th birthday, I prepared an editorial entitled "Rates" in which the implications of Churchill's distinction between process rates and rates of change were contemplated. Now, 10 years later, it seems worthwhile to reflect on some changes in the treatment of rate processes that have occurred. A leading chemical reaction engineering textbook for undergraduate use now adheres to Churchill's concepts. The decade has seen a monumental increase in computing power and may be witnessing, for the first time, some limitations on the continuation of such increases. Industrial multiphase reactors are being scaled up with greater technical discipline and insight. Complex mixtures are being dealt with on a more fundamental basis. Dynamic simulators are coming into being. Stuart Churchill's 70th birthday marks an exciting time for those who work with rate processes.
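Churchill's distinction between process rates and rates of change can be made concrete with a minimal sketch (the numbers are illustrative only, not from any cited study): in a steady-state CSTR the effluent concentration does not change with time, so dCA/dt = 0 everywhere, yet the reaction rate is finite and follows from the mole balance rather than from a time derivative.

```python
# Minimal sketch of the process-rate vs. rate-of-change distinction.
# In a steady-state CSTR, dCA/dt = 0, yet the reaction rate rA is nonzero;
# it comes from the mole balance, not from a time derivative:
#   0 = (CA_in - CA_out)/tau - rA   =>   rA = (CA_in - CA_out)/tau

def cstr_reaction_rate(ca_in, ca_out, tau):
    """Reaction rate of A from a steady-state CSTR mole balance.

    ca_in, ca_out: feed and effluent concentrations (mol/L); tau: space time (s).
    """
    return (ca_in - ca_out) / tau

# Hypothetical numbers: feed 2.0 mol/L, effluent 0.5 mol/L, space time 10 s.
rate = cstr_reaction_rate(2.0, 0.5, 10.0)  # 0.15 mol/(L*s), while dCA/dt = 0
```

In a batch reactor the two quantities happen to coincide numerically, which is precisely why the flawed definition survives in textbooks that confine themselves to batch data.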

Ten Years Ago

In 1980, on the occasion of Churchill's birthday, I prepared an editorial on the flawed use of a differential equation, such as rA = -dCA/dt, as a definition of reaction rate. The problem began with the pioneering work of Wilhelmy (1850) on the kinetics of the inversion of sucrose. Churchill (1974a) made the clarifying distinction between process rates and rates of change. Kabel (1980) reviewed current textbooks, principles, and practices in the context of Churchill's crucial distinction. At last, a preeminent textbook in chemical reaction engineering (Fogler, 1986) handles the issue well. Unfortunately, a review of general and physical chemistry textbooks, in particular, shows them continuing to adhere to mistaken tradition, largely owing to their neglect of flow systems.

Transition to Today

This paper, like the previous one, is a reflection on the state of rate processes today. These reflections have their origin in the transition of my research from catalytic kinetics to reactor dynamics to scaleup. This transition is coupled with a shift in perspective from the academic to the industrial and a change of focus from well-defined to ill-defined problems. The 1980s have witnessed some major developments in rate processes. There has been an enormous increase in computing power, most strikingly manifested in the phenomenon of personal computers on every desk. We see empiricism being succeeded by a more fundamental approach in the treatment of multiphase reactors and complex mixtures. Steady-state simulators are now in widespread use, and dynamic simulators are on the way. In this context, what follows are observations from publications, lectures, and conversations that I have found stimulating.

Limitations

The problem in establishing dynamic simulators, regardless of computing power, is that the time scales of relevance in chemical processes can range from infinitesimal to infinite.
Bonvin and Mellichamp (1987) addressed the importance and methods of scaling in dynamic models. Can we not expect ever-increasing computing speed to overcome all obstacles, even if only by brute force? LaRoche (1990) suggested that ever-increasing speed is illusory. While we notice the impressive strides being made in microcomputers, the ultimate limitation of the speed of light is coming into play in supercomputing. Zitney et al. (1990) point out that future increases in computing
power, and therefore major improvements in process simulators, are more likely to come from vector and parallel processing than from optimization of today's sequential-modular algorithms. In what follows, we examine the interrelated advances in computing capability and complexity of process analysis for a glimpse of what the future may hold. One method of solving complex problems is "lumping", which has undergone evolutionary development to become classic. To illustrate, consider the development of models for catalytic cracking at Mobil. Weekman (1979) described the evolution of lumped models from a single lump to many in cracking, as well as in reforming. In the mid-1960s they aggregated components of the process fluid into three lumps: (1) gas oil, (2) gasoline, and (3) dry gas and coke. The need for more chemistry led to more lumps and the 10-lump model. Roughly speaking, the 10 lumps were C1 to C4 compounds and coke, C5's and up boiling below 222 °C, paraffins, naphthenes, percent of carbon among rings, and percent of carbon among substituent groups. The last four groups, all of which boil above 222 °C, were further broken down by boiling fraction with the dividing line at 342 °C. Astarita (1990), and with Ocone (Astarita and Ocone, 1988), identified paradoxes that arise from the lumping of reactions having nonlinear kinetics in complex mixtures. For example, the observed cracking kinetics of a model compound in single-component studies might be very different from the cracking kinetics of the same compound in an actual feedstock. Astarita and Ocone (1988) pointed the way to the reconciliation of such dilemmas of lumping. The key to effective lumping is identifying the critical issues by physical insight and intellectual discipline, attributes that make Astarita the best lumper with whom I have ever worked. Nevertheless, there is competition for lumping.
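As a hedged illustration of what a small lumped model looks like in practice, the three-lump scheme above (gas oil to gasoline to dry gas and coke, with a parallel gas-oil path directly to dry gas and coke) can be integrated as a set of ordinary differential equations. The rate constants below are made up for illustration; only the structure (second-order gas-oil cracking, first-order gasoline cracking) reflects the common form of such models.

```python
# Illustrative three-lump cracking model: y1 = gas oil, y2 = gasoline,
# y3 = dry gas + coke. Gas oil cracks (here, second order) to gasoline and
# to gas + coke; gasoline cracks (first order) to gas + coke.
# Rate constants are hypothetical.

def three_lump_rhs(y, k1=0.8, k2=0.1, k3=0.3):
    y1, y2, y3 = y
    r12 = k1 * y1 * y1   # gas oil -> gasoline
    r13 = k2 * y1 * y1   # gas oil -> dry gas + coke
    r23 = k3 * y2        # gasoline -> dry gas + coke
    return (-(r12 + r13), r12 - r23, r13 + r23)

def integrate_euler(y0, t_end, dt=1e-3):
    """Explicit Euler integration; adequate for a sketch, not for stiff systems."""
    y = tuple(y0)
    for _ in range(int(round(t_end / dt))):
        dy = three_lump_rhs(y)
        y = tuple(yi + dt * dyi for yi, dyi in zip(y, dy))
    return y

# Pure gas-oil feed: the mass fractions must sum to 1 at all times.
y_final = integrate_euler((1.0, 0.0, 0.0), t_end=5.0)
```

Plotting the gasoline lump over a longer horizon shows the familiar maximum as conversion proceeds, the behavior that such lumped models were built to capture.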
Increasing Sophistication

The competition is what I will call "discretizing", and its primary champion is Froment, along with co-workers (see Baltanas et al., 1989), who have focused their attention on the thermal and catalytic cracking and hydroisomerization of hydrocarbons. Their approach is to quantify the reactions of bonds (hydrogen abstraction and transfer, free-radical addition and decomposition, β-scission, isomerization, etc.), keeping track of all of the resulting fragments, pathways, and molecules. Compared to merely following molecular or lumped species, this is a major
accounting task. To begin the assessment of the potential of discretizing, consider two complex examples from current practice. For decades theoretical attention has been given to thermal runaway in systems involving one to at most a few exothermic reactions. What can happen to a single reaction can happen to many, leading to selectivity shifts in complex cases and massive quality control problems. Rigorous resolution of the temperature-composition coupling implied by such complexity is a doable but daunting challenge on the computer. In process analysis and design, our quantitative characterizations of reacting systems have been limited largely to ideal mixing states, for example, plug flow and maximum mixedness. Today, however, we have sufficient understanding and computing power to quantify even small-scale details of large-scale systems in multiphase turbulent flow. Flows in slurry systems, bubble columns, and trickle beds would be good candidates for such treatment. However, the computational problem becomes immense if multiple reactions are superimposed upon the fluid mechanics.

Lumping vs Discretizing

So is there a basis for choosing between lumping and discretizing? In the previous examples, it appears that we can address the full complexity of many simultaneous reactions for simple states of mixing. Or we can have a complete description of the fluid mechanics if we are willing to limit our attention to just a few reactions. Recent papers by Dutta and Tarbell (1989) and Chatterjee and Tarbell (1991) offer considerable perspective on this subject by comparing coalescence-redispersion, multienvironment, and time-averaged equations of change models of turbulent reacting flows in plug flow and continuous stirred tank reactors, respectively. These authors see their turbulence closure models as an effective approach between the strongly lumped multienvironment models and the highly discretized and computationally intensive coalescence-redispersion model.
Most continuum models deal with ensembles of molecules or fluid parcels, a form of lumping. One level of dealing with discrete elements is the molecular level, where Monte Carlo simulation is a natural. Discrete analysis is also possible at the bond level. It should be acknowledged that discretization and lumping are really overlapping spectra rather than simply extremes in a spectrum of approaches to the rational solution of engineering problems. The dominant perspective in this paper is from chemical reaction engineering, but different emphases and interpretations would be natural for, say, practitioners of fluid mechanics. Whether we are considering molecular, continuum, lumped, or discrete models, the classical approach has been to determine all kinetic constants by experiment. In the process, we have found it necessary to work for great accuracy. When considering reactions at the bond level, however, the approach of Willems and Froment (1988a,b) has succeeded with estimated values for the constants or values taken from related cases. Curiously, sometimes cruder information on more complex cases may be sufficient, especially if the underlying basis is sound. At an AIChE meeting in the 1960s, R. W. H. Sargent advocated studying simultaneous, rather than isolated, reactions for the increased understanding that would accrue. Perti and Kabel (1985), in a study of CO oxidation over a metal oxide catalyst, showed that dynamic methods yielded vastly more phenomenological and mechanistic information than steady-state measurements of greater accuracy, with significantly less experimental effort.
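The molecular-level Monte Carlo simulation mentioned above can be sketched minimally for a single first-order reaction A to B (a stochastic, Gillespie-type event loop; the rate constant, molecule count, and horizon are illustrative assumptions):

```python
import random

def monte_carlo_first_order(n_a=1000, k=1.0, t_end=5.0, seed=0):
    """Track discrete A molecules reacting A -> B with rate constant k.

    Waiting times between single-molecule events are exponentially
    distributed with total propensity k * n; returns the surviving A count.
    """
    rng = random.Random(seed)
    t, n = 0.0, n_a
    while n > 0:
        t += rng.expovariate(k * n)  # time to the next reaction event
        if t > t_end:
            break
        n -= 1                       # one molecule of A converts to B
    return n

remaining = monte_carlo_first_order()  # deterministic mean would be 1000*exp(-5), about 7
```

The same discrete bookkeeping generalizes to bond-level events, which is what makes the accounting task of discretizing tractable in principle, if not cheap.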

That dealing with bonds can lead to insight and simplification is illustrated by comments made by Churchill (1990) following the conclusion of the symposium in his honor. He pointed out that the rate of combustion was independent of feedstock in all of their research before the doctoral work of Pfefferle (1984). She was the first in his group to burn methane. The very different rate in this case led them to the realization that the rate-determining step in the previous studies with higher hydrocarbons had been the breaking of the carbon-carbon bond, actually for the generation of sufficient free radicals to propagate the reaction (Collins, 1991). Thus, working with pure methane, which has no carbon-carbon bond, proved to be critical to their understanding of the combustion mechanism. Churchill (1974b) notes the (deceptively) greater ease of correlating integral data than the corresponding underlying differential data and emphasizes that such correlations do not verify the uniqueness or validity of a theory upon which an integral model is based. Reversing this logic indicates how the discretizing approach can succeed despite its complexity and the need to estimate constants. Working at the bond/rate (often differential) level, one faces the maximum uncertainty initially, explores residual uncertainty via parametric sensitivity, and then benefits from the diminishing uncertainty that accompanies integration. At this point the choice between lumping and discretizing may be assessed. Obviously, for a small number of molecular species, molecular or lumped models will be more efficient to create and apply. In hydrocarbon systems, however, there are a limited number of bond reactions that may exist regardless of the number of reacting species. Thus, a break-even point (perhaps at about 100 identifiable species; see, for example, Froment (1991)) must exist at which the investment in the consideration of bond reactions may be expected to prove advantageous.
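The claim that uncertainty diminishes upon integration can be sketched with a toy numerical demonstration (a constant true rate with assumed uniform noise, not any of the cited data): pointwise rate samples carry the full noise amplitude, while their integral averages much of it away.

```python
import random

def rate_vs_integral_error(n=1000, dt=0.01, noise=0.2, seed=1):
    """Compare the worst relative error among noisy rate samples with the
    relative error of their running integral. The true rate is 1, so the
    true integral over n steps of width dt is n*dt."""
    rng = random.Random(seed)
    integral, worst_rate_err = 0.0, 0.0
    for _ in range(n):
        r = 1.0 + rng.uniform(-noise, noise)   # noisy differential measurement
        worst_rate_err = max(worst_rate_err, abs(r - 1.0))
        integral += r * dt
    integral_err = abs(integral - n * dt) / (n * dt)
    return worst_rate_err, integral_err

worst, integ = rate_vs_integral_error()
# integ is typically an order of magnitude or more below worst
```

This is also why, as the text notes, a good fit to integral data is deceptive: the smoothing that helps the discretizer also hides model discrimination that only the differential level can provide.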
Conclusions

In conclusion, a number of recent developments suggest changes in the way we approach our technical problems. Stuart Churchill's 70th birthday marks an exciting time for those who work with rate processes.

Literature Cited

Astarita, G. Continuous Description of Kinetics in Complex Mixtures. Chemical Engineering Seminar; Penn State University: University Park, 1990.
Astarita, G.; Ocone, R. Lumping Nonlinear Kinetics. AIChE J. 1988, 34, 1299-1309.
Baltanas, M. A.; Van Raemdonck, K. K.; Froment, G. F.; Mohedas, S. R. Fundamental Kinetic Modeling of Hydroisomerization and Hydrocracking on Noble-Metal-Loaded Faujasites. 1. Rate Parameters for Hydroisomerization. Ind. Eng. Chem. Res. 1989, 28, 899-910.
Bonvin, D.; Mellichamp, D. A. A Scaling Procedure for the Structural and Interaction Analysis of Dynamic Models. AIChE J. 1987, 33, 250-257.
Chatterjee, A.; Tarbell, J. M. Closure Equations for Single and Multiple Reactions in a CSTR. AIChE J. 1991, 37, 277-280.
Churchill, S. W. The Interpretation and Use of Rate Data: The Rate Concept; McGraw-Hill: New York, 1974a; pp 8-20.
Churchill, S. W. The Interpretation and Use of Rate Data: The Rate Concept; McGraw-Hill: New York, 1974b; pp 296-310, 319.
Churchill, S. W. Personal communication, 1990.
Collins, L. R. Personal communication, 1991.
Dutta, A.; Tarbell, J. M. Closure Models for Turbulent Reacting Flows. AIChE J. 1989, 35, 2013-2027.
Fogler, H. S. Elements of Chemical Reaction Engineering; Prentice-Hall: Englewood Cliffs, NJ, 1986; pp 2-6.
Froment, G. F. Fundamental Kinetic Modeling of Complex Processes. In Chemical Reactions in Complex Mixtures: The Mobil Workshop; Sapre, A. J., Krambeck, F. J., Eds.; Van Nostrand
Reinhold: New York, 1991; Chapter 5; pp 78-85.
Kabel, R. L. Rates. Chem. Eng. Commun. 1980, 9, 15-17.
LaRoche, R. D. Personal communication, 1990.
Perti, D.; Kabel, R. L. Kinetics of CO Oxidation over Co3O4/γ-Al2O3. AIChE J. 1985, 31, 1420-1440.
Pfefferle, L. D. Stability, Ignition, and Pollutant Formation in a Plug Flow Thermally Stabilized Burner. Ph.D. Thesis, University of Pennsylvania, Philadelphia, 1984.
Weekman, V. W. Lumps, Models, and Kinetics in Practice. AIChE Monogr. Ser. 1979, 75 (11), 3-29.
Wilhelmy, L. Ueber das Gesetz, nach welchem die Einwirkung der Säuren auf den Rohrzucker stattfindet. Ann. Phys. Chem. 1850, 81, 413-428.
Willems, P. A.; Froment, G. F. Kinetic Modeling of the Thermal Cracking of Hydrocarbons. 1. Calculation of Frequency Factors. Ind. Eng. Chem. Res. 1988a, 27, 1959-1966.
Willems, P. A.; Froment, G. F. Kinetic Modeling of the Thermal Cracking of Hydrocarbons. 2. Calculation of Activation Energies. Ind. Eng. Chem. Res. 1988b, 27, 1966-1971.
Zitney, S. E.; LaRoche, R. D.; Eades, R. A. Chemical Process Engineering on Cray Research Supercomputers. CACHE News 1990, 31, 19-24.

Received for review January 28, 1991
Revised manuscript received September 10, 1991
Accepted September 23, 1991

The Role of Analysis in the Rate Processes

Stuart W. Churchill

Department of Chemical Engineering, University of Pennsylvania, 311A Towne Building, 220 South 33rd Street, Philadelphia, Pennsylvania 19104-6393

Numerical solutions are now possible for a much wider range of conditions than analytical (continuous) ones and are gradually supplanting them in our literature. The results obtained from numerical solutions as well as from analytical solutions in the form of series, integrals, or tabulated functions are more precise, regular, and coherent than experimental data but are always subject to some uncertainty owing to idealizations made in developing the model itself. Both types of solutions fail to provide a functional structure for correlation, thus giving rise to the tabulations, graphical correlations, and purely empirical equations to be found in our handbooks. On the other hand, asymptotic solutions which do provide such a functional structure can generally be derived not only for extreme conditions but for intermediate regimes as well. Comprehensive correlating equations of great accuracy can often be constructed from two or more asymptotic solutions with almost no empiricism. Examples are given of the determination of asymptotes by reduction of more general solutions, by direct derivation, and by dimensional and speculative analysis, and also of their combination. Examples from the existing literature are given of invalid or inapplicable asymptotes and combinations thereof as well.
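The construction of a correlating equation from two asymptotic solutions, described in the abstract above, is commonly carried out with the combining rule Churchill proposed with Usagi, y = (y1^n + y2^n)^(1/n). The sketch below uses hypothetical power-law asymptotes to show that the combined form reduces to whichever asymptote dominates at each extreme; the specific exponents and the value of n are illustrative assumptions.

```python
def combine_asymptotes(y_a, y_b, n):
    """Churchill-Usagi combining rule: y = (y_a**n + y_b**n)**(1/n).

    For n > 0 the larger of the two asymptotic contributions dominates,
    so the combined equation approaches each asymptote at the extremes
    and blends smoothly through the crossover region.
    """
    return (y_a ** n + y_b ** n) ** (1.0 / n)

# Hypothetical asymptotes y = x (dominant at small x) and y = x**2
# (dominant at large x), evaluated well away from the crossover at x = 1:
y_small = combine_asymptotes(0.01, 0.0001, n=4)    # close to the x branch
y_large = combine_asymptotes(100.0, 10000.0, n=4)  # close to the x**2 branch
```

The exponent n controls the sharpness of the transition and is typically fitted to a few data points in the crossover region, which is the "almost no empiricism" the abstract refers to.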

A glance at the archival journals of our profession, such as this one, reveals that analysis, both continuous and discrete, has become dominant relative to experimentation. The results of these analyses appear, however, to have a fairly limited role in practice. The objective of this paper is to examine the role of analysis in the rate processes, both in general and by examples, and to suggest how theoretical methods and results might be used more effectively for correlation, prediction, and understanding. A large fraction of the efforts of the famous applied mathematicians of the period from roughly 1700 to 1950 was directed toward developing models and analytical (continuous) solutions for what are now classified as the rate processes: momentum, heat, and mass transfer, and chemical conversions. This work has since been continued by engineering scientists, but in the past forty years it has been accompanied and gradually supplanted by numerical (discrete) methods based on finite-difference, finite-element, and stochastic models and approximations. The transition from continuous to discrete methods has of course been motivated by a desire to solve more complex models. It has been made possible by the continuing development and increasing availability of computer hardware and software. On the other hand, the literature of engineering practice in the rate processes, as epitomized by our standard handbooks and even some of our textbooks, is still largely based on experimental data. It consists primarily of dimensionless coefficients of transfer with wide bands of
scatter and of empirical equations representing straight lines drawn through the data in these plots. Such graphical representations and correlating equations have been relatively uninfluenced by theory beyond simple dimensional analysis for the primary reason that most theoretical solutions do not provide any guidance with respect to the functional behavior. This is just as true of analytical solutions in the form of complex functions, infinite series, or integrals as it is of numerical (discrete) solutions. Asymptotic solutions, including those that are evident from general solutions, are often an exception, but their utility for correlation has yet to be fully recognized or fully exploited. Experimental data are recognized as being subject to uncertainty and imprecision arising from incomplete definition and control of the environment in which the experiments are conducted, as well as from errors of measurement. Analytical solutions, both continuous and discrete, are subject to uncertainty arising primarily from simplifications and idealizations incorporated in the model but sometimes also from the process of solution. Asymptotic solutions are particularly uncertain with respect to their range of validity, if any. In addition, even nominally valid solutions are often misinterpreted or misapplied. Misuse of theoretical results in correlations is more harmful than their neglect, since the error then becomes embedded in our handbooks, textbooks, and dogmas. Particular attention is given herein, by way of examples, to questionable solutions and to misapplications of valid ones, not only for their
