An Evaluation of the Usual Simplifying Assumptions

Stuart W. Churchill*
Department of Chemical and Biomolecular Engineering, University of Pennsylvania, 311 A Towne Bldg., 220 South 33rd St., Philadelphia, Pennsylvania 19104, United States

Ind. Eng. Chem. Res. 2013, 52, 230−257. dx.doi.org/10.1021/ie300773e. Special Issue: L. T. Fan Festschrift. Received: March 23, 2012. Revised: September 27, 2012. Accepted: September 27, 2012. Published: September 27, 2012.

ABSTRACT: Some of the simplifying assumptions that underlie the characteristic concepts of chemical engineering are identified, and their impact is examined. Many of them are found to be obsolete. They remain in textbooks, and perhaps in computer packages, out of inertia and, in some instances, out of misdirected respect for those who conceived them. It is concluded that students, teachers, and industrial practitioners should question the continued validity of simplifying assumptions and the viability of the concepts and expressions that incorporate them.

1. INTRODUCTION

The formulation and adoption of useful concepts, in many instances ones that are unique to chemical engineering, have helped it to flourish as an academic subject and as a profession. The broadest and most notable concepts are illustrated by the unit operations, the unit processes, transport phenomena, and the rate processes, and more-specific ones by the equilibrium stage, the perfectly mixed reactor, the Hougen and Watson models, the Ergun equation, and the Colburn j-factor. These concepts have resulted from observations in plant operations and in the laboratory, as well as from theoretical analyses and flashes of insight. All in all, they constitute an essential element of both education and practice in chemical engineering.

In most instances, the formulation and generalization of each of these concepts have been dependent upon one or more ingenious idealizations or simplifications. Over the course of time, the idealizations and simplifications that are implicit in a particular concept often become lumped together and known as the "usual simplifying assumptions". They are thereafter accepted without much question by students, teachers, and industrial practitioners, even though advances in analysis or in computer hardware and software may now allow their elimination or provide the basis for their replacement. The quantitative error arising from these idealizations and simplifications, if recognized, is often compensated for, or partially compensated for, in industrial practice by a "correction factor" or an "efficiency". That expedient may be acceptable in practice in the short run, but the possibility of its elimination or improvement by better modeling should always be kept in mind. A fudge factor or efficiency is never an acceptable substitute, in textbooks or the classroom, for an understanding of the cause of inaccurate predictions.

Newton's fourth rule of scientific reasoning, which has stood the test of time, can be expressed as "Propositions collected from observation of phenomena should be viewed as accurate or very nearly so until contradicted by other phenomena". This rule not only identifies the source of new concepts and their utility, even if approximate, but also notes that they should be abandoned if and when proven false.

The criticality of a simplifying assumption is demonstrated decisively by the history of the attempts to predict the speed of sound in a gas. In deriving an expression for its prediction, early

scientists (including Newton) made the seemingly reasonable simplifying assumption of an isothermal process and obtained

u_a = (dp/dρ)_T^(1/2) ≅ (RT/M)^(1/2)    (1)

whereas the correct simplifying assumption is an isentropic process, which leads to

u_a = (dp/dρ)_S^(1/2) ≅ (γRT/M)^(1/2)    (2)

where γ is the heat capacity ratio. As an aside, Newton cleverly fudged his experimental data, which actually agree closely with the then-unknown eq 2, to provide support for the erroneous isothermal expression. Fortunately, he disclosed the false reasoning behind his fudging.
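The magnitude of the error introduced by the isothermal assumption is easily checked numerically. The following sketch evaluates eqs 1 and 2 for air at ambient temperature; the property values are ordinary handbook numbers assumed for this illustration, not values from the article.

```python
import math

# Evaluate eqs 1 and 2 for air at 293 K (illustrative property values).
R = 8.314       # J/(mol K), universal gas constant
M = 0.02897     # kg/mol, molar mass of air
T = 293.0       # K
gamma = 1.40    # heat capacity ratio for air

u_isothermal = math.sqrt(R * T / M)          # eq 1: about 290 m/s
u_isentropic = math.sqrt(gamma * R * T / M)  # eq 2: about 343 m/s
print(round(u_isothermal), round(u_isentropic))
# The isothermal value is low by a factor of 1/sqrt(gamma), about 15%,
# which is the discrepancy that Newton's fudging concealed.
```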

Because of the ubiquitous inertia in academia and in industrial practice, the discovery that an idealization or a simplification can be eliminated, and that a concept can thereby be improved or replaced by a better one, often does not immediately become common knowledge or precipitate corrective action. In all of the arts, sciences, and technologies, the practitioners, including teachers, are prone to resist change and to cling to familiar concepts long after they have become outworn or superseded. Anderson1 has noted that textbooks and handbooks in chemical engineering ordinarily have a longer shelf life between revisions than those in other fields of engineering, and even upon revision may not be brought up to date in every respect. One easily implemented expedient is for a teacher to call attention to errors and outdated concepts and idealizations in the textbooks that they assign, and to issue brief corrections and updates in print or electronically. That procedure is often resisted by teachers, because it imposes a burden to relearn and to replace familiar concepts with which they are comfortable with new and unfamiliar ones that may stretch their mathematical and computational skills. Students should be pleased to be brought to the frontier of their field by such an action by a teacher, but they are often irritated rather than pleased by the revelation that they have been asked to acquire and depend upon a textbook that is out of date or even outright wrong in some sections.

New concepts that originate in academia may come to the attention of industrial practitioners only through new recruits or by slipping in unnoticed in updated computer packages. If and when corrective action in the form of replacement or updating of a concept is undertaken, the result is often quite a surprise, and sometimes a very beneficial one by virtue of insights or discoveries. Outdated concepts are relatively easy to identify in textbooks and handbooks but not so readily in computer packages. Advances in theory and improvements in computer hardware, software, and algorithms are not the only developments that permit or warrant the elimination of idealizations that were once thought necessary or acceptable, but attention herein will focus on these two.

Concepts may serve a useful role by themselves in terms of understanding. On the other hand, if they are to be applied in a quantitative sense, they may need to be supplemented by graphical, tabular, or algebraic correlations. These supplementary correlations are also subject to erroneous idealizations and to obsolescence. The objective of this manuscript is to identify some of the idealizations and simplifications that are no longer necessary, and some of the accordingly outdated concepts, practices, and correlations, and, insofar as possible, to suggest suitable replacements. The process of identification herein is carried out most conveniently in terms of specific examples, although some general principles emerge. Some concepts that have been dismissed prematurely are also identified, as well as some idealizations and simplifications that are now classified as false or obsolete but have historical significance in that they led to valid concepts that might not otherwise have been discovered.

It is obviously not feasible to discuss the idealizations and simplifications in all aspects of chemical engineering, and the chosen topics are only illustrative. Preference is given to those that are well-known to all chemical engineers and thereby require minimal description. The specific illustrations are preferentially drawn from those limited areas of chemical engineering in which I have some experience, and a number include my own work because of first-hand familiarity with the details. An identification of idealizations and simplifications and an evaluation of their validity should be included in every publication of research findings, but such identifications are often given short shrift and minimal attention relative to the new findings. Authors of books in chemical engineering should accept the responsibility of identifying and evaluating idealizations and simplifications so that the unjustified ones do not continue to be taught and used unwittingly.

One of the reviewers of this article raised two worthy questions: if an expression produces acceptable predictions, why should you care about the details, and why should you tamper with it? The answer is that new methodologies and more exact expressions may produce significantly improved designs and greater production, and that an understanding of the shortcomings of present methodologies and expressions is a necessary prelude to the choice and substitution of improved ones.

When I was fresh out of college and working in a petroleum refinery, I observed that operators, who were high-school graduates, had great skill in bringing a fractionating column on-line and zeroing in on the optimal separation, even though their concept of distillation was typified by the phrase "the reflux knocks back the heavies". On the other hand, I realized that the engineer in charge understood what was going on inside that "black box" and was thereby able to choose the optimal reflux ratio and feed tray with little or no trial and error.

Before turning to several of the specific topics that comprise chemical engineering and its practice, such as thermodynamics, separations, fluid flow, heat transfer, and reactor design, attention is first focused on general techniques, such as dimensional analysis and correlation, that directly evoke simplifying assumptions.

2. DIMENSIONAL ANALYSIS

Rayleigh,2 in 1915, began his definitive publication on dimensional analysis with the following statement: "I have often been impressed by the scanty attention paid even by original workers in physics to the great principle of similitude. It happens not infrequently that results in the form of 'laws' are put forward as novelties on the basis of elaborate experiments, which might have been predicted a priori after a few minutes consideration." The phrase "a few minutes consideration" may be a fair description for Rayleigh but constitutes an exaggeration of the capabilities of us ordinary mortals.

By and large, chemical engineers have heeded Rayleigh's advice and applied dimensional analysis. In particular, the solutions, correlative equations, and graphical correlations for transport are almost always expressed in terms of dimensionless groupings of variables with dimensions. It might appear unnecessary to review this topic, which most chemical engineers presume they understand, but my experience indicates that such presumed understanding is often incomplete and/or faulty. A brief exposition of some of the false simplifications and assumptions associated with dimensional analysis follows.

2.1. Dimensional Analysis of a List of Variables. Rayleigh, in the aforementioned publication, got dimensional analysis, as applied to a listing of variables, exactly right. It is my personal opinion that we would be better off if all subsequent contributions were simply ignored (except possibly as bad examples). He started by postulating an expression in the form of a power series, each term composed of the product of powers of all of the primitive variables. He then focused his attention on only one of these terms and determined those powers insofar as they are constrained by the conservation of dimensions, including time. A typical solution might be

Nu = A Re^n Pr^m + B Re^(2n) Pr^(2m) + ...    (3)

Rayleigh realized that this result does not mean that the Nusselt number (Nu) is proportional to the product of a power of the Reynolds number (Re) and a power of the Prandtl number (Pr), such as

Nu = A Re^n Pr^m    (4)

Rather, it merely indicates that Nu is some unknown, arbitrary function of Re and Pr only, namely,

Nu = φ{Re, Pr}    (5)
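The bookkeeping that underlies such a result can be carried out mechanically. The sketch below, for a variable list (h, u, D, ρ, μ, k, c_p) assumed here for fully developed forced convection in a round tube, counts the independent dimensionless groups from the rank of the dimensional matrix and confirms that Nu, Re, and Pr are dimensionless; the listing itself is an illustrative assumption, not taken from the article.

```python
import numpy as np

# Columns: exponents of (h, u, D, rho, mu, k, c_p) in each base dimension.
A = np.array([[ 1,  0, 0,  1,  1,  1,  0],   # mass
              [ 0,  1, 1, -3, -1,  1,  2],   # length
              [-3, -1, 0,  0, -1, -3, -2],   # time
              [-1,  0, 0,  0,  0, -1, -1]])  # temperature

# 7 variables minus rank 4 leaves 3 independent dimensionless groups.
print(7 - np.linalg.matrix_rank(A))          # -> 3 (i.e., Nu, Re, Pr)

# A group is dimensionless when its exponent vector lies in the null space.
Nu = [1, 0, 1, 0,  0, -1, 0]                 # hD/k
Re = [0, 1, 1, 1, -1,  0, 0]                 # rho*u*D/mu
Pr = [0, 0, 0, 0,  1, -1, 1]                 # c_p*mu/k
for g in (Nu, Re, Pr):
    print(A @ g)                             # -> [0 0 0 0] for each
```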


Unfortunately, many chemical engineers, including academics, fail to make this distinction and then compound the error by plotting experimental data for Nu vs Re and/or Pr on log−log coordinates to determine values of A, n, and m. I confess to wasting a significant amount of time in my younger days trying to rationalize the prevalent value of n = 0.8 as the power of Re in expressions with the form of eq 4 as a theoretically based value of 4/5. I eventually realized that powers of dimensionless groups other than plus and minus unity occur only in asymptotes, and that values such as 0.8 are rounded-off artifacts of the choice of some particular range of the variable (here, Re) and have no theoretical significance. I have yet to discover any exceptions to that conclusion, but I have also yet to find or derive a formal proof or disproof. Accordingly, pending the discovery of an exception or the derivation of a disproof, I propose the total avoidance and elimination of correlating equations in the form of empirical power functions, and, of course, of their products, from manuscripts, journals, and new or revised textbooks. That false concept, as represented by eq 4, is a legacy of an analysis by Nusselt3 in 1909, in the very investigation that defined the dimensionless group named in his honor. It is a candidate for the most pernicious concept in the history of heat transfer. Rayleigh knew better at the time.
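The range dependence of such fitted exponents is easy to demonstrate. The sketch below generates Nu values from a smooth two-asymptote function of the CUE type (section 3.1) and fits Nu = A Re^n over two different ranges; the asymptotes and constants are invented for the illustration and are not a recommended correlation.

```python
import numpy as np

def nu_model(re):
    # Invented blend of a constant low-Re asymptote and a linear high-Re
    # asymptote, combined with an arbitrary exponent p = 3 (illustrative).
    y0, yinf = 3.66, 0.012 * re
    return (y0**3 + yinf**3) ** (1.0 / 3.0)

for lo, hi in [(1e2, 1e3), (1e3, 1e4)]:
    re = np.logspace(np.log10(lo), np.log10(hi), 50)
    n, log_a = np.polyfit(np.log(re), np.log(nu_model(re)), 1)
    print(f"Re in [{lo:.0e}, {hi:.0e}]: fitted n = {n:.2f}")
# The fitted exponent drifts with the chosen range, approaching the
# asymptotic value of unity only far from the crossover; intermediate
# values such as 0.8 carry no theoretical significance.
```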
The methodology of Rayleigh includes one often-ignored special proviso: if two variables occur only as a product, such as, for example, wc_p in the model for a heat exchanger, they are to be treated as a single variable.

2.2. The Minimal Set of Dimensionless Variables. Hellums and Churchill4 devised a methodology that identifies the minimal set of dimensionless variables that are required to describe the behavior represented by a mathematical model consisting of one or more differential and/or algebraic equations and the associated initial conditions (if the behavior is time-dependent) and boundary conditions. If a similarity transformation is possible, this methodology identifies it. The procedure is somewhat tedious but so straightforward that White and Churchill5 wrote a now-outdated computer program for its complete execution, including the reduction of a partial differential equation to an ordinary one in the event of the identification of a similarity transformation. The speculative elimination of each questionable variable and/or term should always be tested when using the method of Hellums and Churchill.

2.3. Speculative Dimensional Analysis. Of course, the methodology of Rayleigh cannot correct a wrong, incomplete, or redundant listing of variables. This is a classical illustration of "garbage in, garbage out". However, the very real difficulty in choosing variables can be turned to an advantage. Churchill6 suggested that the process of dimensional analysis be considered a speculation and that the process be repeated with questionable variables added or deleted one at a time, or even two at a time, with or without a rationale. The results are then compared with experimental data or numerically computed values to identify the dimensionless groups and asymptotes that provide the best representation. This repetitious process requires a significant amount of effort, but the reward may be the elimination of an unnecessary parameter, and that is a huge advance. Dimensional analysis with alternative variables of equal validity may lead to results of different utility.

As an example, Prandtl,7 who pioneered throughout his career in the use of speculative dimensional analysis, and in particular in turbulent flow, obtained in 1926, by straightforward dimensional analysis, the following result for the velocity distribution in fully developed turbulent flow in a round tube:

u(ρ/τ_w)^(1/2) = φ{y(τ_wρ)^(1/2)/μ, a(τ_wρ)^(1/2)/μ}    (6)

He then expressed eq 6 in a new, compact notation that has remained the standard to this day, namely,

u+ = φ{y+, a+}    (7)

He thereupon speculated that the velocity distribution near the wall might not depend significantly on the radius of the tube. Eliminating the dimensionless group that includes the variable a produces what is now known as the "universal law of the wall":

u+ = φ{y+}    (8)

The validity of eq 8 as an asymptote for y → 0, and as a good approximation for much of the cross-section of a round tube, has been amply confirmed by experimental measurements. The elimination of a+ as a variable in the region near the wall has become a "usual simplifying assumption", and one that has retained its viability to the present day. The term "universal" signifies the subsequent observation that eq 8 applies to unconfined as well as confined flow and to most, if not all, geometrical configurations.

The derivation of eq 8 illustrates the criticality of the choice between alternative variables in the process of speculative dimensional analysis. Had Prandtl chosen P, or even −dP/dx, which are valid alternatives to τ_w, as the dependent variable, he could not have derived this law by this process. My guess is that he tested many variables and combinations thereof before arriving at the ultimately productive choice.

With this success in hand, Prandtl logically turned his attention to the other extreme, the region near the centerline, and speculated that the velocity gradient there might be dependent primarily on the turbulence and negligibly on the viscosity. The consequent routine dimensional analysis for du/dy without the viscosity as a variable led to

du+/dy+ = φ{y+/a+}    (9)

and the formal indefinite integration of eq 9 from the centerline to an arbitrary nearby location led to

u_c+ − u+ = ψ{1} − ψ{y+/a+} = ξ{y+/a+}    (10)

which is known as the "universal law of the center". The term u_c+ − u+ is known as the "velocity defect". The neglect of the viscosity as a variable near the center of a round tube is now recognized as a usual simplifying assumption, but it should not be overlooked that it is a valid one only in terms of du+/dy+ and only for confined flows with symmetry.

Millikan,8 a few years later in 1938, noted that both the "law of the wall" and the "law of the center" were reasonable approximations for experimental data in the region intermediate to the wall and the centerline, and speculated that there might be some region of y/a in which their predictions were essentially equal. With great mathematical insight, he recognized that only one expression conformed functionally to that requirement, namely,

u+ = A + B ln{y+}    (11)


and its counterpart

u_c+ − u+ = B ln{a+/y+}    (12)

Equations 11 and 12 are known as the laws of the turbulent core.

The importance of eq 7 in another respect should not be overlooked, because of the developments relative to the velocity distribution that followed from it, namely, eqs 8−12. It can be inferred from eq 7 that integration of the velocity over the cross-section results in

u_m+ = (2/f)^(1/2) = φ{a+}    (13)

The dependence of u_m+ on a+ is perhaps the most important member of the usual simplifying assumptions for turbulent flow in a smooth round tube. Equation 13 does not preclude relationships in the form of f = φ{Re}, because Re = 2a+u_m+, but the latter prove to be implicit and require iterative solution, whereas those in the form of eq 13 do not.
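The contrast between the two forms can be sketched in a few lines. The logarithmic expression assumed below for φ{a+} is a placeholder with invented constants, used only to show that eq 13 is explicit in a+ while the corresponding f = φ{Re} form must be solved iteratively.

```python
import math

def um_plus(a_plus):
    # Hypothetical log-law-like u_m^+ = phi{a+}; constants are placeholders.
    return 2.5 * math.log(a_plus) + 2.0

def f_from_a_plus(a_plus):
    # Eq 13: u_m^+ = (2/f)^(1/2), so f follows directly, with no iteration.
    return 2.0 / um_plus(a_plus) ** 2

def f_from_re(re):
    # f = phi{Re} is implicit because Re = 2 a+ u_m^+ couples a+ and u_m^+;
    # here a simple bisection on a+ does the required iteration.
    lo, hi = 1.0, 1e7
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        lo, hi = (mid, hi) if 2.0 * mid * um_plus(mid) < re else (lo, mid)
    return f_from_a_plus(lo)

print(f_from_a_plus(1000.0))  # explicit in a+
print(f_from_re(1.0e5))       # implicit in Re: required a root search
```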

3. CORRELATION

Correlations provide an essential resource for the design of chemical plants and the analysis of chemical processing. Fame and credit often accrue to those who devise useful correlations rather than to those who obtain the experimental data upon which they are based. From the earliest days of chemical engineering, these correlations have taken the form of graphical representations or of empirical algebraic equations representing straight lines or simple curves. That practice has been utilized both for physical−chemical properties and for the rates of the individual processes. Although theoretical concepts have yet to replace correlations on a broad scale, they have had an ever-increasing role in chemical engineering in two respects: first, by providing a substitute for experimental data in the form of values obtained by means of the numerical solution of a theoretical model, and second, by providing guidance for the construction of correlating equations. The first role is outside the scope of this manuscript, but several examples of the second role follow.

3.1. The Churchill−Usagi Equation (CUE). In 1972, Churchill and Usagi9 proposed the following canonical equation for correlation:

y{x} = [y_0{x}^p + y_∞{x}^p]^(1/p)    (14)

In 1974, they explored its extended usage for more than one independent variable and/or three or more regimes.10 Equation 14 may be noted to constitute the pth power-mean of the two limiting asymptotes, y_0{x} and y_∞{x}, for vanishingly small and unlimitedly large values of x, respectively, and thereby to interpolate between them. This means of correlation was utilized earlier by others; our contribution was the recognition of its potential generality and the codification of its construction, including a methodology for the choice of a numerical value for the arbitrary exponent. The choice of p as the symbol for the combining exponent is a deliberate attempt to avoid confusion with n, which is commonly used to symbolize powers of particular variables or dimensionless groups of variables.

The asymptotes are the critical element of the CUE. Either or both may incorporate a theoretically based dependence on parameters and secondary variables, as well as being based on the primary independent variable. Their selection provides the opportunity, if not the necessity, for ingenuity, and often is a repository of idealizations and simplifying assumptions. Although some generic guidelines exist for evaluation of the validity and applicability of particular asymptotes, most of the choices involve details that arise from the nuances of physical behavior rather than from mathematical or structural considerations. The general constraints include the following:

(1) The asymptotes must both be free of singularities.
(2) The asymptotes must both be upper bounds or must both be lower bounds.
(3) The asymptotes must intersect once and only once.
(4) Limiting values of zero and infinity are not directly applicable as asymptotes.

Flexibility often exists in the choice among asymptotes that meet these requirements. Correlations in the form of the CUE are becoming commonplace, if not yet the norm, in fluid mechanics and convective heat transfer. However, the applicability of the CUE is not limited to those two topics. Correlations in this form have been devised for applications as diverse as the pressure drop in flow through a packed bed, binary vapor−liquid equilibrium, the rate of enzymatic reactions, the rate of a human running on a track, and the approximate representation of mathematical functions such as erfc{x}. Two asides are irresistible, if not essential, to the objective herein: first, the algebraic representation of "Fermat's last theorem" is a special case of eq 14, and second, the Danish polymath Piet Hein made a career out of its applications to architecture and design (see Gardner11).

The exponent p in eq 14 was conceived to be arbitrary, and, as already noted, a protocol was devised for its evaluation. However, the predictions of eq 14 are so insensitive to the numerical value of p that an integer or the ratio of two integers is ordinarily chosen for convenience. In a few instances, the combining exponent has been found to correspond to a theoretical solution. In most of those, such as the Ergun (or Forchheimer) equation (see Figures 3 and 4 in ref 10) or Ohm's law, the theoretically determined combining exponent is 1 or −1. As an aside, and as an example of a misidentified simplifying assumption, the two terms (asymptotes) of which the Ergun equation is comprised are often cited as representing laminar and turbulent flow, but the latter actually represents inertial flow. The one notable exception, with regard to a theoretically derived combining exponent other than 1 or −1, is that for assisting forced and free convection. A value of 3 was determined from experimental data (see Figures 2 and 3 in ref 12) and subsequently discovered to correspond to theoretical solutions first derived by Kitaura and Tanaka13 and then independently by Ruckenstein.14 The solution of Kitaura and Tanaka was subsequently generalized for multiple mechanisms of assisting convection (for example, translation, rotation, and vibration).

The utilization of eq 14 for the formulation of correlating equations constitutes an inconspicuous revolution in chemical engineering, in that it is gradually replacing power functions, which, as noted, are always in functional error when applied over a range of an independent variable or parameter. Expressions in the form of eq 14 are approximations by virtue of interpolation, and empirical by virtue of the arbitrary combining exponent (the particular power-mean). On the other hand, their predictions are not only insensitive to the numerical value of that exponent but, in most instances, remarkably accurate.
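A minimal sketch of eq 14 in code, with invented asymptotes y_0{x} = 1 and y_∞{x} = x that cross at x = 1, illustrates the insensitivity to p described above.

```python
import math

def cue(x, y0, yinf, p):
    # Eq 14: the pth power-mean of the two asymptotes.
    return (y0(x) ** p + yinf(x) ** p) ** (1.0 / p)

y0 = lambda x: 1.0   # small-x asymptote (invented constant)
yinf = lambda x: x   # large-x asymptote (invented)

x = 1.0  # the crossover, where the sensitivity to p is greatest
for p in (2, 3, 4):
    print(p, round(cue(x, y0, yinf, p), 3))
# 2 -> 1.414, 3 -> 1.26, 4 -> 1.189: even at the crossover, doubling p
# shifts the prediction by only ~16%, and far from the crossover the
# dominant asymptote controls, so the choice of p scarcely matters.
```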


4. CONCEPTUAL CONCEPTS AND VARIABLES

The use of conceptual concepts and variables in chemical engineering is so pervasive that it may be forgotten that they are arbitrary and subject to limitations: the usual simplifying assumptions. Several examples follow.

4.1. The Heat-Transfer Coefficient. The heat-transfer coefficient is perhaps the best example of a conceptual variable. Newton,15 in 1704, noted that, in free convection from a heated immersed body (a horizontal cylinder), the heat flux is proportional to the surface area and to the difference in temperature between the body and the surrounding air. This is perhaps the greatest single conceptual advance in the history of heat transfer and related subjects, in that it allows a reduction in the number of quantities required to describe the process from four to one. That one remaining compound variable is now known as the heat-transfer coefficient, and, in algebraic notation, Newton's concept may be expressed as

h = Q/A(T_s − T_∞)    (15)

The attribution of the concept to Newton has been disputed by some because eq 15 and the coefficient h itself are later expositions. This concept, although usually encompassing some degree of approximation, and scorned by a few iconoclasts, has remained in active use for over 300 years. The few outright failures are generally a consequence of the invalidity of the simplifying assumptions in a particular application rather than in general. One such failure is illustrated in Section 10.11. Eventually, the concept of a heat-transfer coefficient was adapted for forced convection in tubular flow by replacing the free-stream temperature with the mixed-mean temperature of the fluid. With the passage of time, the concept of a heat-transfer coefficient has also been adopted and adapted for equivalent coefficients for flow and mass transfer, for example, as the friction factor, the drag coefficient, the orifice coefficient, and the mass-transfer coefficient. These compound variables have been utilized by chemical engineers for more than one hundred years, and they remain invaluable and irreplaceable. Most of the technical database in heat transfer is compiled in terms of these coefficients.

4.2. Lumped-Parameter Models. The adaptation of the concept of a heat-transfer coefficient for heat exchange between a stream of fluid and a wall in terms of the mixed-mean temperature may have been the origin of the concept of a lumped parameter. Although models based on lumped parameters are scorned by some elitists, they are the workhorse of process design, and a few of them merit identification here.

The mixed-mean velocity appears throughout the literature of chemical engineering, because of the pervasive use of tubular flow in the processing of chemicals and petroleum. For constant density, it is defined as

u_m ≡ ∫_0^1 u_r d(r/a)^2    (16)

The mixed-mean velocity is equal to v/A, where v is the volumetric rate of flow and A is the cross-sectional area, and the integration is unnecessary insofar as v is known.

The mixed-mean temperature has also proven to be a very useful quantity in chemical engineering for the same reason. For the flow of an idealized fluid of constant density and constant heat capacity through a round tube, it is defined as the integral of the radial temperature distribution weighted by the radial velocity distribution; thus,

T_m ≡ ∫_0^1 T_r (u_r/u_m) d(r/a)^2    (17)

This quantity can, in principle, be determined by means of a direct but disruptive experiment, namely, diverting the entire stream through a stirred vessel and measuring its exiting temperature, thus leading to the now-obsolete alternative name, the mixing-cup temperature.

The mixed-mean concentration of the species A in a fluid stream of constant density can similarly be expressed as

C_Am ≡ ∫_0^1 C_Ar (u_r/u_m) d(r/a)^2    (18)

These three mixed-mean quantities are well-known, but it is important to note the usual simplifying assumptions in their definitions. They may also be defined and applied for variable physical properties.
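As a sketch of the weighting that eq 17 performs, the example below evaluates T_m numerically for the parabolic velocity profile of fully developed laminar flow and an assumed dimensionless temperature profile T = (r/a)^2; both profiles are illustrative choices, not results from the article.

```python
import numpy as np

s = np.linspace(0.0, 1.0, 2001)   # s = (r/a)^2, the variable of integration
u_over_um = 2.0 * (1.0 - s)       # parabolic profile: u/u_m = 2[1 - (r/a)^2]
T = s                             # assumed temperature profile (illustrative)

Tm = np.trapz(T * u_over_um, s)   # eq 17
print(Tm)                         # -> ~0.333
# The mixed mean (1/3) is lower than the area average of T (1/2) because
# the fast-moving core is weighted more heavily than the slow wall region.
```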

4.3. The Equivalent Thickness for Pure Conduction. This quantity, defined as δ_e = k/h = kΔT/j_w, has repeatedly been proposed as an alternative to the heat-transfer coefficient, but, with one notable exception, it has always proven to be inferior in terms of correlation, generalization, and insight, and, but for that exception, might have been relegated to the ashbin of failed concepts. That exception was its use by Langmuir16 in 1912 in the context of an analysis of the heat loss by free convection from the filament of a partially evacuated electrical light bulb. He utilized an equivalent thickness for thermal conduction to derive an approximate expression for the effect of the curvature of the cylindrical filament, vis-à-vis a vertical flat plate, on the rate of heat transfer by free convection. The resulting expression, which, after nearly a century, has not been improved upon, and which is based on the applicability of the log-mean area for conduction across a cylindrical layer, is

Nu = 2/ln{1 + 2/Nu_f}    (19)

Here, Nu_f symbolizes the correlating equation for laminar free convection from a vertical flat plate. It should be noted that the effective thickness is not present in the final expression. This relationship has also been found to be uniquely useful as an approximation for both free and forced convection from a horizontal cylinder as the Grashof and Reynolds numbers, respectively, approach zero, and is readily adapted for the region of the entrance in laminar tubular flow and for mass transfer. When applied to a spherical layer, this concept invokes the geometric-mean area and leads to the exact asymptote for a decreasing Grashof number or Reynolds number, namely, Nu = 2. As a consequence of eq 19, the equivalent thickness is a candidate for the most useful of simplifying assumptions.
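Eq 19 is easily exercised numerically; the flat-plate values Nu_f below are arbitrary inputs chosen for illustration.

```python
import math

def nu_cylinder(nu_f):
    # Eq 19: Langmuir's curvature correction for a horizontal cylinder,
    # given the flat-plate free-convection value Nu_f.
    return 2.0 / math.log(1.0 + 2.0 / nu_f)

for nu_f in (0.5, 2.0, 10.0, 100.0):
    print(nu_f, round(nu_cylinder(nu_f), 3))
# 0.5 -> 1.243, 2 -> 2.885, 10 -> 10.97, 100 -> 101.0: the curvature
# correction raises Nu above Nu_f, matters most when Nu_f is small, and
# fades as Nu_f grows, as expected for a thin layer on a large cylinder.
```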

4.4. Fully Developed Flow. Full development is invariably a postulate, whether noted or not, in theoretical expressions and correlating equations for both the laminar and turbulent regimes of flow in a round tube. A regime of development of the flow actually exists in all instances and depends critically on the geometrical configurations before and at the inlet. The


pressure gradient, which is of primary concern in bulk transport, and the radial velocity distribution, which is of primary concern in forced convection and chemical conversions, vary greatly near the inlet. The postulate of full development is so ubiquitous not because it is always an acceptable approximation, but because it results in such a great simplification in the modeling. If the density changes significantly with length, because of the pressure drop, or if the density and/or viscosity change significantly with temperature, because of heat transfer or an energetic chemical reaction, fully developed flow may not be attained before the fluid exits from the tubing. The number of tube diameters in which the velocity profile and pressure gradient differ significantly from those for full development is difficult to generalize, other than that it is much shorter in turbulent flow than in laminar flow. Whether or not the development of the flow is important should be decided on a case-by-case basis. There are three possible choices: neglect the existence of the regime of development, estimate the effects of its neglect, or take the development into account rigorously. The first choice is the common one, but the second choice is the practical one in most instances. The third choice, that is, computation of the development, is discussed in the subsequent section on fluid flow.

4.5. Fully Developed Convection. The concept of fully developed convection in tubular flow is as useful and as pervasive as that of fully developed flow, but it is much more subtle, and its initial formulation required more ingenuity. Fully developed convection differs for fully developed and developing flow. It was belatedly generalized by Seban and Shimazaki,17 who codified its dependence on the thermal boundary condition at the wall.

If a uniform heat flux density is imposed on the wall of a tube through which a fluid is passing, the mixed-mean temperature of the fluid must increase linearly with axial distance insofar as the density and heat capacity can be considered to be invariant. It follows that the temperature of the wall must thereafter also increase linearly at the same rate, and that (T − T_0)/(T_m − T_0), (T_w − T)/(T_w − T_m), and the heat-transfer coefficient must approach asymptotic values. Fully developed convection for uniform heating is thus defined by the near-attainment of asymptotic values for these quantities. It follows that ∂T/∂x → ∂T_w/∂x → ∂T_m/∂x → dT_m/dx, which allows simplification of the differential energy balance.

A uniform wall temperature also results in an approach to an asymptotic value for the heat-transfer coefficient and for (T_w − T)/(T_w − T_m), but not for (T − T_0)/(T_m − T_0). Seban and Shimazaki recognized that the attainment of an asymptotic value of the first of these quantities implies that its derivative can be equated to zero. They thereby formulated

∂T/∂x = (∂T_m/∂x)((T_w − T)/(T_w − T_m))    (20)

The two terms on the right-hand side of eq 20 are both independent of the radius, and therefore ∂T/∂x can be replaced by dT/dx. Substitution of this expression in the differential energy balance results in considerable simplification.

Developing convection occurs in almost all practical applications but is often overlooked, and fully developed convection is applied for the entire length of a tube, which is an approximation at best. Fortunately, this is a conservative approximation, in that the heat-transfer coefficient is highest at the point of onset of heating or cooling. Most analytical and numerical solutions for thermal convection are for fully developed convection, and thereby for uniform heating or a uniform wall temperature, because of the relative simplicity of the behavior. The former condition can be approximated in practice by electrical-resistance heating of the tube wall or by equal-counter-enthalpic flow, and the latter condition is accomplished by subjecting the outer surface of the tube to a boiling fluid for heating or to a condensing fluid for cooling. Constant viscosity, as well as constant density and heat capacity, is implied by the concept of fully developed thermal convection.

One classical solution for developing convection in fully developed flow is that of Graetz18 for a uniform wall temperature. It is in the form of the sum of an infinite series of terms consisting of eigencoefficients and eigenfunctions, but it reduces to one term for fully developed convection. As an aside, numerical integration has now become more accurate and less time-consuming than utilizing the Graetz solution or its counterparts for a uniform heat flux density or fully developed turbulent flow. It is obviously desirable to recognize the existence of the simplifying assumptions identified here when applying a closed-form solution, a numerical solution, or a correlating equation for convective heat transfer.

4.6. Lower Limiting Values for Convection. The lower limiting value of 2 for the Nusselt number (Nu) for free or forced convection from the outer surface of a sphere as Re or Gr approaches zero is also useful as a test of experimental data; lower values are presumably in error. All other finite bodies have a greater limiting value. The exact values of 48/11 and 3.657 for Nu in fully developed convection in fully developed flow in a round tube with a uniform heat flux density and a uniform temperature on the wall, respectively, serve similar roles as limiting and test values.

4.7. The Log-Mean and Mixed-Log-Mean Temperature Difference. These two quantities are applicable, with some constraints, for countercurrent and concurrent flow in heat exchangers, as is the correction factor for deviations from these mean temperature differences in other configurations of flow. Those constraints, although too specialized to justify description here, should not be overlooked.

5. THERMODYNAMICS

The thermodynamics that is taught in chemical engineering not only constitutes the most important element in the curriculum but also differs greatly from that taught in chemistry, physics, and the other branches of engineering. The differences with respect to that taught in chemistry and physics have arisen because chemical and petroleum processing take place primarily in steady flow through round tubes or stirred vessels, and are most expediently described in Eulerian rather than Lagrangian coordinates. The differences with respect to that taught in the other branches of engineering have arisen because of the involvement of chemical engineers with a broader range of materials, including gases other than air and water vapor, gases under vacuum and high pressure, liquids other than water (including non-Newtonian ones), and two-phase dispersions, including bubbles in liquids, droplets in both gases and liquids, and solids in fluidized and packed beds.

As graduate students in chemical engineering, several of us selected, collectively, as a brave cultural adventure, an advanced


graduate course on thermodynamics in physics taught by David M. Dennison, a former student of Niels Bohr. One of us asked, on behalf of all, why he consistently referred to the van der Waals equation as "the real-gas equation", although it was only a mechanistic approximation. The graduate students in physics, who constituted the vast majority of the class, were openly irritated at our temerity in challenging their idol, but he took the question good-naturedly and seriously, and conceded that the van der Waals equation was not exact. He went on to say that he was astounded that chemical engineers did not find it exact enough for all practical purposes. As an aside, the van der Waals equation is favored by physicists because many of its predictions are qualitatively correct, and physicists are less concerned with quantitative predictions than are chemical engineers. The purpose of this anecdote is to demonstrate that a "usual simplifying assumption" that is acceptable in physics may not be acceptable in chemical engineering.

6. SEPARATIONS

The separation of fluids into their molecular components has largely been appropriated by chemical engineers, and thereby is chosen as the subject of the first special field herein.

The McCabe−Thiele Method. Most chemical engineers first encounter the phrase "the usual simplifying assumptions" as undergraduate students in a class in separations, in the context of the graphical method for calculating the required number of equilibrium stages in the partial separation of the components of a binary mixture of miscible liquids in a column with discrete trays. Each of the usual simplifying assumptions, including, but not limited to, perfect mixing of the liquid on each tray, perfect mixing of the vapor between the trays, a fixed molar rate of flow of liquid across each tray above the feed tray and another fixed rate below it, and the attainment of the equilibrium composition in both the gaseous and liquid phases leaving each tray, was necessary for the truly ingenious formulation conceived by two graduate students (see the work of McCabe and Thiele19). Because of its relative simplicity, and because it provides great insight, this methodology is still taught to students and utilized by practitioners, even though all of the simplifying assumptions have been proven to be crude approximations at best.

In order to retain the merits of simplicity and insight of the McCabe−Thiele methodology while recognizing its possible inaccuracy, a "fudge factor", called the "efficiency", was introduced. The number of ideal stages is divided by this "efficiency" to estimate the number of real trays that are required to produce the same separation. An entire literature of irrational expressions was gradually devised to estimate this efficiency.

Soon after my graduation with a BSE, I was entrusted with the design and the supervision of the operation of fractionating columns of large diameter and many trays in a petroleum refinery. I was stunned when the analysis of samples of the liquid collected from the overflow of some of the trays revealed the tray efficiency to be greater than 100%. The primary explanation was, of course, the variation of the composition of the liquid across the tray, which results in a greater rate of mass transfer than that corresponding to the highly idealized model. A generalized methodology to predict the behavior in a distillation column in terms of fluid mechanics and mass transfer still does not exist, primarily because of widely varying hardware and dimensions. The best solution, with respect to the classroom, appears to be to make sure that the students are aware of the limitations of the "state of the art" and the reason for the concept of a tray efficiency. This example illustrates the usefulness of simplifying assumptions in a conceptual sense, even after they become recognized as false, but it also illustrates the desirability of identifying and acknowledging their limitations.

An implicit simplifying assumption of the McCabe−Thiele concept is that it is strictly applicable only to binary mixtures. Accordingly, caution is in order in the instance of a third component. An example is the presence of a trace of propane in the feed to a deisobutanizer. The propane accumulates in the uppermost trays, and the maximum ratio of isobutane to normal butane occurs several trays below.
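For concreteness, here is a minimal sketch of the stage-stepping construction for the rectifying section under the simplifying assumptions listed above (equilibrium trays and constant molal overflow); the constant relative volatility, reflux ratio, and compositions are invented values, not data from the article.

```python
# Assumed values: relative volatility, reflux ratio, distillate and feed
# compositions (mole fraction of the light component).
ALPHA, R, X_D, X_F = 2.5, 3.0, 0.95, 0.50

def x_in_equilibrium_with(y):
    # Invert the constant-relative-volatility curve y* = a x / (1 + (a-1) x).
    return y / (ALPHA - (ALPHA - 1.0) * y)

def operating_line(x):
    # Rectifying operating line: y = R/(R+1) x + x_D/(R+1).
    return (R * x + X_D) / (R + 1.0)

x, y, stages = X_D, X_D, 0
while x > X_F:                    # step off trays down to the feed composition
    x = x_in_equilibrium_with(y)  # horizontal step to the equilibrium curve
    y = operating_line(x)         # vertical step to the operating line
    stages += 1
print(stages, "ideal stages above the feed")
# Dividing the ideal-stage count by an overall tray "efficiency" (the fudge
# factor discussed above) gives the usual estimate of real trays.
```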

7. FLUID MECHANICS

Chemical engineers share an interest in fluid mechanics with aerospace, mechanical, civil, environmental, petroleum, and nuclear engineers, and, to a limited extent, physicists. However, in almost all instances, the interests of chemical engineers differ from those of the others in terms of geometry and by virtue of the inclusion of reactions and a broader range of fluids. Hence, the illustrative topics herein are not necessarily applicable to those other fields.

7.1. Laminar Flow in a Round Tube. The concept of the friction factor originated at least as early as 1855 and is represented herein by the version introduced by Fanning,20 namely, f = 2τ_w/ρu_m^2. It is not, as it is sometimes called by students, a "fudge factor", but rather the dimensionless ratio of well-defined measurable quantities. For laminar flow in a round tube, the exact theoretical expression known as Poiseuille's or Hagen's law, namely,

f = 16/Re    (21)

has been found to provide adequate predictions under most circumstances. The principal idealizations that were utilized in the derivation of eq 21 (the usual simplifying assumptions) are the postulates of fully developed flow, of Newton's second law of motion (namely, that the acceleration of a mass generates a force), of Newton's law of viscosity (namely, that a velocity gradient creates a shear stress), of invariant density and viscosity, and of a Reynolds number, Re = 2au_mρ/μ, of ...

... 800 000 but within 5% for Gz > 40. It has been adapted for many other conditions, one of which is noted subsequently. The current context calls for the recognition of the many idealizations, an appreciation of their choice by Lévêque, and a recognition of the resulting limitations. The most important limitation is the requirement of a large value of Pe = Re Pr by virtue of a large value of Pr, because Re is limited in magnitude to