
Ind. Eng. Chem. Res. 2004, 43, 246-269

A Systematic Methodology for the Design Development and Scale-up of Complex Chemical Processes. The Role of Control and Concurrent Design

Reuel Shinnar†

Department of Chemical Engineering, The City College of the City University of New York, New York 10031

The paper presents a coherent methodology for the development of the design and control for new complex nonlinear chemical processes. The method uses concurrent design in which control considerations are an important part at all phases of the development and design. Control is used to compensate for the impact of the model uncertainties on the design. The method indicated is based on the author’s experience and summarizes his recent work focusing on safe scale-up and the ability to meet specifications, which minimize the development cost and in most cases eliminate the need for large pilot plants. I. Introduction This paper describes a consistent methodology for the design and control of complex nonlinear systems, especially chemical and other process plants. The methodology, which was developed by the author and Professor I. H. Rinard1-3 with the help of Professor Manfred Morari,2 is based on almost 50 years of personal practical experience in design and control in a wide variety of processes and the analysis (post mortem) of a number of large failures of new designs, all of them based on large pilot plants. The methodology provides a framework for the design of new processes, defining the information required for safe scale-up and minimizing both cost and time delay. It requires integration of control considerations not only into the initial design but also into the development of the process itself. Actually, the methodology prescribed here is not totally new. While different from what one finds in textbooks and the academic literature in design and conceptually different from what is normally called concurrent design, it is based on what experienced designers do intuitively. We only give it a systematic framework. A problem with straightforward use of available design algorithms is that they require a far more accurate model than is normally available for a first design or even for a second one. Furthermore, the majority of chemical processes is nonlinear. Linear control theory is in most cases sufficient for tuning control loops, but in order to work, it often requires that the process is physically linearized around a steady state by keeping, via suitable control, the process variables in a narrow range. There are, however, some fundamental reasons why design requires judgment and cannot be completely algorithmic. Both the design and the control capabilities of a process strongly depend on the specifications. Furthermore, the design depends on the information available and has to take into account the cost of obtaining additional information. At best, the design of both the process and the control strongly depends on the critical features of a process, which are often easy to grasp but hard to give an exact mathematical and algorithmic formulation to. † Tel.: (212)-650-6679. Fax: (212)-650-6686. E-mail: shinnar@ chemail.engr.ccny.cuny.edu.

Most published design methods start with a model. Very few academic reaction and control engineers can understand that at the time of the design there is very seldom a model for the more complex nonlinear part of the plant. I discussed the role of modeling,4,5 but it had limited impact on control. I distinguished between design models, kinetic correlation models, and learning models.4 Learning models are simplified nonlinear kinetic models,6,7 which should be well-defined and simple enough to provide an understanding about the features, behavior, and potential instabilities of specific types of designs and reactions. They cannot be directly used in a quantitative way either for scale-up or design of the control but provide critical guidelines for understanding the problems faced in the design. A modeling effort is important in process and catalyst development, but such models, while very helpful, cannot give a basis for a reliable algorithmic design. These models are derived for special purposes and must be accurate enough for the specific intended use. As long as we understand the experimental basis for these models, we can use them with confidence for the specific purpose they were developed for. I have sometimes encountered very complex detailed models, in the design of a new process, but on closer inspection, they lacked even the minimum information required for a reliable scale-up. We often cannot get the data in a laboratory or pilot plant to predict instabilities in a large plant. The literature on learning models6-9 provides us the information to test in the laboratory for any nonlinearities that could cause instability in the large plant. In such a case, a proper design can protect against any feasible instabilities. However, how necessary are detailed accurate models for design? I worked for 30 years in fluid catalytic cracking (FCC). The first FCC unit was built in 193810,11 without a pilot plant and based on one kinetic data point for the cracking. It worked well because it was designed to take care of the limited information available. When I came to the FCC 20 years later, only steady-state correlation models had become available. The first reliable kinetic models were developed in the industry in the 1970s and 1980s, while the first reasonably reliable model was published in the 1990s.12 All of the academic control work on FCC is based on a model13 that predicts a completely different nonlinear steady-



state and dynamic behavior than a real FCC. It uses a coking relation13 for which the amount of coke formed for a given amount of oil cracked is a function of the catalyst-to-oil ratio, which is wrong for the present catalysts. We will show later that many present FCC units would not be able to operate with a catalyst that has the coking relation given in the literature;13 the first unit (1938), however, would have no problem.

There are two more items related to my experience in FCC that put the need and origin of the paper into perspective. The first FCC was designed and built by great chemical engineers, Keith from Kellogg and Lewis and Gilliland from MIT, with very few data, definitely not enough for any model. They based their design on the known physics of the system, the heat and mass balances, approximate cracking reaction rates, feed properties, and some knowledge of the overall combustion rates. It was designed to be able to operate satisfactorily, though not optimally, with any feasible catalyst or feed, and to be almost totally model insensitive. While mechanically it is inferior to the present designs, its control structure and conceptual design are far better than 70% of the present FCCs. In the last 10 years, industry has moved back to a similar design. Obtaining a reliable model, which is practically impossible in a small pilot plant, would have cost far more than the FCC, and the model would have been obsolete in 10 years when the catalyst changed. Later FCCs that simplified the design, such as the Exxon IV and the UOP design (the most popular of which is based on my patent14), could not operate with a catalyst described by the coking relation cited above13 or with new catalysts that are in the developmental stage. I fully admit that I had no idea of control at the time and was interested only in a cheaper design. When one designs a new jet passenger plane or a fighter, one has to spend a lot of money and time in obtaining reliable model information. The example of the FCC, one of the most complex chemical reactors I have encountered, clearly shows that in the chemical industry this is not needed. We have learned to do very reliable designs with limited model information, as long as we get the essential information to safely design and scale up the plant. However, we have to provide the plant with sufficient control capabilities to take care of large model uncertainties. When I talk about model uncertainties, I do not mean parameter uncertainties but uncertainties of potentially important features in the structure of the model itself.

The second lesson was just as crucial. There has been a very significant literature on FCC control. All of the papers presented control structures that would not work satisfactorily. The unit could crash if the inherent coking rate of the feed changed. Operators solved this problem by overriding the control manually. Several theoretical papers tried to provide solutions based on linear control theory.15-18 The worst control structure,15 while linearly the most stable, was nonlinearly unstable, practically guaranteeing to crash the unit.19 This will be discussed later in detail. These control structures were impeccable in terms of linear multivariable control theory, but in order for linear control theory to work, one first has to physically linearize the plant by providing direct strong control for the variables (reactor and regenerator temperature in the FCC) causing the nonlinear behavior.
When one normally speaks of linearization, one assumes that perturbations are small. In reality, this is

Figure 1. Cost of information for process design.

often not true. Luckily, only a few variables in each unit have strongly nonlinear impact. To be able to use linear control and ensure nonlinear stability, one has to ensure that those variables can be kept at a desired setpoint by either a control loop, which compensates for perturbations due to disturbances, or control actions. This requirement completely determines the 2 × 2 system studied by the author. If the system is physically linearized, linear control theory is useful for tuning and optimizing the plant, provided one does not change the control structure or eliminate manipulated variables based on linear considerations. The paper will summarize the methodology developed by us over the years to overcome all of these difficulties. II. Definition of the Problem To discuss a method for concurrent design and control of nonlinear complex processes, one first needs to clearly define the goals and the terms used. Our goal here is to ensure that the large plant can deliver a product that fulfills the product specifications at the required production rate and yield and in many cases allows one to change both the specifications and the production rate over a specified range. The plant has to be able to do so in the presence of specified disturbances in feed and catalyst properties. Furthermore, the method has to allow us to design with a clearly specified degree of information and preferably minimize this information as well as the need for pilot plants. The methodology has to take into account that reliable nonlinear models are not normally available at the time of the design. Pilot plants are expensive to build, even more expensive to operate, and significantly delay the start of the design (see Figure 1). For new products and materials, it is essential to minimize this time, which requires a design method that does not rely on pilot plants. Even if we build a pilot plant, a large fraction of the essential information is too expensive to obtain, and such pilot plants are preferably dedicated solely to confirm the operability and scalability of the intended design. Often one can choose a design that eliminates the need for a pilot plant. This is very important for new products, in which time is essential. We will define these concepts quantitatively later. There are many elements of a chemical plant such as heat exchangers, piping and intermediate storage systems, a simple distillation column, mass and heat balances, etc., for which one can get reliable models. However, for the critical units, such as chemical reactors, more complex separation processes, or integration


of both, we have only limited model information at the time of design, and often a reliable model does not become available even during the life of the plant because it is too expensive. Furthermore, for a new design, model information obtained online is often only reliable if the new design is very similar to the existing one. Catalysts are constantly improved or changed, which can change the kinetic model. A useful design methodology has to clearly specify the minimum model information required, which is strongly dependent on the specific design chosen for a process. Thus, a tubular reactor may require less model information than a fluid-bed reactor for the same process. Therefore, in designing new chemical processes, one has to balance the cost of more information (in time and money) with the extra cost required to compensate for the lack of information by a more expensive design. This has one strong inescapable implication. There is neither hope nor place for any totally algorithmic design methodology for either reactor or controller design. There are important subproblems in the design for which such algorithms are useful. Rosenbrock20 and I21,22 pointed this out long ago but with very little impact on the academic community.

III. Practical Controllability and Scalability

III.1. Controllability. For a control engineer, the expression “completely controllable” has a very clear operational and mathematically definable meaning, which is completely different from the way the same term is used by a plant manager. For him, complete (or, even better, satisfactory) controllability means that the plant can meet its specifications despite disturbances and despite changes in the feedstock and can do that for a specified range of product and process specifications. A plant can be noncontrollable in the rigorous definition and have satisfactory practical controllability. Examples are crystallizers and certain polymerization plants, which can operate satisfactorily despite small-amplitude limit cycles. On the other hand, rigorous controllability does not ensure practical controllability. When specifications are changed as a result of competition, a well-controlled plant can cease to be controllable.

III.2. Scalability. Safe scale-up or scalability means that one can ensure that the final plant or unit can meet all of the product and process specifications on the basis of data available at the time of design. It does not mean that the plant can exactly duplicate laboratory or pilot-plant operation. It may not even duplicate a large pilot plant. Large plants will, for identical inputs and operating conditions, have different outputs. To meet the scalability criterion, one has to ensure either that these differences are small enough or that through changes in operating conditions one can still meet the specifications. In other words, one can compensate by control for the impact of the scale-up. Therefore, to be scalable and controllable, a design has to (1) meet the product specifications while permitting changes in throughput, feed composition, and the product specifications themselves, (2) stabilize the plant and guarantee stable operation, and (3) compensate for uncertainties (a) in the model, (b) in disturbances and feed properties, and (c) due to the scale-up and the imperfect model information. Item 3c is often neglected in the control literature but emphasized in our approach. The role of control is not just for changing specifications or operating conditions

Figure 2. (a) Impact of changes in inlet and process conditions on the outputs of the system. yˆ p: vector describing all outputs and important state variables. (b) Impact of scale-up on reachable space. ypi: output variables, product properties.

and compensating for disturbances; it also has a crucial role for compensating for the uncertainties in the scaleup. To meet all of the requirements, the plant has to be designed to be able to meet all of these control needs with margins for the uncertainties. Such control capabilities require significant investments, heaters, coolers, large valves, compressors, etc., that must be provided in the initial design. This can be costly but always pays off in better controllability and safer scale-up. The reactor and complex separation processes are seldom more than 20-30% of the total investment. Providing these control capabilities is always cheaper and more reliable than building large pilot plants. This is what we call concurrent design. It has nothing to do with optimization. The only thing I optimize when I design a new process is the chance of the designer to have a job when the plant goes on stream. In Figure 2a, we try to give a quantitative definition of controllability as defined by us. Any specific product specifications yp can be written as

yp(min) < yp < yp(max)    (1)

Specifications are either one-sided, such as yp < yp(max) or yp > yp(min), or two-sided (eq 1) but never have a single value. Equation 1 represents an n-dimensional volume in the space of all ypi. (For a single specification, the volume reduces to a line and for two specifications to an area.) Figure 2a illustrates the concept for a two-dimensional yp. For the laboratory reactor, or a pilot plant, a specific operating point is a vector yˆ p in the space of all ypi. When


we change process conditions, the vector yˆ p changes, but the design determines the range over which this can be done. In the laboratory, we can find out what process variables have a significant impact on yˆ p. If we plot yˆ p over the whole space of changeable process conditions, we get a space ypi(reachable). To be controllable, the space of ypi(reachable) has to contain the space ypi(spec) and to be sufficiently larger. After scale-up, the space ypi(reachable) moves (see Figure 2a), and in a proper design, we have to be able to bracket and limit this motion to ensure that the space ypi(spec) remains in ypi(reachable). One can increase the reachable space by properly designed control, which is the object of this paper, and our design method focuses on defining control loops that do this effectively. In addition, one should use designs for which the movement of the output yˆ p by control can be predicted or at least bracketed from experience and laboratory experiments. Laboratory experiments were performed to evaluate the impact of the scale-up (due to mixing temperature nonuniformities, etc.) on the process and yˆ p.23,24 We can only do that for certain designs. The experiments and control strategy required will, therefore, strongly depend on the process, the design, and the specifications. During operations, any disturbances will move the operating point as well as the reachable space. The same is true for changes in throughput. Controllability means that by changing operating conditions (or the setpoint of our control loops) we can move the operating point back into the space ypi(spec). In these terms, the ability to change specs means that our reachable space is big enough to contain all of the required ypi(spec) (see Figure 2a). All of this has to be ensured in the design. Scalability can now be defined quantitatively (see Figure 2b). When a unit is scaled up, the vector yˆ p often changes for the same operating conditions from its value in the laboratory and so does the reachable space. Scalability means either that this change (motion in the space ypi) is small so that yˆ p is still in ypi(spec) or that we have sufficient control to move the output back into ypi(spec). III.3. Reducing the Displacement of ypi(Reachable) during Scale-up. The displacement occurs because scale-up changes the mixing and mass and heat transfer. These effects are the central subjects of chemical reaction engineering. There are also specific experiments to evaluate their impact. If a process is sensitive, one has to take measures to minimize those changes. Thus, for example, we should run a small pilot plant with the same size catalyst particles as the scale-up and design the flows in the unit as close as possible to either plug-flow or ideal mixing. If these deviations from the ideal case are kept small enough, we can bracket the impact. What one should not do is to rely on fluid mechanical modeling of complex flows coupled with kinetic equations to give reliable predictions for scaleup of a complex reaction or use residence time distributions for this purpose.24 Such simulations are very valuable as learning models but not for design. If needed, flows should be designed to be as close to ideal cases as possible. For cases where large scale-up is too risky, it is preferred to use smaller parallel units. Thus, we do not know how to scale up a complex process carried out around a mixing nozzle. The laminar flame TiO2 process scaled well from 1/4 to 1 in., but the 4-in. 
nozzle in the design could not meet specifications,

causing a billion-dollar project (year 2000 dollars) to be abandoned. For such cases, one has to follow the first commandment of the Bible, “Be fruitful and multiply”: use multiple nozzles in the final design.

III.4. Increasing the Reachable Space by Control. In a large number of processes (though not all), it is possible to strongly change yp(output) by changing critical variables in the process. Thus, in a combustor we can increase conversion by increasing the excess air. In a hydrotreater for fuel oil, we can increase sulfur removal by increasing the hydrogen pressure, by increasing the temperature, or by reducing the feed rate (space velocity). These critical variables, which we call dominant, can be state variables or internal flows that are either directly dominant (space velocity) or allow control of a dominant variable (temperature, H2 partial pressure). The whole concept of control is to provide manipulated variables that can change the process outputs. Safe scale-up implies providing a large ypi(reachable). One can determine the dominant variables and their impact in a properly designed laboratory reactor or small pilot plant, which should allow independent variation of temperature, pressure, partial pressures, space velocity, and concentration of reactants. One can also add small concentrations of intermediate products to test for potential autocatalytic or autoinhibiting effects. Properly designed laboratory experiments should allow one to test for mixing effects on yˆ p. A small laboratory reactor allows one to check each variable for its nonlinear impact. This is generally hard in a large pilot plant and is not permitted in a large plant, which makes nonlinear online model identification, even in existing processes, often impractical. Identifying all of the dominant variables and their impact is the basic minimum information that we need for reliable scale-up and control.

IV. Methodology for the Design of the Control

IV.1. Role of Specifications. We noted before the primary role of specifications in the design, and even in the definition of our concept. Let us elaborate on this item.

Product Specification. One important property of the specification is that ypi(spec) is a space and not a single point; the width of this space plays a critical role because a narrow space leads to more difficult scale-up and control. The second critical property is the penalty for deviating from the specifications. Does this lead just to a cheaper sales price, or does the product become useless, as in many polymerizations? The third critical feature is the capability of the plant to rectify off-spec production. In refineries, for example, off-spec product can either be blended off or reprocessed, while in high-quality polymers this is not feasible. Many petrochemicals, if not pure enough, can be redistilled, recrystallized, etc., reducing the penalty for off-spec production.

Process Specifications. In addition to product specifications, the plant has to fulfill certain demands required to achieve its economic purpose. This strongly varies from plant to plant and often has to be reconciled with the capability of the process, whereas product specs may often be prescribed by the market. Examples of process specs are production rates and the range over which they have to be varied during operation, yields, feedstock properties and variations, economics, etc. A design can meet all product specs, but if it does not meet


the spec for minimum yield, the design would be a total economic failure. Other types of process specifications are constraints that cannot be violated, such as temperature, an undesirable byproduct to be separated, recycle required, etc. In some plants, such as those for methanol, the main emphasis is on process constraints. In times of high prices, production has to be maximized. In times of low prices, yield has to be maximized and operating cost minimized. In methanol, product specs are not a concern because, with the present catalysts, they are easy to meet and methanol purity can be rectified in the distillation column. On the other hand, I was involved in the control of a polymerization plant for carpets. While this seems to be low tech, the specifications for spinnability and uniform dyeability were so hard to meet that all the emphasis was on keeping the molecular weight distribution within narrow bounds. Therefore, no variations in throughput or feedstock were permitted. I call this a spec-dominated control problem, versus the constraint-dominated design problem of methanol production. Refineries used to be totally constraint-dominated. Gasoline or diesel specifications were a minor concern. They could always be met by blending additives (such as lead). Today, refineries have moved toward specification-oriented control, somewhere in the middle between spec- and constraint-dominated control. Such a switch is very difficult, mainly because of management experience and paradigms. A designer must understand the total specifications of the plant in order to derive a realistic, safe design.

IV.2. Dominant Variables. The fact that, on the basis of experience, we can design and control many very complex systems with hundreds of state variables and very limited model information rests on the fact that such systems have a few internal variables that have a strong impact on stability and on the outputs of the system. By directly and independently controlling these variables, we can not only ensure stability but also move yˆ p over a certain range, defined as ypi(reachable). These dominant variables can be either state variables or internal flows. Examples of such variables in a chemical reactor are temperature, pressure, space velocity (which is an internal flow), catalyst activity, compositions, etc. Not all nonlinear systems have this property, nor are all systems designable with minimum model information and a limited number of control loops, but almost all practical cases in chemical plants are. However, the fact that one can stabilize such a system and change yˆ p over a significant range does not mean that the system is practically controllable, because the reachable space may not include the space of ypi(spec).

IV.3. Inventory Variables. In addition to dominant variables, we also have to control the so-called inventory variables, which are the variables that control the throughputs and the holdups in the system, as well as the hydrodynamic stability. Design of the control of such systems is well-known and almost always a linear control problem. It does not require elaboration here, but it has to be part of the design. An inventory variable can also be dominant, for example, throughput, which is almost always dominant but may not be available for control of yˆ p because it is used for overall plant control. Pressure can be dominant but does not have to be. For systems in the vapor phase, pressure must always be

tightly controlled for stability. However, the setpoint of the pressure is available for control of yˆ p.

IV.4. Stability and Nonlinearity. An important problem in the design of complex nonlinear systems for continuous operation is to linearize and stabilize the system around a given steady state. There is a large literature on how to stabilize asymptotically unstable steady states or even operate chemical reactors in an oscillatory mode. There are very few cases where anybody intentionally designed a reactor that way. A good designer or manager will generally follow the “Shinnar-Darwin Principle of Process Design”: Unstable processes will not (or should not) survive to become full-scale processes. So, why does crystallization so often exhibit cyclic behavior? The explanation is very simple. Those crystallizers were always stable in the pilot plant; otherwise, they would not have been designed the way they are. Regrettably, in processes involving nucleation, stability is very scale dependent. Most crystallizer models8,9 do not predict such dependence; they assume an ideal stirred tank. This assumption does not hold for processes with highly nonlinear nucleation rates because with large dimensions the mixing time increases,25 and the short time in which an entering fluid element has a high supersaturation is sufficient for a strong effect. Similar scale-up problems exist for the majority of instabilities in many of the internal feedback processes because instabilities occur in a large unit but not in a laboratory reactor. An adiabatic reactor with complex nonlinear reactions can have a significant number of possible steady states (2n + 1). This is due to the feedback mechanism whereby the temperature increase caused by the release of the heat of reaction impacts the various reaction rates. This feedback effect is absent or small in a well-controlled, isothermal laboratory reactor. Often our data are insufficient to prove how many steady states there could be or what the exact region of permissible steady states is.26 Small pilot plants do not solve this problem. While this problem sounds prohibitively complex, it is actually fairly simple to solve, without any pilot plant. There is a tremendous literature on potential instabilities and nonlinear behavior of almost any type of chemical reactor and reaction, as well as complex separation processes.6,27,28 For this, we have to thank the pioneers of this field, Professors Aris and Amundson. By understanding the nature of potential instabilities, we can, by proper design, reliably prevent them from occurring or provide control to stabilize them, even if we have no assurance that this is needed because these instabilities may not occur under the actual design conditions. This is far cheaper and safer than pilot planting, or extensive research, because getting reliable data for predicting potential instability is very complex and time-consuming. All of these instabilities, nonlinear as well as asymptotic, involve a nonlinear internal feedback process, which is always based on one or at most two variables. By directly and independently stabilizing and controlling these variables, keeping them in a very narrow range, one can ensure the stability of the system. Consider, for example, a simple adiabatic chemical reaction (Figure 3). It has three steady states, with the intermediate steady state linearly unstable. We know7 that the driving force of the instability is the reaction temperature.
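To make this multiplicity concrete, the following minimal learning-model sketch (in Python; every numerical value is an illustrative assumption, not data from the paper) locates the steady states of a first-order exothermic reaction in an adiabatic stirred tank by intersecting the heat-generation curve with the heat-removal line, the situation sketched in Figure 3.

```python
import numpy as np

# Assumed, illustrative parameters (not from the paper)
q, V = 0.1, 1.0              # volumetric flow [m3/s], reactor volume [m3] -> tau = 10 s
CA_in, T_in = 2000.0, 300.0  # feed concentration [mol/m3], feed temperature [K]
k0, E = 1.3e13, 1.0e5        # pre-exponential [1/s], activation energy [J/mol]
dH = -3.0e5                  # heat of reaction [J/mol], exothermic
rho_cp = 4.0e6               # volumetric heat capacity of the stream [J/(m3 K)]
R = 8.314

def k(T):
    return k0 * np.exp(-E / (R * T))

def heat_generation(T):
    # steady-state mass balance of a first-order reaction in a stirred tank
    CA = CA_in / (1.0 + k(T) * V / q)
    return (-dH) * k(T) * CA * V           # W

def heat_removal(T):
    # adiabatic reactor: heat leaves only with the flowing stream
    return rho_cp * q * (T - T_in)         # W

T = np.linspace(300.0, 500.0, 20001)       # 0.01 K resolution
g = heat_generation(T) - heat_removal(T)
for i in np.where(np.sign(g[:-1]) != np.sign(g[1:]))[0]:
    kind = "stable" if g[i] > 0 > g[i + 1] else "unstable"
    print(f"steady state near T = {T[i]:.1f} K  ({kind} by the static slope test)")
```

With these assumed numbers the test reports a low-conversion state near the feed temperature, an unstable intermediate state near 370 K, and an ignited state near 450 K; the static slope test used here is only the classical necessary condition, not a full dynamic analysis.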
We do not know the boundaries for the existence of the stable upper steady state or the exact


Figure 4. Partial control scheme.

Figure 3. Adiabatic chemical reaction.

location of the unstable steady state. For example, there was a discussion of whether the operating steady states of the FCC are stable or not. If we design a reactor to be isothermal, it does not have multiple steady states. However, such a design could be prohibitively expensive to operate. We can achieve the same goal by providing the FCC with substantial heating and cooling capability and independently stabilizing the operating steady state. This is far less expensive than truly isothermal operation. We need only approximate information on heat balances and heats of reaction. In a laboratory reactor, one can also determine how sensitive the reactions are to temperature. One can unconditionally stabilize such a reactor and physically linearize its behavior by keeping all temperature variations and local temperature nonuniformities across heat exchangers below the critical value for nonlinearity. For an exothermic reaction, the maximum allowable temperature differential, ∆T*, is given by29

∆T* < RT0²/E    (2)

where T0 is the temperature of the steady state, E the activation energy, and R the gas constant. We should also use the same criterion for all temperature differentials across heat exchangers or adiabatic temperature rises in an exothermic reactor to ensure linearized behavior. In the same way, we can identify from the literature all potentially strongly nonlinear variables for most types of processes. We can then measure in the laboratory whether any of these variables have a significant nonlinear effect under the conditions in which the process operates. This nonlinear subset of variables is dominant by definition because the variables have a pronounced effect on yˆ p, and these variables should always get preference in the design of the control. There are two ways to ensure this dynamically: one is direct dynamic feedback control; the other is to keep these variables constant by providing large inertia in the design. Both are used. Thus, exothermic tubular reactors are designed with a large inventory of the cooling medium, eliminating any need for dynamic control.2,29 For reactors involving nucleation, one tries to provide some mechanism for direct control of the nucleation rate, thereby breaking the internal feedback loop between nucleation and product properties.1,8

IV.5. Independent Degrees of Freedom. The maximum control of yˆ p achievable in a given design is obtained by directly controlling all independently controllable dominant variables. We stress here independent because in a given design some dominant variables are

strongly related to other dominant variables. On the other hand, one dominant variable, e.g., the temperature, may be independently controllable at different values at two relevant locations in the reactor. This gives 2 degrees of freedom for control. In the same way, we may add a homogeneous catalyst at two separate locations in the reactor, giving 2 degrees of freedom for one dominant variable. For example, in an adiabatic plug-flow reactor, we can only fix the inlet temperature independently because the temperature profile along the reactor is a function of the inlet temperature and cannot be independently changed. We can stage that reactor and control the inlet temperature to each stage, or we can put a heat exchanger inside the reactor along its flow path to change the temperature profile. Thus, for a given system, we cannot change the number and type of dominant variables, but we can use a design that allows one to maintain some dominant variables at different values in specific sections. We call the number of independently controllable dominant variables the “practical degrees of freedom”. Controlling them directly and independently is the best we can do to achieve practical controllability, but it may not be enough. This requires an effective manipulated variable, for each dominant variable, provided at the design stage. This can be expensive but is almost always worthwhile. We note here the need for direct independent control, which fixes the control loops and does not leave room for any of the design methods published for linear control to couple manipulated and measured variables.30-32 However, we can substitute a dominant variable with an inferential variable strongly dependent on it. It is also important that the manipulated variable acts directly on the dominant variable, with as short a time delay as the design allows.

IV.6. Structure of the Control System. In most of the literature, the design of the structure is based on determining the available manipulated variables, choosing a set of measured variables, and then designing a linear control structure based on various criteria.33-35 There are also criteria to choose between different available measured variables. This is not the approach taken here or in industrial practice because we start with the variable that needs to be controlled. The control structure of a plant or unit is in most cases designed with several layers (see Figure 4). The first layer is inventory control, to ensure fluid-mechanical stability and to control the main feed flow into the unit, which is dictated by the needs of the total plant and the market. One has to be able to do that and try to design these controls with time scales much shorter or longer


than the time of the process. The methods for designing inventory control are well-known and are not discussed here. The next level, the main concern in our design method, is the primary control, which is also the basis of the dynamic control. We base the control on the set of dominant variables discussed before, designing into the system suitable manipulated variables. Preferably, it should contain all of the independently controllable dominant variables, but depending on the process and the specifications, one can decide on a subset. Anyway, this subset has to include all of the dominant variables that drive nonlinearities and potential instabilities. The loops are chosen by the need to directly and independently control the dominant variables in a way predictable from laboratory experiments. This leaves very little choice for matching measured and manipulated variables. In the absence of a completely reliable quantitative dynamic model, which we never have at the design stage and very seldom later, predictability of the control action is critical for the design. Very often some of these loops are much slower than the time scale of the unit. We still need to design such a loop, though it cannot be part of the dynamic control, the goal of which is linearization, stabilization, rejection of disturbances, and allowance of changes in operating conditions. However, these slow control loops can be essential for the ability to meet specifications and should be considered part of the primary control. Thus, for example, fluid-bed reactors allow maintenance of catalyst activity by addition and withdrawal of the catalyst. This is an essential direct control loop, but it is very slow (time scale of weeks) and therefore not in the dynamic control. Some of these manipulated variables will be part of the basic design and just have to be matched with the appropriate dominant variable. Others may have to be added to the design to provide the control for a specific critical variable. An example is a cooler or heater to control a temperature, not as an inherent part of the design itself but provided for direct control of this temperature. Our emphasis here is to provide direct predictable control of each specific dominant variable. Furthermore, we have to be able to predict from laboratory experiments, available models, or a database the action of a manipulated variable on the coupled dominant variable and the effect of a change in the setpoint of the dominant variable on yˆ p. One also has to provide a sufficient range of action for the manipulated variable to provide a large enough ypi(reachable). This primary dynamic control loop itself can be cascaded. Thus, if the dominant variable cannot be reliably measured or its measurement involves long delays, we can substitute an inferential estimator and adjust its setpoint from the measurement of the dominant variable itself. In some cases, when a dominant variable has a direct strong impact on one of the output specifications, we can substitute this specification for the dominant variable into the primary loop, only in the tuning. We do not take into account the gain matrix or the total control matrix in the choice of the loops. Our freedom of choosing is usually strongly limited by the design options. We only care about predictable, direct, strong control of the dominant variables. However, understanding the gain matrix or the multivariable control interactions is crucial for the design of the

control algorithms but not for the structure. This results in a square control structure, which is essential for a system with limited model information because we need integral control in each loop. This primary set of control loops provides the basis of the dynamic control, the goal of which is to stabilize the unit, compensate for disturbances, and permit reasonably fast changes of the setpoints. The next and top level of this control structure is the supervisory control, which is based on steady-state nonlinear model and uses all of the setpoints of the primary and inventory controls to ensure the meeting of the specifications and provide the capability to change them. It is not necessarily a square matrix and uses experimental correlations and nonlinear models. In addition to the setpoints of the dynamic control, the supervisory control also uses other controllable inputs, which are not controlled in a continuous or frequent way but only when needed. This includes a change of the feedstock or a change of the catalysts, which can be essential control actions, to meet changing specifications. I experienced many cases where the optimization or control of the unit gave highly unsatisfactory results because it focused solely on the setpoints of the dynamic control, totally forgetting items such as control of catalyst activity or change of the feedstock, which is essential in many cases, especially in refinery. The two levels primary and supervisory control have to be clearly separated in their time scales, and to avoid interactions, the supervisory control should have a time scale at least 3 times larger than the slowest critical loop of the dynamic control. The supervisory control is critical to meet the specs, while the primary loops have to be designed such that the setpoints are useful in the supervisory control. Therefore, we have to have good model-insensitive information on the impact of these setpoints on yˆ p. On the basis of such measurements, one can derive a reasonably quantitative steady-state model correlation for the supervisory control. Because the supervisory control is critical, it is impossible to design a primary control without a clear understanding of the role of the supervisory control in meeting specifications. Thus, while linear control theory is useful for designing the algorithms for the primary control (provided we physically linearize the system), it is not useful for choosing the supervisory control loops. The neglect of the impact of the nonlinear supervisory control needs has removed academic research from the needs of real design. One example is the recent efforts to choose setpoints, for a given manipulated variable, from a large set of potential measured process variables by economic or dynamic optimization methods.30-32 This totally neglects the needs of the steady-state supervisory control, which is far more important for economics than any dynamics. In continuous nonlinear process plants, aside from the need to stabilize the system, there are very few cases where fast response is crucial or the speed of the dynamic control is the main criterion in the economic viability of the plant. If so, it has to be designed into the unit itself. Because our goal is to control a specific dominant variable, choice of another variable will not speed up the control beyond the time scale of this variable. For example, several papers17,18 deal with achieving a faster response of the regenerator in an FCC, in which the dominant variable for stability is the


regenerator temperature. The regenerator is designed with a large time scale to provide the inertia for safe control. Any other variable that responds faster than the regenerator temperature is irrelevant to the control, and if the regenerator temperature is not kept within limits, the unit will crash when exposed to a large nonlinear disturbance.17,18 The literature on process control gives far too high an emphasis on fast response. I am probably the only academic that has personal experience with the design of an air-to-air missile and an FCC as well as many other reactors. Those are different worlds. In an air-toair missile, as well as an aircraft control, fast response is crucial because there are only a few seconds available for hitting the target or landing a plane. In an FCC, a crash (losing the steady state or flooding the reactor if the reactor temperature becomes too low) has a larger penalty (several million dollars) than can be earned by any better dynamic control. A good designer will therefore introduce sufficient inertia (preferably at least 2 h) into such a unit to allow the operator to recover from failures in one of the loops, from any wrong control actions made by him or by a computer. If needed, the operator has to have time to contact a supervisor at night. There are several other important aspects in the design of the primary loop. While a dominant variable is chosen for the integral part of the controller, the loop itself can be much more complex and benefit from many other measurements, using the methods of modern control (feed forward, multivariable, internal model control, etc.). It is important to prevent sign inversion when the setpoints are changed. If all of the important dominant variables are controlled, then one can predict sign inversion from laboratory measurements or by simple mass and heat balances. Thus, if the temperature of a combustor is controlled by the air rate, the sign of the control action changes when the operating range changes from substoichiometric air to excess air, but if this ratio (a dominant variable) is controlled, it cannot happen. An example of how hard it is to predict sign change is the standard control structure in many FCC papers17,18,36 where the flue gas temperature is controlled by the air rate. Multiple complex sign inversions are shown in the operating range.26 The position of these sign inversions is model- and catalyst-sensitive and cannot be predicted from laboratory data. This is another reason the linear control theory is not useful for choosing the main structure in a nonlinear system, which is linearized around different operating points. For supervisory control, the operating points have to be changeable, and one has to minimize the chance of a sign inversion. In our method, the potential for sign inversion is predictable from laboratory experiments. IV.7. Minimum Information Required for the Design. Having defined our design method, we can now deal with a critical aspect: how do we determine the minimum information for a given design? This is so strongly situation-dependent that all one can do is to provide guidelines and a systematic approach to this central problem of design. It clearly depends on the specifications, the nature of the process, and the design chosen. Let us discuss each item separately. IV.7.a. Impact of Specifications. One has to clearly understand the product and the process specifications, the penalty for not meeting them, and the capability of

the process itself to meet them. The design of the first FCC, mentioned in the Introduction, with very limited information was made possible by the fact that such a process was urgently needed, and all it had to achieve was to reliably crack a substantial fraction of the crude oil to products boiling in the temperature range required for a gasoline engine. In other cases, the specifications imposed are so tough that a specific process cannot meet them. Sometimes existing processes become obsolete if a competitor is able to achieve specifications that are not achievable by any control or modification in the existing plant. An example is the old American Cyanamid process for TiO2 from rutile, which was made obsolete by the DuPont process from TiCl4. Therefore, we need a thorough understanding of all specs and the difficulties encountered in the laboratory or pilot plant in complying with them.

IV.7.b. Nature of the Process. The most important part of the minimum information is to physically understand the nature of the process and the reason for doing it the way proposed. Models here are very helpful but not a substitute. Once we understand this, we can collect information on similar processes from the literature, which is very helpful if the investigator and designer lack personal experience. Obviously, one needs to demonstrate that, at least under laboratory conditions, the proposed process can meet the specifications. While a detailed kinetic model is helpful, it is not essential. It is more important to identify the dominant variables by using the large available literature and then measure their impact in the laboratory. We also have the tools to identify a priori whether there are any dominant variables that could have a strong nonlinear effect and lead to multiple steady states or instabilities. While one might not be able to identify potential instabilities in the laboratory, one can always measure in the laboratory whether these variables have a nonlinear impact, just as one can measure whether a process intermediate has an autocatalytic or autoinhibitory effect. Understanding and having data on the impact of all potentially dominant variables on the process performance is the most crucial part of the minimum information for reliable designs, in many cases far more important than pilot-plant data. I encountered cases where, at the time of the design, all that was available were data from a pilot plant close to the operating point. Because the large plant will not necessarily perform like the pilot plant, this is completely insufficient. I do not mean that we need accurate data; knowing the magnitude of the impact is sufficient to bracket the effect. Very often one can bracket a potential effect from mass and heat balances and from a physical understanding of the process. In that case, one can provide control with sufficient margins to take care of all uncertainties, even if those controller or design margins might not be needed. It still might be cheaper and faster (time is money) than obtaining the data, which sometimes is not even possible in the laboratory. For example, a solid catalyst or any reactor wall has an inhibitory effect on CO combustion and other free-radical reactions. This makes reliable measurements in a small laboratory reactor infeasible. However, we can bracket the potential impact on the heat balance and compensate for it in the design (see examples).
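As a hedged numerical illustration of such bracketing (a sketch only; the coke burn rate, circulation rate, and heat capacity below are assumed round numbers, not data from the paper), the heat released in a regenerator dense bed can be bounded between the limit where all carbon leaves the bed as CO and the limit where it is burned completely to CO2 in the bed, using only the standard heats of combustion:

```python
# Standard heats of combustion (approximate, kJ per mol of carbon)
H_C_TO_CO = 110.5    # C + 1/2 O2 -> CO
H_C_TO_CO2 = 393.5   # C + O2     -> CO2 (= 110.5 + 283.0 for the CO burn step)

coke_burn_rate = 20_000.0          # kg coke/h, assumed; treated as pure carbon
mol_C_per_h = coke_burn_rate * 1000.0 / 12.0

# Two limiting cases for the heat released in the dense bed:
#   minimum: all carbon leaves the bed as CO (CO burns later, in the dilute phase)
#   maximum: all carbon is burned to CO2 inside the bed
Q_bed_min = mol_C_per_h * H_C_TO_CO / 3600.0    # kW
Q_bed_max = mol_C_per_h * H_C_TO_CO2 / 3600.0   # kW

# Express the bracket as a regenerator temperature swing for an assumed
# catalyst circulation rate and catalyst heat capacity.
catalyst_circulation = 700.0       # kg catalyst/s, assumed
cp_catalyst = 1.1                  # kJ/(kg K), typical order of magnitude

dT_uncertainty = (Q_bed_max - Q_bed_min) / (catalyst_circulation * cp_catalyst)

print(f"bed heat release bracket: {Q_bed_min/1e3:.0f} - {Q_bed_max/1e3:.0f} MW")
print(f"equivalent regenerator temperature uncertainty: {dT_uncertainty:.0f} K")
```

The particular numbers do not matter; the point is that the span can be bounded from stoichiometry alone, so the design (catalyst cooler duty, circulation range, allowance for afterburning) can be given margins that cover it without reliable laboratory CO-combustion data.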
The problem is not just understanding what the minimum information is but how much one can compensate for the lack of accurate information by


taking care of it in the design. Concurrent design, using the capability of control to compensate for the uncertainty, plays a crucial role.

IV.7.c. Impact of Scale-up. The third type of minimum information deals with the potential impact of scale-up. Again, we are not talking about exact quantitative information but the ability to bracket this impact, limit it, or compensate for it. This clearly involves understanding of items IV.7.a and IV.7.b. For chemical reactors, we have a large literature on the subject,23,24 and the reasons for these impacts are clearly definable, even if they may not always be quantitatively predictable. There are basically two factors for the potential impact of scale-up on the space of ypi: the change in mixing and transport processes during the scale-up and temperature nonuniformities. There are several ways to obtain and reduce the required minimum information on the potential impact of the scale-up. One is experiments to investigate whether backmixing, bypassing, or finite mixing time in the unit has a significant effect either on the reaction rate or, even more important, on the product distribution. Thus, in alkylation any significant deviation from ideal and instantaneous complete mixing will give a completely different and undesirable product distribution. The effect is so large that standard design does not rely on mixing alone but uses a large excess of one of the reactants, separating it from the product and recycling it, to compensate for the effect of imperfect mixing. One can measure such an effect by comparing in the laboratory the product distribution of a set of experiments in a plug-flow reactor with that obtained in a mixed reactor. Having determined the sensitivity to transfer processes and temperature nonuniformities, one can minimize the required information for safe design by choosing a reactor that simulates an ideal reactor (a plug flow, a stirred tank, or a series of stirred tanks) as closely as possible, bracketing the effect of small deviations. One can also choose a design with predictable performance. Thus, for example, a tubular reactor is much easier to scale up than a fluid bed, though today we have fluid-bed designs that are very safe to scale37,38 and do not require a large pilot plant.

IV.8. Sufficiency. The last and critical part of any design process is to sit back at the end and review the information obtained, the design chosen, and the potential uncertainties and evaluate whether the design can meet the specifications with a sufficient safety margin. Here, simulation of the total design, using a model based on the dominant variables and the information obtained, and evaluation of different scenarios are very helpful. If the design can meet the specifications in a very robust way, we call this sufficiency. If one is not comfortable with the results, or the results clearly show that the specifications are not safely obtainable, one has several options: (1) renegotiate the specifications, which often are arbitrarily determined (this is feasible when the process is either a needed alternative or the only one available); (2) increase safety margins or change the design to control some dominant variables at different values or within wider limits and then restart the procedure; (3) look for a new process, and if one is already available, drop the previous idea.

V. Examples

V.1. FCC. V.1.A. Overview. The FCC is a useful example to explain our methodology. It is a complex

Figure 5. Scheme of an FCC.

system for which no accurate model exists. The feed has a complex and changing composition, is hard to analyze, and can only be approximately characterized. Furthermore, the catalysts change significantly during the operation, by aging and metal deposition. On the other hand, new catalysts are introduced from time to time with different kinetic properties. This can strongly change the model, but a well-designed FCC can benefit from the better catalyst. The knowledge needed to design and control such an FCC can be obtained in the laboratory. An FCC, properly designed and operated, is a very robust and flexible system. I now present a mental exercise: pretend we want to design a first FCC but, unlike in 1938, can use the knowledge of reaction engineering and design acquired in the last 60 years. Surprisingly, we end up with an almost identical design.

Definition of Specifications. With a new process, one starts with the process goals and specifications. The goal is to design an atmospheric catalytic cracking process that can crack large quantities of heavy fractions of fuel oils into products with a lower molecular weight. The process should be cheap, robust, and flexible, able to accept different feedstocks and to vary the product composition. There are no tight product specifications, and for a first unit, there is no competition from existing alternative processes.

V.1.B. Conceptual Design. Before we discuss a design, we need experiments to show that higher boiling oil fractions can be cracked into useful lower boiling fractions. This takes place when the oil is contacted with the catalyst at a temperature of 900-1000 °F. However, cracking experiments show that the catalyst cokes very rapidly and loses its activity within seconds. The amount of coke formed is large, 4-6% by weight of the total feed, while the catalyst required for the task is 4-10 times the weight of the feed. So, a reactor is needed that is able to (i) heat the oil very fast to 900-1000 °F (slow heating causes the oil to form more coke), (ii) contact the oil with fresh regenerated catalyst, and (iii) rapidly burn off the coke from the spent catalyst and return it to be contacted with more oil. This requires a technology that can contact and disengage large amounts of solids with liquids. Furthermore, the catalyst has to be alternately exposed to an oxidizing and to a cracking environment.10,11 One of the simplest such technologies is fluid particle technology, which allows rapid movement of small particles by gravity and pressure differentials. The process (Figure 5) has two connected fluid-bed reactors: a cracking reactor (or riser) and a


The process (Figure 5) has two connected fluid-bed reactors, a cracking reactor (or riser) and a regenerator. In the riser, the catalyst flows upward with the feedstock. The hot oil is fed to the bottom of the cracker. It both vaporizes and cracks, and the part vaporized continues to crack in the reactor while it is lifted to the outlet. At the top, the catalyst is separated from the vapor, stripped by steam from adsorbed hydrocarbons in a stripper, and then contacted in the regenerator with air to burn the coke formed during cracking. The hot, clean catalyst is recycled to the riser, and it also supplies the heat for vaporizing the feed and for sustaining the endothermic cracking. The catalyst circulation between the two reactors is driven by the pressure balance and the height of the catalyst bed in the regenerator. Because there are large amounts of heat to be transferred and the reactions are sensitive to temperature, the heat balance here is critical.

Next are two principles important to design with limited model information: (1) Avoid using a completely adiabatic heat balance. One does that in combustion, where temperature is not critical as long as it is high enough. (2) Avoid, if possible, any uncontrolled recycle. All dominant variables in the recycle stream and the recycle flows themselves should be independently controllable.

These principles completely define the conceptual design of the process. The catalyst recycle, which is the regenerator feed to the reactor and which determines the process outputs, is dictated by the reactor. We can only control the recycle from the regenerator. Furthermore, the reactor is endothermic and it is expensive to add indirect heat at high temperature, so the heat is supplied by the circulation of the hot catalyst. Therefore, the regenerator is designed to supply the reactor with regenerated catalyst in the amount and at the temperature required by the reactor. The only other way to change the heat balance in the reactor is to partially preheat the feed, but we are limited because high preheat would coke the feed. So, unless we want to complicate the reactor design by providing staged injection, the control of the reactor is limited to catalyst circulation, catalyst inlet temperature, and the state of the catalyst (described by the coke on the regenerated catalyst and a vector describing the catalyst properties). Because we are dealing with a catalyst promoting multiple reactions, if the catalyst ages or a new catalyst is introduced, the reaction rates are affected differentially; that is why the state of the catalyst is not described by a single constant but needs a vector. The advantage of fluid particle technology is that this vector can be independently controlled by providing the design with the ability to add and withdraw the catalyst or a mixture of catalysts. The other three parameters associated with the catalyst, temperature, coke level, and circulation rate, have to be independently controllable at the inlet to the reactor, and the design has to provide for this. That means the regenerator cannot be adiabatic because there is no a priori reason that the heat generated in the regenerator and the heat requirement of the reactor should match. For the desirable operating conditions, a design requiring such a match would strongly limit the performance and the reachable output space. In addition, one would need a large amount of detailed information to ensure that such a design is feasible.
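The central role of the catalyst circulation in this heat balance can be made concrete with a small illustrative calculation (Python). The heat duty per kilogram of feed and the catalyst heat capacity used below are assumed ballpark values, not data from the paper:

```python
# Illustrative riser heat balance: the circulating catalyst carries the heat needed to
# heat, vaporize, and crack the feed. All numbers below are assumptions for illustration.
CP_CATALYST = 1.1      # kJ/(kg K), order of magnitude for a cracking catalyst (assumed)
Q_FEED      = 800.0    # kJ/kg feed: sensible heat + vaporization + endothermic cracking (assumed)

def catalyst_temperature_drop(cat_to_oil, q_feed=Q_FEED, cp_cat=CP_CATALYST):
    """Temperature drop of the catalyst across the riser, in K."""
    return q_feed / (cat_to_oil * cp_cat)

for cat_to_oil in (4.0, 6.0, 10.0):
    print(f"cat/oil = {cat_to_oil:4.1f}: catalyst cools by ≈ {catalyst_temperature_drop(cat_to_oil):4.0f} K")

# The circulation rate and the regenerator temperature are therefore tied together by
# the heat balance, which is why the design must make them independently controllable.
```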
The coke on the regenerated catalyst is controlled by the air rate and the excess O2 in the outlet of the regenerator, but to control the temperature, the heat balance has to be separately controllable. One can remove heat by a cooler (using pressurized water, which generates steam) and add heat by preheating the air used for burning the coke.

Figure 6. Scheme of a modern FCC with a regenerator cooler.

A modified design to meet this need is given in Figure 6. The design of the first FCC, though mechanically different, had all of the capabilities of the design in Figure 6, which basically allows independent control of four inputs: catalyst circulation rate, air rate to the regenerator, cooling rate in the regenerator, and catalyst addition and withdrawal. Only the first three are available for dynamic control.

V.1.C. Design of the Control Loops. Having outlined the design, we can now discuss the potential control.

Inventory Control. A detailed discussion of the inventory control is outside the scope of this paper, but it presents no special challenges. One has to be able to control the feed rate, the pressure in the regenerator and in the reactor, the inventory in the catalyst cooler, the total amount of catalyst, and the pressure of the boiling water used for cooling. This has to be done fast and independently of the control of the dominant variables in the reactor.

Dominant Variables. Despite the FCC's complexity, we can determine its potential dominant variables by understanding the nature of the reaction and using the literature on similar systems. To design with minimum model information, we do not even have to ensure that these variables are dominant. It may be simpler and cheaper to provide for their control. This is an important general concept in process development and design: to compare the cost of the required information with the cost of compensating for the uncertainties by design.

Reactor. The feed rate (Ffeed) is dominant for all reactors, but it is part of the inventory control and cannot be used to control ŷp, though its variation is bounded by the design and the desired specs. Feed properties are dominant for the performance and ŷp, but the design should be able to allow their variation over a specific range. Because refineries have significant flexibility, this control plays an important role in the optimization of the total refinery. The reactor temperature (Trea) is dominant for FCC as for most reactors. The product distribution, the coke formed per pound of oil cracked, and all of the reaction rates are strong functions of Trea because the activation energies are all different.1,12,19 Furthermore, if the reactor temperature drops below a critical value (about 900 °F), the oil ceases to evaporate and the unit floods. Above a critical limit (about 1030 °F), undesirable thermal cracking takes place, coking the outlet pipes. Trea is dominant for ŷp but also for stability and nonlinearity and must be kept within narrow limits.


The reactor is an endothermic plug-flow design. It has a temperature profile decreasing along the reaction path, completely determined by the inlet temperature. Therefore, one can talk about Trea as a simple dominant variable. The reactor inlet temperature is hard to measure, but the reactor outlet temperature is a viable inferential measurement. The catalyst flow rate (Fcat or circulation rate) is dominant in two ways. Fcat determines the space velocity (or the amount of catalytic sites) and the heat balance. Coke on regenerated catalyst (Creg) has a strong impact on the catalyst activity.1,12,13,19 In many catalysts, it also has an effect on selectivity, but this is not observed in FCC catalysts. For control, we only need to know that it is a critical dominant variable. The inherent catalyst activity (A) is a function of the catalyst composition, and one can buy catalysts with different activities and selectivities. Even though the ability to control the activity is crucial, the deactivation rate has a time scale of weeks, much larger than that of the unit itself. Pressure (P) is potentially dominant because it affects the reaction rates and product distribution. However, for the present FCC catalyst, pressure within the operable range (15-150 psi) has a very small impact on the reaction rate and selectivity. This is a feature of these catalysts that, for a first unit, would have to be established by extensive laboratory research. Pressure is a critical inventory variable, but because it is independently controllable, it is not crucial for our design. There are potentially four independent dominant variables in the reactor: Fcat, Trea, Creg, and the inherent catalyst activity. Ffeed and feed composition are important control variables entering the supervisory control.

Regenerator. The regenerator temperature (Treg) is dominant and the only variable associated with nonlinearity. It affects the reaction rate and the reaction products and thereby the heat release per unit of coke burnt. Because, in the absence of a CO to CO2 combustion promoter, both CO and CO2 are formed, the ratio is a complex function of Creg and Treg. At temperatures below 1150 °F, the reaction rate becomes too slow, and the nature of the combustion changes. At high temperatures (above 1450 °F), the catalyst deactivates very fast because of recrystallization. Because the unit is partially adiabatic, it has multiple steady states, each with a different Treg. Therefore, Treg is a critical dominant variable for the stability of the FCC and has to be controlled within narrow limits. Coke on regenerated catalyst (Creg) is almost constant because the regenerator is nearly a continuous stirred tank reactor. It determines the reaction rate and the CO2/CO ratio and thereby the heat release. Partial Pressure of Oxygen [P(O2)]. The combustion rate is not only a function of Creg and Treg but also a function of P(O2) or, in other words, the excess air supplied. Coke on spent catalyst (Cspent) is the coke concentration on the catalyst fed to the regenerator from the reactor. Like any reactor feed, it is dominant but not controllable. The catalyst flow rate (Fcat or circulation rate) is dominant in two ways. First, it determines the heat balance because most of the heat released by the combustion is used to heat the catalyst. Second, it fixes the amount of coke burnt in the regenerator [Fcat(Cspent - Creg)].

Because Fcat is used to control the reactor, it is uncontrollable in the regenerator. Air flow in the regenerator (Fair) is a controllable input and dominant in the regenerator. Pressure affects the reaction rate but is not available for control in the regenerator because it is used for inventory control. There are six dominant variables in the regenerator: Treg, Fcat, Creg, Cspent, P(O2), and Fair, of which Treg is dominant for nonlinearity and stability. However, only two can be independently controlled because Fcat and Cspent are not available and Creg and P(O2) depend on Fair and Treg.

V.1.D. Stability. We can evaluate the potential for instabilities by understanding the nature of the reaction. We know that plug-flow reactors with these types of endothermic reactions have no known instabilities. However, the reactor can only be operated within narrow temperature limits. This is a nonlinear stability problem. The regenerator is a partially adiabatic reactor and has multiple steady states, at least three but potentially five because it involves a consecutive reaction C → CO → CO2. When Treg is tightly controlled, each steady state the unit operates in is stable, as long as we provide sufficient catalyst inventory and independent control of the temperature over a sufficiently large range. There are also other potential instabilities. The CO to CO2 reaction has, at some operating conditions, cyclic instabilities, but this is compensated for by the large thermal inertia of the bed and has no impact. In a fluid bed, the mass transfer between the gas and solid phases could give multiple steady states,26,39,40 but that has never been observed in the turbulent regime. Because the only important variable for the operation and stability of the system is Treg, the temperature of the solid phase in the regenerator, such multiple steady states would be irrelevant if the catalyst inventory is large enough. Thus, by independently controlling Treg and Trea, one ensures stability regardless of the exact model properties as long as the setpoints are feasible. Therefore, one needs to estimate the confidence limits of the heat balance and design the controllers to ensure that a desirable steady state is feasible.

V.1.E. Independent Degrees of Freedom. Any design has a limited number of independently controllable dominant variables. This determines the ability of the control to change ŷp. Not all independent dominant variables have to be controlled, and their number can be increased only by changing the process. For design purposes, we list only the variables controlled in the unit: Trea, Fcat, Creg, A, P, Treg, Fair, and P(O2). A total of eight, but not all are independent of each other. The dependence is determined by using heat and mass balances. So, if we fix Treg and Fair and leave Fcat and Cspent outside the control because their value is determined by the reactor, then P(O2), Creg, and the flue gas composition are uniquely determined for a given catalyst inventory and catalyst properties. So, we are left with six independent variables, while pressure, not dominant for the present FCC catalyst, is separately controlled and the inherent catalyst activity is controlled by an independent slow control loop. All that is required in the design is the capability of withdrawing and adding catalysts.
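The dependence described above can be illustrated with a toy steady-state coke and oxygen balance around the regenerator (Python). Every number below (flows, inventory, rate constant, stoichiometric factor) is a made-up illustrative value; the point is only that, once Treg (hidden in the rate constant) and Fair are fixed, Creg and P(O2) come out as dependent quantities rather than free ones:

```python
from scipy.optimize import fsolve

# Illustrative (made-up) operating data
F_cat   = 1000.0   # catalyst circulation, kg/s
C_spent = 0.010    # coke on spent catalyst, kg coke per kg catalyst
W_inv   = 2.0e5    # catalyst inventory in the regenerator, kg
F_air   = 4.5      # air flow, kmol/s
P_tot   = 2.5      # regenerator pressure, atm
k_burn  = 0.05     # combustion rate constant at the controlled Treg, 1/(atm s)
nu_O2   = 0.9      # mol O2 consumed per mol C burnt (between pure CO and pure CO2)

def residuals(x):
    C_reg, p_O2 = x
    burn_rate = k_burn * C_reg * p_O2 * W_inv                 # kg coke burnt per second
    coke_balance = F_cat * (C_spent - C_reg) - burn_rate      # coke in = coke burnt
    y_O2 = (0.21 * F_air - nu_O2 * burn_rate / 12.0) / F_air  # flue-gas O2 mole fraction
    o2_balance = p_O2 - P_tot * y_O2
    return [coke_balance, o2_balance]

C_reg, p_O2 = fsolve(residuals, x0=[0.005, 0.3])
print(f"Creg ≈ {C_reg*100:.2f} wt%, P(O2) ≈ {p_O2:.2f} atm")

# Changing F_air (with Treg still held) moves Creg and P(O2) together, which is why
# P(O2) can serve as an inferential estimate of Creg around a fixed operating point.
```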


We are left with four independent variables. Treg and Trea have to be given preference because they are crucial for the physical linearization and stability of the unit. From the rest, Fcat, Creg, P(O2), and Fair, only Fcat and Creg can be related by laboratory experiments to ŷp. At fixed Fcat and Treg, Creg is uniquely related to P(O2), and because Creg is hard to measure, one can use P(O2) as an inferential estimate. However, their correlation holds only around a fixed steady state, and the setpoint of P(O2) has to be cascaded to a slow measurement of Creg. While Creg can be predicted from laboratory experiments, P(O2) cannot, but once we independently fix Treg, we know at least qualitatively the impact of changes in Creg on the reactor outputs or ŷp. Thus, one can use an estimate for Creg, such as the flue gas temperature of the regenerator (Tfg) or P(O2), and see the directional changes of the setpoint by an overall steady-state model obtainable from laboratory correlation. We can also reliably estimate P(O2) from the temperature rise across the cyclones. Often one can simplify the problem by operating at very low Creg, where small changes in Creg no longer affect the reactor performance. This reduces the degrees of freedom for controlling ŷp. So, the choice of variables is totally determined by the considerations of nonlinearity, stability, and predictability. We also have to design manipulated variables that can directly control these independent dominant variables. We can independently control Treg by a cooler, Creg by the air rate, and Fcat by the catalyst circulation loop. If Treg and Fcat are fixed, Trea can only be independently controlled by the temperature of the feedstock, which has a very tight range. We are saved here by one fact: Treg is dominant for the regenerator and for stability but not directly for ŷp. For stable regenerator operation, it is enough to keep Treg dynamically within narrow limits. Its setpoint also has to be kept within the permissible range. Thus, the setpoint of Treg is available for controlling the steady-state value of Fcat. It is common practice to use the setpoint of a dominant variable to adjust the steady-state value of a more critical one. For dynamic control, we have three dominant variables, all of which should be controlled. Because Treg and Trea are critical for stability, the dynamic control has only 1 degree of freedom left, and because Creg is the other dominant variable in the regenerator that impacts it, it gets preference. We can only choose the inferential variable for Creg, i.e., P(O2). If Trea and Fcat are fixed but Treg is not independently controlled, there is no predictable relationship between P(O2), or another estimator, and Creg.19,26 This lack of correlation is one of the reasons that some control structures do not work,17,18,36 because in that case the proposed setpoints, either the air temperature across the regenerator or Tfg, are complex functions of the uncontrolled dominant variables.26

V.1.F. Structure of the Control System. Figure 7 is a schematic of a standard FCC, with the required manipulated variables. In choosing the structure, we determine the variables used for setpoints and which loops to put into the primary control structure and which into the supervisory one. A standard control scheme is given in Table 1. The control is determined by the design and the requirement of direct predictable control of the setpoints. We have to control Treg independently of the heat balance.
A cooler (or a heater, if the feed has too low a coke make) is the only way the design allows one to do so. For Trea, we have two choices in the primary loop: Fcat or Tfeed.

Figure 7. Scheme of a modern FCC with control.

Table 1. FCC Control Scheme (dominant variable(a) | manipulated variable)

Primary Control Loop
  riser bottom temperature | catalyst flow rate
  regenerator temperature | cooling flow rate
  partial pressure of oxygen | air flow rate

Secondary Control Loop
  catalyst flow rate | regenerator and feed temperatures (through the heat balance)
  coke on catalyst | partial pressure of oxygen
  catalyst activity | catalyst addition
  feed composition | refinery and crude oil management

(a) All of the dominant variables together are used to reach the most important goal in control, to meet the specifications.

The range of control is much larger for Fcat. The physical limits of Tfeed limit the change in the heat balance to less than 10% total, whereas Fcat can be changed by almost 100%; thus, Fcat is preferable. Also, the time scale of the reactor is in seconds, while the time scale of changes in Tfeed is 10 min or more, which is about 10 times slower than the time scale for adjusting Fcat. Thus, the control scheme is practically determined just by understanding the requirements of the process. Another important deviation from the classical control literature is that the pairing of the dominant with the manipulated variables is driven by the need to provide direct control of each dominant variable. We need either laboratory experiments or other knowledge showing that the manipulated variable has a strong predictable gain on the paired dominant variable, but we take no account of the gain matrix. In fact, the control structure presented is along the negative diagonal of the gain matrix and becomes unstable when one of the loops is opened. For example, if Treg is not controlled but Trea is controlled by Fcat, a change in Fcat also affects Treg. At some operating points, the sign of the loop changes and the system drifts to another stable operating point,19,41 with a totally different setting of the uncontrolled dominant variable. To keep it stable with one loop, one may have to change the sign. Operating along the positive diagonal would give totally unpredictable control. If all loops are opened and the manipulated variables are kept fixed, the system is stable, but this is only a temporary solution. Other ways to stabilize the unit, not critical at the design stage, can be found when the unit operates and one has better information and can introduce more complex algorithms for multivariable dynamic control. However, the setpoints themselves cannot be changed because they are essential for the supervisory control.
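The point about the negative diagonal can be illustrated with a hypothetical 2 × 2 steady-state gain matrix (Python; the gains below are invented for illustration and are not from an FCC model). The effective gain of the Fcat → Trea loop changes sign depending on whether the Treg loop is closed, and the corresponding relative gain is negative:

```python
import numpy as np

# Hypothetical open-loop steady-state gains, mapping [Fcat, Qcooler] -> [Trea, Treg].
K = np.array([[-0.5, -2.0],    # dTrea/dFcat, dTrea/dQcooler  (assumed)
              [-3.0, -4.0]])   # dTreg/dFcat, dTreg/dQcooler  (assumed)

gain_treg_open   = K[0, 0]                                 # Fcat -> Trea, Treg loop on manual
gain_treg_closed = K[0, 0] - K[0, 1] * K[1, 0] / K[1, 1]   # Fcat -> Trea, Treg held by the cooler
rga_11 = K[0, 0] * K[1, 1] / np.linalg.det(K)              # relative gain of the Fcat -> Trea pairing

print(f"Fcat -> Trea gain, Treg loop open   : {gain_treg_open:+.2f}")
print(f"Fcat -> Trea gain, Treg loop closed : {gain_treg_closed:+.2f}")
print(f"relative gain of this pairing       : {rga_11:+.2f}")

# A negative relative gain is exactly the "negative diagonal" situation described in the
# text: the pairing works while its partner loop is closed, but the loop gain reverses
# sign (and an integral controller winds up the wrong way) when the partner loop is opened.
```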


The supervisory control (Table 1) uses all of the setpoints for a nonlinear, nonsquare, model-based sampled-data control. Because the impacts of changes in the setpoints on ŷp are known from laboratory experiments, the information required is much less than that for a dynamic model. A proper choice of the setpoints is crucial. Once we build a model, we can estimate any other parameter; therefore, some other setpoints may have advantages. Here one faces a complex problem; the accuracy of different estimates in an imperfect model can differ by orders of magnitude. The impact of temperature on the combustion rate of coke in the regenerator is measured in a laboratory. If we use the laboratory data to build a complete model for the FCC, it will also predict the impact of a change in Tfg or ∆Tcyc. This prediction is obtained from the complex model, and an inaccuracy could not only cause a large error in the prediction but also predict the wrong direction of the change. That can never happen for the relationship between the coke combustion rate and temperature. That means, with a slight change in the FCC model, increasing ∆Tcyc can either increase the combustion rate or decrease it.19 Tfg, a variable used as a setpoint, has complex input and output multiplicities,26 and the sign of the control loop can easily change. This is very sensitive to the coking kinetics relative to the cracking kinetics. This is not true for the proposed control scheme because it is based on laboratory results and simpler direct physical dependence. This is a crucial issue that should be given much stronger attention when choosing linear control loops for nonlinear systems. The supervisory control also contains slow loops with setpoints. The most important are the inherent catalyst activity and catalyst properties. The inherent catalyst activity can be controlled by the addition and withdrawal of the catalyst. In a fluid bed, catalyst properties can be changed by changing the catalyst, for which today there is a wide variety of options. There are also catalysts that have an impact on ŷp. Thus, a small percentage of HZSM-5 increases light olefin production, but this has nothing to do with the design itself. In fact, in the first design, only one catalyst was available. Model identification from the nonlinear supervisory control is a challenging research area. Here, even simplified nonlinear steady-state models12 can be more helpful than simple statistical correlation models,4 but this has only a little to do with the design. The role of the design is to provide the required manipulated variables and control capabilities. To do so, we have to understand the dominant variables that are the basis for a solid correlation model. In that sense, our method is model-based.

V.1.G. Minimum Model Information. Now, we can determine the minimum model information required. The steps from V.1.A to V.1.C are not separable or successive; it is an integrated, iterative process, and to be able to explain it, I had to separate each issue. To design a process, one has to have either a catalyst or some concept of what to do. One designs a reactor for a catalyst. We are still far away from being able to design a catalyst for any thermodynamically feasible process. The assumption in this methodology is that one has conceived a process and checked to see if it is thermodynamically possible and economically viable. Few people realize that the economic viability of a new idea can be checked before one has any data. A methodology for evaluating designs before they become feasible has been discussed.42-44

In consulting, I looked at a large number of processes at the very early stage of attractive ideas and found that in 80% of the cases they were not attractive, even if they would work well. The next step is to prove the concept experimentally by looking at the design problems. This requires a few things: first, to show that a catalyst can crack oil, which had already been demonstrated in the Houdry process12 and required only a single laboratory experiment with a catalyst suitable for a fluid-bed reactor; second, to establish that fluid particle technology works on such a scale (this had been proven in the German fluid-bed coal gasifier, the Winkler process, the basis for the FCC technology10,11); third, to get cracking and combustion data for such small catalyst particles. Getting ballpark numbers for the reaction rates of both units was enough to check the viability of the design. Because the catalyst was cheap and the reaction rates were high, they did not worry about the catalyst life. A design today would require establishing control of product properties, acceptable catalyst consumption, and the ability to operate on a wide range of feedstocks. The first design and control structure was conceptually identical with that of Figure 7, though it was mechanically different. The regenerator was a riser reactor, and the catalyst transfer was achieved with airlifts. With fluid-bed technology, all of the questions raised in the paper can be answered in a small pilot plant, practically a laboratory-scale plant, but not all kinetic information is accessible in a small pilot plant at acceptable time scales and costs. Thus, the design approach focused on the early available information, which is sufficient for the design in Figure 7. Crucial information hard to obtain in a pilot plant is related to the catalyst itself. The advantage of fluid particle technology is that the catalyst can be replaced and kept at constant conditions, but this advantage has a strong drawback. In a fixed-bed reactor, we can demonstrate catalyst properties over a long lifetime in a small laboratory reactor. While a fixed bed gives a worse performance, it eliminates this specific scale-up risk. In a fluid bed, obtaining steady-state catalyst aging data requires an experiment about 3 times as long as the catalyst life. There are ways to simulate the impact of aging, but it is risky. The main risk is how a severely aged catalyst, 2 or 3 times as old as the average catalyst life, behaves in the unit. If this catalyst just has low activity, there is no real risk, but if aging changes the selectivity, this can require much higher catalyst addition rates. This is accessible by proper tests but requires an investigation. This is a minimum information problem for the process viability. In the FCC, undesirable selectivity changes are not due to aging but to the deposition of impurities, such as nickel and vanadium, on the catalyst, promoting undesirable side reactions. For such a case, a fixed bed has no advantage; in contrast, the ability to constantly replace the catalyst is a great advantage for the fluid bed. For the FCC, there is another important piece of kinetic information not accessible in a laboratory reactor. While the rate of coke combustion is measured in the laboratory, the flue gas composition, and therefore the heat balance, is a strong function of the CO2/CO ratio. Combustion to CO generates less than half the heat, but this reaction is complex. The coke combustion generates CO and CO2; the reaction is a known function of temperature.12,26
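The sensitivity of the heat balance to the CO2/CO split can be made explicit with the standard heats of combustion of carbon (a sketch in Python; coke is treated here as pure carbon, which is an approximation):

```python
# Heat released per kg of carbon as a function of the fraction burnt all the way to CO2.
DH_C_TO_CO  = 110.5   # kJ per mol C, C + 1/2 O2 -> CO
DH_C_TO_CO2 = 393.5   # kJ per mol C, C + O2 -> CO2

def heat_per_kg_carbon(fraction_to_CO2):
    dh = fraction_to_CO2 * DH_C_TO_CO2 + (1.0 - fraction_to_CO2) * DH_C_TO_CO   # kJ/mol
    return dh / 0.012                                                           # 12 g/mol C

for x in (0.0, 0.5, 0.8, 1.0):
    print(f"CO2 fraction {x:.1f}: ≈ {heat_per_kg_carbon(x)/1000:.1f} MJ per kg coke burnt")

# Full combustion to CO2 releases roughly 3.5 times the heat of combustion to CO only,
# so an uncertain CO2/CO split in the regenerator translates directly into a large
# uncertainty in the heat balance unless the design (e.g., a cooler) can absorb it.
```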


In the absence of any walls or catalyst surfaces, combustion of CO to CO2 is almost instantaneous. The CO combustion is a free-radical reaction. If the distance between the walls or catalyst particles is less than the mean free path of a radical, the free-radical propagation stops at the catalyst surface and the reaction rate strongly decreases. Because a laboratory reactor has walls, any reliable kinetic measurement requires reactors of at least 4-6 in. diameter. A fluid-bed pilot plant with the same catalyst densities would be very tall and prohibitively expensive. Any impurities deposited on the catalyst could inhibit or promote CO combustion. The difference in total ∆H could be about 20%.26 Thus, we cannot reliably predict a heat balance for the regenerator, which is a critical parameter in an adiabatic reactor. In the design of Figure 7, this is irrelevant; all we need is enough cooling capacity to take care of the uncertainties. Introducing a cooler may cost, for a large FCC, $10-20 million. A reliable heat balance for a first reactor would cost $50-100 million for a large pilot plant and take more than 1 year. The value of a control system with maximum capability is much larger than the incremental cost of the cooler. This is an example of how design can reduce the cost of the minimum information required. It also shows that it is a big advantage for a first design to find an application where the scale of the design is not very large. The cost of the pilot plant is independent of the plant size. The cost of a first demonstration plant is not, and the plant has to have a viable operation and pay for itself. The only critical information required for the regenerator, the rate of coke combustion as a function of temperature and P(O2), is easily obtainable in the laboratory. For the reactor, we need the impact of the dominant variables on ŷp and Cspent. In the design of Figure 7, the amount of coke formed on the catalyst is not critical for ŷp. The important properties are the conversion and the yields of the different fractions (gasoline, fuel oil, C3 and C4, CH4, H2, and coke) as a function of Fcat, Tmix, Creg, and the inherent catalyst activity. An example of the information that we can get in a laboratory reactor is given in Figures 8 and 9, where Fcat and Treg are shown to have a strong impact. The residence time of the catalyst/oil mixture has an impact but less than expected because the catalyst deactivates very fast.12 The model12 is based on data obtained after many years of operation and research by Mobil. Still, this model could be highly inaccurate for a new catalyst. However, for the design in Figure 7, one could obtain the required information by a few laboratory runs, and the design would work regardless.

V.1.H. Minimum Information for Scale-up Related to the Design. The regenerator gives no problem. We know how to design a well-mixed fluid bed with a good catalyst, but for a large unbaffled reactor, there is backmixing, bypassing, etc. A fluid bed, operating in the turbulent regime, has a high catalyst density in the dilute phase (0.5 lb/ft3) and a large recycle through the reactor. This reduces the scale-up risk because air that bypasses the dense bed will react in the dilute phase. It also eliminates the risk of multiple steady states due to heat transfer between the two phases. The circulation time through the dilute phase and the cyclones is on the same order as the residence time in the dense bed.

Figure 8. Effect of Fcat on conversion for an isothermal reactor: (a) clean catalyst at different Trea values; (b) reactor at 960 °F at different coke levels.

Figure 9. Impact of the residence time on conversion for an isothermal reactor at 960 °F and a Fcat/Ffeed of 8.3.

Because the heat transfer in the dilute phase is very high, the two phases are in equilibrium and the difference between gas and solid temperatures is quite small. However, the design has to ensure that the catalyst is well distributed so that the coke concentration is fairly uniform. Two papers39,40 assume a regenerator with a large temperature difference between the phases, but in modern fluid beds, which are turbulent, such differences do not occur.


Even though this knowledge has been available for a long time, not all present designs have this property. The reactor is more difficult. First, it is adiabatic, but the laboratory reactor is isothermal, and a real adiabatic pilot plant is large and expensive. We can estimate the effect of the nonadiabatic nature from the results in Figure 8. Furthermore, the laboratory reactor is plug-flow, which is hard to achieve.23,24 It is difficult to baffle or stage a riser reactor at these temperatures and flow velocities. All we can do is ensure good flow distribution at the bottom of the reactor. Also, the catalyst flow to the reactor is large and hard to distribute uniformly in a short time. The mixing time is of the same order of magnitude as the residence time. Because the vapor flow is due to evaporation and cracking, the bottom of the reactor is not completely mixed and will have strong local differences in temperature and composition. One can improve the mixing by introducing a gas at the bottom of the riser, either steam, propane, or butane (which are quite inert at this temperature), to create a fluid bed and reduce the mixing time. A necessary piece of minimum model information is to bracket the impact of nonuniformities by laboratory experiments at high temperatures, different catalyst/oil ratios, and residence times. This is necessary if product specs are important. There is a shift in product distribution because high temperature promotes the formation of C2-C4 olefins. These are valuable for petrochemicals and alkylation. If the specs were critical, the design would be very risky, even with a large pilot plant. The only way to a safer design would be to build a smaller reactor, say one-fifth the size, and use multiple parallel risers. This is not very expensive compared to the total FCC cost because the reactor is only a small fraction of the total cost. Alternatively, one would have to look for designs that scale better. This is the realm of reactor design.23,24

V.1.I. Practical Experience and Alternative Designs. The first FCC (8000 bbl/day) was built in 1938. It went on stream in 18 months, including the development of the concept, experiments, design, and construction. No pilot plant was built, and the unit performed satisfactorily immediately. One year later, 10 more units were in operation. The companies involved (Exxon, Shell, and Texaco) would have a hard time today building a pilot plant together or passing a management decision to build one in 18 months. The mechanical design of the unit and the reactors changed strongly, but as long as the same control schemes were used, all designs performed well. After about 20 years, it was found that one could operate the unit without the cooler, and new designs came in, first without the cooler and then without dynamic control of the catalyst circulation. This design, cheaper and easier to maintain, was widely adopted.11 At that time, feeds were mostly light crude oils with a very constant composition. There was no need to use a higher boiling fraction or to increase conversion because lead addition controlled the octane of the gasoline. There were limited specs on the gasoline composition. Later, feeds became heavier, and decreased gasoline consumption required cracking of higher boiling fractions of the crude oil. This made the design without a cooler and without catalyst circulation control inadequate because it was not able to achieve the desired conversion. The first change was to reintroduce control of the catalyst circulation.10

In an adiabatic unit, when the inherent coke make of the feed increases, the only way to maintain the heat balance is to reduce the coke make by adjusting the catalyst activity.19 Present catalysts (Y zeolites) are self-adjusting because a higher Creg reduces the cracking and coking activity, while an HZSM-5 type catalyst is not self-adjusting because its activity is much less sensitive to coking; without a cooler, this unit would not be operable with such a catalyst. However, a self-adjusting catalyst strongly reduces the capability to control ŷp. It sounds so simple that the fact that it was not recognized sounds dumb. I was one of the many “dumb” people who worked for years on the problem, never recognizing it. One reason is a tendency among control and reaction engineers in industry: if one needs to improve a process or solve a problem, one always tries to do so without introducing significant changes in the unit, whereas improving the control often means adding a manipulated variable or, in other words, a design change. This philosophy is so ingrained in the system that one does not even try to step out of the process design and look at the whole problem. I only did this when I used the FCC in my academic research as an example for the method. Nobody (including myself) recognized that the self-adjustment of Creg is essential for stability in an adiabatic unit. The purpose of controlling excess O2 [or Tfg or ∆Tcyc, which are inferential estimates for P(O2)] was to keep Creg constant despite the fact that more coke is formed. However, in an adiabatic unit, one is not allowed to control Creg. To operate an adiabatic unit at a constant Trea requires that the amount of coke burnt off in the regenerator stays fairly constant. Otherwise, one would generate more heat than the reactor needs. Because coke make is related to conversion, an increase in the inherent coke make of the feed requires one to lower conversion enough to keep the coke make constant, which in a self-adjusting unit is achieved by increasing Creg. If the air rate is controlled by Treg, the amount of coke combusted stays fairly constant. So, if more coke is formed, there is no air to combust it and Creg will increase. Controlling Treg keeps the coke burnt and the heat generated equal to what the reactor requires. If we want to keep Creg constant, we have to increase the air rate; at constant Treg, controlling the air rate by the excess O2 in the flue gas is a good way to do that. ∆Tcyc allows an easy measurement of the excess air. In the presence of catalyst, CO combustion is inhibited. Once the catalyst is removed in the cyclone, the oxygen instantaneously reacts with CO, and therefore the temperature rise across the cyclone is directly proportional to the excess O2 in the flue gas. One can also use Tfg for a less accurate estimate. This example illustrates that a complete physical understanding of the process is essential for my design method and hard to obtain from a linear model. In an adiabatic reactor, there is no place for the extra heat, which would crash the unit. Luckily, there is a way for a self-adjusting catalyst to still keep the heat evolved constant. The amount of coke combusted, and therefore Creg, is not just a function of the excess O2 but also of Treg, which controls the rate of coke combustion. Therefore, if the excess O2 increases, Treg has to decrease to keep the coke make constant. If in a given catalyst that does not happen, the unit would crash, and very few catalysts, aside from the typical FCC cracking catalysts, have this property.
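A toy calculation (Python, with made-up functional forms and numbers) illustrates what self-adjustment means here: the adiabatic heat balance fixes the coke that may be burnt, so when the feed's coking tendency rises, Creg must rise until the catalyst activity has dropped enough to bring the coke make back to that fixed value:

```python
from scipy.optimize import brentq

COKE_ALLOWED = 5.0   # coke the adiabatic heat balance allows to be burnt, t/h (assumed)

def coke_make(c_reg, feed_factor):
    """Coke actually made, t/h: the feed's coking tendency times a catalyst activity
    that decays with coke on the regenerated catalyst (assumed, Y-zeolite-like form)."""
    activity = 1.0 / (1.0 + 80.0 * c_reg)
    return 8.0 * feed_factor * activity

for feed_factor in (1.0, 1.3, 1.6):          # progressively heavier, more coke-prone feeds
    c_reg = brentq(lambda c: coke_make(c, feed_factor) - COKE_ALLOWED, 1e-6, 0.2)
    print(f"feed factor {feed_factor:.1f}: Creg settles at ≈ {c_reg*100:.2f} wt%")

# A catalyst whose activity is insensitive to Creg has no such solution at a reasonable
# Creg, which is why the adiabatic design only works with a self-adjusting catalyst.
```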

Table 2. Different Choices for the 2 × 2 Control Matrix

case | first pairing (manipulated → dominant) | second pairing (manipulated → dominant)
1st  | Fair → Treg   | Fcat → Trea
2nd  | Fair → Tfg    | Fcat → Trea
3rd  | Fair → ∆Tcyc  | Fcat → Treg
4th  | Fair → ∆Tcyc  | Fair → Trea

If Treg is lowered too much to keep constant the amount of coke burnt (when the inherent coking rate increases), the combustion rate becomes so low that the catalyst residence time in the regenerator is too short to combust the needed coke and the catalyst cokes up, causing the unit to crash. When a refinery was operated at a constant low-coking feed, there was no problem, but when the FCC was used to process higher coking feeds, it had constant crashes. A good operator overrode the control, keeping Treg constant by readjusting the setpoint of ∆Tcyc whenever Treg decreased, while a less experienced operator had frequent crashes until air control by Treg was reintroduced. This instability started a whole series of papers trying to explain it by linear stability models. This sounded sensible to me at that time. Now I realize that the instability is simply a loss of a viable steady state, which is different from a linear instability and is not predictable from linear models. The first theoretical paper15 used optimal control to derive a control based on a different 2 × 2 matrix; see Table 2, in which all setpoints are regenerator variables. Linearly, it is a very stable control for the linear model it was designed for. The scheme controls Treg and Creg. To reduce conversion at constant Creg, the reaction rate is lowered by decreasing Trea. No law of catalysis or thermodynamics says that this should happen, but the self-adjusting property of the FCC catalyst causes Trea to decrease. This is even worse than decreasing Treg. If Trea becomes too low (the required decrease is quite small1,12), a heavy feed no longer evaporates, and the unit not only crashes but may flood, creating a far more expensive mess than a simple crash. Luckily, nobody ever tried this scheme. While I did not realize this, I realized another severe drawback. In any complex reaction, Trea controls not only conversion but also product composition and properties. One cannot control a reactor with a fluctuating temperature. The goal of control is most of all to meet specs and keep the reactor stable; fast response and dynamic optimization are secondary goals. This has sometimes been neglected.15 If the manipulated variables for dynamic control are the air rate and the catalyst circulation rate, the only stable scheme is Fair → Treg, Fcat → Trea.1,19 Linear control theory allows good tuning of this scheme. One can also use multivariable control to improve the response, but the control of the setpoints has to be Treg and Trea with the above coupling. This is important for nonlinear stability and for supervisory control, which is essential in plant control. However, I know of no way to derive this conclusion from linear control theory. This scheme also violates another linear design concept. The pairing is along the negative diagonal of the gain matrix. The unit will become unstable if the regenerator is put on manual control with the reactor staying on automatic control because this reverses the sign of the gain in the loop Fcat → Trea.41 Our 3 × 3 scheme has the same problems, but the only option would be to give up dynamic control.

Operating along the positive diagonal of the gain matrix would make the unit inoperable, introducing unpredictable model-sensitive interactions and making reactor control unacceptably slow (hours instead of minutes). Thus, while most processes are less complex, this example illustrates what can go wrong in applying linear control theory to determine the control scheme of nonlinear processes. This does not mean that a linear gain matrix is not useful. It provides very valuable insights. For example, it allows one to realize from steady-state data that a control structure becomes unstable if one loop is opened. As in our example, one might have no choice because the positive diagonal has the wrong time constant. Furthermore, the control action in the desired variable is no longer directly predictable. The problem is not just the linear gain matrix. The whole concept of choosing the loops in a multivariable control system from the total matrix by linear control theory, or by optimization based on it, has little relation to actual practice. A designer starts with a variable that needs to be controlled either directly or by inferential control; he then finds a suitable manipulated variable to control it. This involves a large expense and often a heated discussion about whether it is justified; the goal of this expense is steady-state control to meet specifications and stability requirements. To justify the expense, one has to have clear evidence that this loop will do what is required in a predictable, model-insensitive way. There is little place for any change in the primary loop once it is built, aside from better measurement devices or a better direct inferential variable. There is a place for a better multivariable algorithm preserving the same setpoints, but the example illustrates that the choice of the loop itself cannot be made solely on the basis of linear considerations. This applies to many other uses of linear control theory. In our method, the choice of the pairing and the setpoints is obvious. Treg and Trea are the two dominant variables driving nonlinearity and instability and have to be kept within a narrow range to physically linearize and stabilize the process. Because our method requires direct and predictable control of all dominant variables, the pairings are obvious. This would not be feasible if the pairing were reversed to be on the positive diagonal of the gain matrix. However, while the structure Fair → Treg, Fcat → Trea gives stable operation, it has a large economic penalty compared to the 3 × 3 structure discussed before. The need to keep the unit adiabatically heat balanced reduces the capability of the supervisory control and the accessible product space. Figure 10 compares the accessible product space for the two units. The accessible space in both cases can also be modified by changing the catalyst activity and, even more, the catalyst properties, but the 3 × 3 scheme will always give a larger accessible space and pay for itself. Let me illustrate this with numbers. The incremental investment for the 3 × 3 scheme over the 2 × 2 is about $30 million for a 100 000 bbl/day FCC. For a new unit, the pilot plant to prove the 2 × 2 scheme would cost more. However, because the design was based on an operating unit, we only have to look at the penalty of a smaller accessible space, which is between $30 and 60 million/year for a single unit. For most processes, designing for a larger accessible space reduces the risk of a new design and returns the investment by better control.
No advanced algorithm can match these gains. In contrast, maximum control of the dominant variable will make optimization schemes far more effective.


Figure 10. Impact of design (comparison of achievable conversion and wet gas for a unit with a cooler and without): (A) adiabatic; (B) nonadiabatic. The nonadiabatic unit gives much better conversion and yields and better control of stability and specification.

Most new FCCs use a nonadiabatic design with a cooler to independently control Treg and Creg. Many new designs also utilize a CO combustion promoter that allows one to operate with full CO combustion, requiring a somewhat different control scheme based on the same principles.19

V.2. Design and Control of a Continuous Crystallizer. V.2.A. Overview. A second example is the design and control of continuous crystallizers. I have been involved with this process several times in the last 40 years. It is different from the FCC in many ways. First, the technology of crystallization is much older, practically prehistoric. More important, while the FCC is a very specific process, crystallizers deal with a large range of materials with widely varying properties and specs. Still, most crystallization processes share some features. They all involve a nucleation step followed by a growth process. In most cases, the nucleation is a complex, highly nonlinear process. A variety of polymerization processes also share this feature. Another common feature is that it is nearly impossible to get a reliable nonlinear model for nucleation and often even for growth. However, we can design and control them well with some minimum model information, which makes this an excellent example for our design methodology. While for many chemical reactors we do not have a reliable detailed model at the time of design, getting such a model for an available unit is quite feasible, only costly and often not justified. Because many of my colleagues have trouble with this statement, let me clarify it. The term modeling is quite generally used. In a former paper,4 I tried to define the different types of models. Any competent engineer will use modern modeling tools for most reactor design. A heat and mass balance and overall kinetic relations are also a model. The question is accuracy and reliability. Luckily, there are many simple processes for which such models are reasonably accurate. It is the exceptions that are the problems and which a good designer has to be able to recognize and deal with. An FCC is quite a simple process, and there were models in 1970, but they did not explain the instabilities. The original design and control structure, as well as the proposed one, is model-insensitive and can operate with a wide range of

catalysts. The adiabatic design is model-sensitive, and despite the available models, many companies have not yet realized that some of the catalysts that are being developed would not function in an adiabatic unit.45 The kinetic models we use for catalytic reactors are not fundamental and do not include the actual reaction steps on the surface of the catalysts. However, they can reliably describe the impact of all process variables on the output. Thus, within the domain of the experiments, they are reliable correlation models. However, this is not true for crystallizers. Nucleation is such a complex process that it is practically and economically not feasible to get a real, reliable kinetic model. I coauthored the first quantitative model for continuous crystallizers introducing population balances to model particulate processes. When I did this, I believed that this is an approximately correct physical model. However, later I learned how far it is from reality. For example, one underlying control assumption, namely, that nucleation and growth are solely a function of supersaturation, turned out to be inherently incorrect. Because it is still useful as a learning model and to scale such processes, this example illuminates both the limits and the advantages of modeling and the real needs of design. So, the challenge is how to develop a design, which does not require the availability of detailed information for the nucleation step. We will focus here not on a complete design but on showing how this can be done. The concepts discussed are useful not only for a wide range of crystallization processes but also for many polymerization processes, such as dispersion polymerization,46,47 which have the same general features. V.2.B. Learning Models for Crystallizers. In many linearized reactors, it is possible to get reliable phenomenological models that, while correlation-based, describe the observable macroprocesses very well, at least within the space of experimental data, and translate this to predict reliably the output of the process. For processes involving simultaneous nucleation and growth, this is much more difficult. Let me explain the difficulty by looking at some simple models of nucleation and growth. The first quantitative dynamic model based on population balances was published in 1967 by Professor Katz and myself,8 in the context of our work on modeling population balances for particulate systems. It is based on a material balance for the solute and a population balance for the crystals.

d[εc + (1 - ε)ρ]/dt = c0/θ - [εc + (1 - ε)ρ]/θ    (3a)

Here, θ = V/ω is the residence time, ω is the feed rate, V is the crystallizer volume, c is the solute concentration in the crystallizer, c0 is the feed concentration, ε is the fractional volume occupied by the solution, and ρ is the crystal density. We have assumed that the feed does not contain any seed crystals. A balance of particles of size r is given by

∂f(r,t)/∂t + ∂{G[(c - cs), f(r,t)] f(r,t)}/∂r = B[(c - cs), f(r,t)] δ(r) - f(r,t)/θ    (3b)


Here, cs is the saturation concentration of the solute at the given temperature; f(r,t) is the particle size distribution at time t such that f(r,t) dr is the density of crystals (number of crystals per unit volume) having radii in the range r, r + dr; B[(c - cs), f(r,t)] is the nucleation rate, i.e., the number of new crystals formed per unit time per unit liquid volume; and G[(c - cs), f(r,t)] is the growth rate dr/dt of an existing crystal. It is assumed that new crystals appear as a Dirac delta function δ(r), or that all new crystals are formed at a nominal size r = 0. To avoid confusion, I have to point out that the nucleation rate B in eq 3b is not what a chemist or physicist considers nucleation but the physically observable rate of formation of new particles above a critical size (rcrit). It further assumes that rcrit is small compared to the average particle size and that particles above rcrit only disappear by being washed out of the system. Similar considerations apply to G(·), which is not necessarily simple deposition of solute onto the surface by diffusion. Both B(·) and G(·) could have very complex multistep mechanisms involving conventional nucleation, diffusional growth, agglomeration, etc. This is similar to what a chemical reaction engineer does when dealing with catalytic chemical reactions. He models reaction rates observable at the macroscale, each of which could involve 3-50 elementary reactions. If all we have are macroscale observations, there is no way to determine the actual reaction mechanisms that lead, at the microscale, to the overall reaction rate.5 The same applies even more so to crystallization. However, in our first attempt we assumed that B(·) and G(·) are only functions of supersaturation. I only later realized that this is inherently wrong because very small crystals, as formed by Volmer nucleation, are unstable in the sense that they agglomerate, and the agglomerated crystals bind to each other by solute deposition. Let us first discuss the simplest model. Both B(·) and G(·) are complex functions of supersaturation (c - cs), size distribution f(r,t), and many other, often unknown, factors such as agitation and mixing. One cannot determine B(·) and G(·) from simple experiments. In their simplest forms, B(·) and G(·) are functions of supersaturation (c - cs) only, although this is rarely found to be true in practice.9

B = kB(c - cm)^n    (4a)

G = kG(c - cs)    (4b)

where cs is the saturation concentration of the solution in equilibrium with the inlet phase, cm is the so-called metastable limit below which no nucleation occurs, kB is the kinetic constant for the nucleation rate, kG is the kinetic constant for the growth rate, and n is some power. Equations 4a and 4b are well-known simplifications of the more complex theoretical nucleation expression. While there is no rigorously defined metastable limit, the simplified concept is still approximately correct even for the more complex models. Below a critical supersaturation, self-nucleation becomes very slow. If the concentration is above the saturation concentration but below the metastable limit, a crystal introduced into this supersaturated solution will grow. With this simple approximation, eqs 3a and 3b become solvable for linear stability and allow nonlinear simulation. One can show48,49 that if d ln B/d ln G > 21, the system becomes linearly unstable.
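A minimal sketch of such a simulation, using the moments of f(r,t) instead of the full population balance together with the simple kinetics of eqs 4a and 4b, is given below (Python). All parameter values are illustrative assumptions; with a nucleation order high enough that d ln B/d ln G exceeds the threshold of about 21, integrating these equations can show the sustained cycling in crystal size described next:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumed, not fitted to any real system)
theta  = 3600.0      # residence time V/omega, s
c0     = 130.0       # feed concentration, kg solute / m3
c_s    = 100.0       # saturation concentration, kg/m3
c_m    = 100.5       # metastable limit, kg/m3
k_G    = 1.0e-8      # growth rate constant, (m/s) per (kg/m3)
k_B    = 5.0         # nucleation rate constant, #/(m3 s) per (kg/m3)^n
n      = 25          # nucleation order, so that d ln B / d ln G >> 21
rho_kv = 1500.0      # crystal density times volume shape factor, kg/m3

def odes(t, y):
    m0, m1, m2, m3, c = y                      # moments of f(r,t) and solute concentration
    G = k_G * max(c - c_s, 0.0)                # eq 4b
    B = k_B * max(c - c_m, 0.0) ** n           # eq 4a
    dm0 = B - m0 / theta
    dm1 = G * m0 - m1 / theta
    dm2 = 2.0 * G * m1 - m2 / theta
    dm3 = 3.0 * G * m2 - m3 / theta
    # Solute balance (dilute-solids approximation): growth consumes 3*rho_kv*G*m2
    dc = (c0 - c) / theta - 3.0 * rho_kv * G * m2
    return [dm0, dm1, dm2, dm3, dc]

y0 = [1.0e10, 5.0e5, 2.5e-2, 1.25e-6, 101.0]   # a small seed population of ~50 µm crystals (assumed)
sol = solve_ivp(odes, (0.0, 50 * theta), y0, method="LSODA", max_step=theta / 20)

mean_size_um = 1e6 * sol.y[1] / np.maximum(sol.y[0], 1.0)   # number-mean crystal size, µm
late = sol.t > 0.8 * sol.t[-1]
print(f"number-mean size over the last part of the run: "
      f"{mean_size_um[late].min():.0f}-{mean_size_um[late].max():.0f} µm")
# A persistent spread between the min and max indicates a limit cycle; plotting
# mean_size_um against sol.t makes the cycling (or its absence) obvious.
```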

Dynamic nonlinear simulation at unstable conditions shows the existence of a limit cycle in particle size. The form of the nonlinear limit cycle and the cycle time are very similar to actually observed industrial behavior, and so were the observed size distributions. This gave us a false confidence in the correctness of the model. The learning model provided critical information for design and control. Instabilities and dynamic behavior in processes with simultaneous nucleation and growth occur as a result of an internal feedback loop between the nucleation and the properties of the crystal magma (the suspension of solids in the solution). This feedback process can be explained by this simple learning model and is really all we need to come up with a good control. At equilibrium, there is a steady production of nuclei, followed by growth and removal as crystalline product. However, under certain operating conditions,8,9,47 the nucleation rate is extremely sensitive to positive deviations from the equilibrium solute concentration, leading, in the short term, to a very large production of nuclei. Because the nucleated crystals are small in size, there is a significant time lag before they grow large enough to impact the total surface area and thereby the supersaturation. At this point, further growth results in consumption of the solute and reduction of the supersaturation, thereby leading to a drop in the nucleation rate B(·), which depends on the supersaturation. Continuous withdrawal of the crystalline product reduces the available area for growth, and continuous addition of fresh feed raises the solute concentration. This gives rise to a large number of nuclei and a repetition of the cycle. This instability manifests itself in the form of cyclic variations in the mean crystal size, leading to production of a nonuniform, off-spec crystalline product. A sudden large amount of small crystals can clog the filters and sieves, causing severe production problems. One can infer that the nucleation rate B(·) is the single process variable that has a direct impact on stability and the achievable product particle size distribution. B(·) can be considered a dominant variable for the control. Unfortunately, it is not possible to directly measure B(·), nor is it possible to identify the functional form of B(·). However, variations in B(·) lead to varying amounts of nuclei. Because it takes time for the fines to grow, controlling the fines population is all we need to break this internal feedback loop and stabilize the process. The concentration of small crystals, or fines, below a certain critical size rc, which we call cfines, can be used as a good inferential estimate for B(·). By directly controlling cfines without direct interaction with other process variables, one can stabilize the crystallizer and directly control the achievable particle size distribution f(r,t), which determines the crystal product properties in the performance vector ŷp. One such method for independent control of ŷp is a nuclei or “fines” trap,47,50 as shown in Figure 11. Only fine particles of less than a certain size reach the top of the baffle, where a stream is drawn off to a heater at a flow rate ω0, while large particles settle at the bottom. By measuring the concentration and size of the fines in the primary fines control loop (by light absorption or using a particle counter), one can adjust the flow ω0 through the trap, thereby regulating the effective nucleation rate.
The fines are then dissolved by heating, and the clear solution is recycled to the crystallizer.
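To make this internal feedback loop concrete, the following is a minimal numerical sketch of such a learning model: an ideal mixed-suspension crystallizer reduced to its moment equations, with power-law kinetics chosen so that d ln B/d ln G equals the nucleation exponent i. It is only an illustration of the mechanism discussed above; the moment closure, the power-law forms, and every number are assumptions of this sketch and are not taken from the paper or from any industrial unit.

# Minimal learning-model sketch (not the author's model): moment equations of an
# ideal mixed-suspension crystallizer with assumed power-law kinetics
# G = kG*s and B = B0*(s/s_ref)**i, so that d ln B / d ln G = i.
# All parameter values are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

tau   = 1.0      # draw-down time V/omega, used as the time unit
c0    = 1.10     # feed solute concentration
cs    = 1.00     # saturation concentration
kG    = 1.0      # growth-rate constant
s_ref = 0.01     # reference supersaturation
B0    = 4.5e4    # nucleation rate at s = s_ref
ka    = 1.0      # lumped factor (3*rho*kv) in the solute balance

def rhs(t, y, i_exp):
    m0, m1, m2, m3, c = y                 # number, length, area, volume moments; solute
    s = max(c - cs, 0.0)                  # supersaturation
    G = kG * s
    B = B0 * (s / s_ref) ** i_exp         # very steep for large i_exp
    return [B - m0 / tau,
            G * m0 - m1 / tau,
            2.0 * G * m1 - m2 / tau,
            3.0 * G * m2 - m3 / tau,
            (c0 - c) / tau - ka * G * m2]  # solute consumed by growth on the area m2

def steady_state(i_exp):
    # bisection on the steady-state solute balance for the supersaturation s
    lo, hi = 1e-9, c0 - cs
    for _ in range(200):
        s = 0.5 * (lo + hi)
        f = (c0 - cs - s) - 2.0 * ka * B0 * (s / s_ref) ** i_exp * (kG * s) ** 3 * tau ** 4
        lo, hi = (s, hi) if f > 0.0 else (lo, s)
    G, B = kG * s, B0 * (s / s_ref) ** i_exp
    m0 = B * tau
    m1 = G * m0 * tau
    m2 = 2.0 * G * m1 * tau
    m3 = 3.0 * G * m2 * tau
    return np.array([m0, m1, m2, m3, cs + s])

for i_exp in (5.0, 25.0):                 # below and above the ~21 threshold cited above
    y0 = steady_state(i_exp)
    y0[0] *= 1.05                         # small perturbation of the nuclei number
    sol = solve_ivp(rhs, (0.0, 60.0 * tau), y0, args=(i_exp,),
                    method="LSODA", rtol=1e-7, atol=1e-10, max_step=0.05)
    Lbar = sol.y[1] / sol.y[0]            # number-mean crystal size m1/m0
    tail = sol.t > 0.5 * sol.t[-1]
    swing = (Lbar[tail].max() - Lbar[tail].min()) / Lbar[tail].mean()
    print(f"i = {i_exp:4.0f}: relative swing of the mean size over the last half = {swing:.3f}")

For an exponent well above the threshold of about 21, one expects the perturbation to grow into a sustained cycle of the mean size rather than decay, which is the behavior described above; physically removing or dissolving the fines, as the trap in Figure 11 does, breaks exactly this loop.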

Figure 11. Partial control of a crystallizer with a fines trap.

The setpoint cfines for the fines concentration in the supervisory control is derived from sampled-data measurements of the product property vector ŷp and adjusted in the steady-state secondary control loop (not shown in the figure). Thus, by controlling the fines concentration, as an inferential estimator for the nucleation rate, and using the fines flow rate ω0 through the nuclei trap, we are able to decouple the interactions between B(·) and the magma in the system and directly control the particle size distribution f(r,t), despite a high model uncertainty. Once B(·) is independently controlled, the crystallizer becomes reliably modelable. The remaining manipulated variables, i.e., the feed rate ω, feed temperature To, feed concentration c0, and crystallizer temperature T, can be used to achieve good control of the performance vector ŷp. This crystallizer control scheme is patented47 and used commercially. In that sense, this learning model provided significant information for design and control. It identified the dominant variable for stability and nonlinearity as B(·), which here is a rate (we noted before that a dominant variable can be an internal flow rate or a state variable). At fixed B(·), the impact of all process variables on ŷp can be estimated from experiments in the laboratory or on the unit. However, our first model48 does not provide any correlation relating B(·) to the process conditions. It took me a long time to realize this, by observing and analyzing actual crystallizer results, but for over 30 years this model has been used for simulations and other theoretical studies with little relation to reality. However, some of the more complex models,9 which include f(r,t) in both B(·) and G(·) in a simplified form, confirm that once B(·) is independently controlled, one can use the simple model as the basis of a correlation structure. This clearly illustrates the role of simple learning models in design and control. Let me elaborate on two critical features that make this model48 inadequate. (1) An ideal stirred tank is used.8 We realized early that this assumption conflicts with industrial experience. A stirred tank model has only one physical parameter, the residence time. The length scale is irrelevant. However, instability in crystallizers is strongly scale-dependent for geometrically identical designs. Crystallization processes that were asymptotically stable in the pilot plant exhibit strong limit cycles at the large scale. This is expected from the already cited “Shinnar-Darwin” principle that processes unstable in the laboratory or pilot plant are seldom built (at least not by an experienced manager). There is a simple physical explanation as to why the stability is size-dependent. A stirred tank is a useful approximation for most processes. It gives reasonable results as long as the mixing time in the microscale is small compared to the residence time. In most processes very little occurs

in such a short time, but if the process is highly nonlinear, even small deviations from the ideal can have a big effect. However, when we looked24,51 at the effect quantitatively for many different nonlinear systems, only nucleation was nonlinear enough to have a strong effect even for cases where the mixing time was quite small compared to the residence time. In a stirred tank, at constant energy input per unit volume, the mixing time increases linearly with scale.52 Especially in a crystallizer, fast mixing is not feasible because it would break the crystals. If the process is sensitive to mixing, it will also be sensitive to scale. Therefore, models obtained in a small pilot plant will not be correct for the large plant. We will later discuss how to take this into account in the design. (2) The models proposed8,48 are very illuminating as learning models but not suitable for quantitative prediction. The nucleation model is a simple function of supersaturation but is not physically realistic for most crystallizers. In 1976, we published a paper9 showing that our early simplified model did not fit the detailed results of most industrial crystallizers and proposed an alternative model. Newly formed small crystals are unstable and quickly agglomerate to a stable critical size. Formation of new stable particles is controlled by agglomeration of smaller crystals competing with the capture of unstable small particles by larger stable crystals. Experimental proof of such a case has been provided.53 This model has completely different stability and dynamic behavior, although the time scale and forms of the limit cycles are the same. Recent work on the crystallization of nanoparticles shows that very small nanoparticles (less than 0.1 µm) are very unstable unless stabilized by surface protective agents.45 This makes the nucleation model8,9 very unlikely to apply to crystallization from solution. Still, there is a large literature49,54 doing detailed modeling, even nonlinear simulation,55 using our simplified model and trying to get much more information from it than it contains. I want to point out that in no way do I claim that using the simplified model8 will give accurate results; it just illuminates the effect of agglomeration on the dynamic behavior. This proves again one paradigm of mine,4 that one cannot and should not distinguish between alternative models for processes on a microscale on the basis of experiments on a macroscale. The fact that the simple proposed models8,9 are not accurate enough for reliable detailed modeling does not detract from their usefulness as learning models. What we can learn from them is sufficient to derive reliable scalable designs that provide stable and controllable operation. Let us summarize these lessons. (a) The simple model provides a physical understanding of the mechanism of the limit cycles and nonlinear behavior. In our design methodology, all one needs to know is what drives the internal feedback process and what is the dominant variable driving the internal feedback. All of the models show that the dominant variable is the nucleation rate or, in a wider sense, the rate at which new stable particles are formed. By controlling this rate, one can physically linearize the process and reliably stabilize it. In the next section, we discuss different ways to do that. (b) In all of those models, supersaturation, which has a strong impact on nucleation, plays a significant role.
(c) Using a classified outlet for a stirred tank destabilizes the crystallizer.48 Seeding the crystallizer with


small particles is a way to control B(·) in a model-independent way. (d) Even more important, the models show8,48 that the larger particles have a dominant role in the feedback process. One can then arbitrarily define a “nucleus” as a particle of a specific diameter, reasonably large but less than a fifth of the average particle diameter. This is very important for control and design because it allows defining as nuclei particles that are observable and therefore measurable. Thus, they can be added to the system or removed from it. This conclusion can be shown to be quite model-insensitive. V.2.C. Designing Crystallizers for Efficient Control of the Nucleation. The main result of the learning models of crystallizers is the identification of the nucleation rate as the only dominant variable that drives nonlinearities and instabilities. Furthermore, by control of the nucleation rate, the system becomes modelable, and we can control the particle size and many other properties. Also, if we can control the rate of nucleation to a desired value, we can study the impact of all other variables in the laboratory and get the required information for the control not only of particle size but also of other desired properties of the product. Finally, if the crystal has different polymorphs, one can use seeds to control the form of the new nuclei. V.2.D. Design of a Stirred Crystallizer with a Nuclei Trap. The design is shown in Figure 11. The crystallizer is equipped with an elutriation duct through which the crystal magma is pumped at a specific adjustable rate. The flow is adjustable such that only crystals below a critical diameter can reach the outlet of the elutriation duct. The dispersion is passed through a device (a heater or a filter) that removes or destroys the small particles. One can keep their concentration at a constant desired value by using the circulation rate through the elutriation duct as a manipulated variable and an inferential estimate of the concentration of small particles as a setpoint variable. This works quite well for physical linearization and stabilization of the crystallizer. The setpoint itself is very useful in the supervisory loop. There are various ways to get an inferential estimate for the concentration of small particles in the recirculating magma. One is light absorption, and another is to get sampled data from a particle counter.
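A rough way to see how the draw-off flow sets the cutoff size is to assume unhindered Stokes settling in the elutriation duct; this relation is not given in the paper, and the duct cross section A_duct, the mother-liquor density ρl, and the viscosity μ are quantities introduced here for illustration only. A crystal of radius rc is just carried up to the trap when its settling velocity equals the superficial upward velocity created by the draw-off flow ω0:

\[
\frac{2\,(\rho - \rho_{l})\,g\,r_{c}^{2}}{9\,\mu} \;\approx\; \frac{\omega_{0}}{A_{\mathrm{duct}}}
\qquad\Longrightarrow\qquad
r_{c} \;\approx\; \sqrt{\frac{9\,\mu\,\omega_{0}}{2\,(\rho - \rho_{l})\,g\,A_{\mathrm{duct}}}}\,.
\]

In this simplified picture, raising ω0 raises the cutoff radius and therefore the fraction of the nuclei population that is withdrawn and destroyed, which is what makes ω0 a convenient manipulated variable for the primary fines loop.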

V.2.E. Direct Control by Seeding. The best direct control of the nucleation rate is to decouple the nucleation control from the crystallization and provide the system with desired seed crystals. In addition, one has to keep the supersaturation low enough to prevent or minimize self-nucleation. To ensure this, one can use seed particles large enough that particles formed by self-nucleation can be constantly removed from the crystallizer by elutriation. This is the most powerful control of nucleation, though it is not always feasible. It allows one to control not only B(·) but also the crystal habit. Thus, for example, Merck developed a process that separates two chiral stereoisomers by crystallizing in two communicating crystallizers, each seeded with a different stereoisomer. To minimize the expense of providing seeds, one can generate them in the crystallizer by grinding a fraction of the product, preferably in a separate recycle loop. V.2.F. Using a Special Nucleation or Seed Generation Reactor. As mentioned before, excess nucleation occurs in the fresh feed before it completely mixes. We can use this fact in a small nucleation reactor in which part or all of the feed is used to generate small seed crystals. Learning models show8 that it is much easier to obtain stable operation with small crystals and short residence times than in the production of large crystals. We can produce such a dispersion of small crystals in a precrystallizer and feed them to the main crystallizer as seeds. This introduces additional manipulated variables: the fraction of the feed fed to the nucleation reactor and, if desired, a reflux flow from the crystallizer itself.22,56 The main crystallizer is then kept at conditions that minimize self-nucleation. Thus, these concepts of controlling the dominant variable for nonlinearity and stability allow one to make changes that minimize model uncertainty and allow for safe design and control with minimum model information. Understanding the dominant variables and how to control them independently is an absolutely essential minimum for design and control. V.2.G. Control Schemes for the Crystallizer. We will not go into much detail here because this is quite straightforward and depends on the specific crystallizer. In most cases, the dominant variables identifiable in the laboratory are supersaturation, which is controllable by feed addition, temperature, space velocity (or residence time), feed composition, and purge (which controls the concentration of impurities). All of these are directly and independently controllable, but only if B(·) is first independently controlled will one be able to estimate their impact on ŷp. All of the above methods have been demonstrated successfully in industry in many different designs; some (especially seeding) are also used in dispersion polymerization. I have experience with quite a number of cases. In no way do I want to imply that one always needs direct control of the nucleation rate or any dynamic control. For at least 10 000 years, many crystallizers were operated by manual control, without a model or measurements aside from particle size, and there are many processes doing so today. In fact, most reactors that have limit cycles can be controlled by adjusting operating conditions to either obtain stable operation or at least keep the limit cycles at an acceptably low amplitude. Control tends to either amplify such low-level cycles or, if it is sampled-data operator control, distort them to look like stochastic disturbances.
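For readers who want the structure of the fines-trap scheme of Figure 11 in one place, a deliberately simple sketch of the two-level partial control follows: a fast primary loop that manipulates the draw-off flow ω0 to hold the inferential fines concentration, and a slow supervisory loop that adjusts the fines setpoint from sampled product-size measurements. The class and function names, the PI form, and every tuning number are hypothetical illustrations, not taken from the paper or from any commercial control system.

# Hypothetical sketch of the two-level (partial) control structure for a fines trap.
class PrimaryFinesLoop:
    """Fast loop: manipulate the draw-off flow w0 through the trap so that the
    inferential fines concentration tracks its setpoint (simple PI law)."""

    def __init__(self, w0_nominal, kc, ti, w0_min=0.0, w0_max=None):
        self.w0_nominal = w0_nominal   # design draw-off flow
        self.kc, self.ti = kc, ti      # illustrative PI tuning
        self.integral = 0.0
        self.w0_min, self.w0_max = w0_min, w0_max

    def update(self, c_fines_meas, c_fines_sp, dt):
        err = c_fines_meas - c_fines_sp          # too many fines -> raise w0
        self.integral += err * dt
        w0 = self.w0_nominal + self.kc * (err + self.integral / self.ti)
        if self.w0_max is not None:
            w0 = min(w0, self.w0_max)
        return max(self.w0_min, w0)

def supervisory_update(c_fines_sp, mean_size_meas, mean_size_target, gain):
    """Slow loop, executed every few draw-down times from sampled product data:
    if the product is too small, lower the fines setpoint (destroy more nuclei)
    so that fewer crystals survive and each grows larger, and vice versa."""
    return c_fines_sp - gain * (mean_size_target - mean_size_meas)

# Typical use: call the primary loop every measurement interval and the
# supervisory correction only every few draw-down times.
primary = PrimaryFinesLoop(w0_nominal=1.0, kc=0.5, ti=5.0, w0_max=3.0)
c_fines_sp = 0.02                              # placeholder setpoint
w0 = primary.update(c_fines_meas=0.025, c_fines_sp=c_fines_sp, dt=0.1)
c_fines_sp = supervisory_update(c_fines_sp, mean_size_meas=0.8,
                                mean_size_target=1.0, gain=0.01)

The essential design decision this sketch encodes is the one argued throughout the section: the fast loop pins down the dominant variable (the effective nucleation rate, via cfines), and everything else is handled slowly and nearly statically at the supervisory level.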


VI. Summary and Discussion

The paper presents a method for the development, design, and control of complex nonlinear processes, with emphasis on new chemical processes. It is a concurrent design method because control considerations as well as design considerations are an integral part already at the stage of early development. Control is used here not just to compensate for disturbances or changes in the specifications but also to compensate for the model and scale-up uncertainties of new processes. For efficient development, it is important that design as well as economic considerations play a significant role in the early stages of laboratory investigations. This allows one to focus on obtaining the necessary information for the design and control as well as on evaluating the economic viability of the process. Doing so results in far safer scale-up, cheaper and much faster process development, and a better design than one gets from building a large pilot plant. Not only are pilot plants costly to build and operate and slow to run, but their expense and their limitations also do not allow operation over a large range of conditions, thus limiting our ability to get proper nonlinear data for the dominant variables. Obtaining such data in the laboratory is much easier, cheaper, and more accurate because one can decouple the impact of different variables. Such large pilot plants are often run just to confirm the ability of the process to meet specifications. However, this is not enough for reliable design because large plants can, for the same operating conditions, give significantly different results. For most processes, pilot plants are totally unnecessary. There are some important exceptions: processes involving hot sticky solids and processes sensitive to the intensity of mixing. In the latter case, proper design can minimize this need. For these cases, it is hard to compensate for the uncertainties by control alone. However, for the majority of properly designed processes, such compensation is possible, and our design method is safer and more reliable than large pilot plants. An example of such a process, the laminar flame TiO2 process, was briefly discussed. I have often met, in practice, scale-up proposals using large nozzles. None of them was able to scale up properly. The Department of Energy wanted to develop fluid-bed coal gasifiers with a large central nozzle in a 10-ft fluid bed, in a process extremely sensitive to mixing. Luckily, it was never built. I saw other examples, which for proprietary reasons I cannot discuss. In all of these cases, it is easy to solve the problem by a multiple-nozzle design. There are also processes in which large pilot plants or demonstration plants can be avoided not by a different design but by choice of a different process. One example is discussed by the author.44 It dealt with a comparison of indirect coal liquefaction, by first gasifying the coal and then converting the syngas to gasoline and diesel, versus direct liquefaction, by reacting coal with H2 under high pressure and then hydrocracking the products. The indirect liquefaction, contrary to intuition and some cost estimates, not only was cheaper but gave a much better product, and it had another critical advantage. Coal gasification was a proven process and operated on a large scale. Conversion of syngas is a simple, easy-to-scale gaseous process. Direct liquefaction involves hot sticky solids, and even a large pilot plant is not enough. One needs a large, expensive demonstration plant operated for a long time before one can build several plants. To do so requires 7-10 years and many billions of dollars. This gave indirect liquefaction a very large advantage, recognized by several major oil companies. The design method given in the paper requires a thorough understanding of the chemistry, kinetics, and physics of the process. While the method has a structure and a well-worked-out procedure, it is not based on any algorithms or optimizations. It extensively uses modeling and available algorithms, but only as auxiliary tools; especially the large literature on learning models4,6,7 is essential to minimize development costs. Well-designed learning models allow one to identify potential mechanisms of instability and to protect against them. Such models provide information on potentially dominant variables and are useful in defining the scale-up risks and how to minimize them by choosing a proper design.
One costly mistake that I have noted in many process developments I have consulted on is an overemphasis on capital cost.

Figure 12. Outline of process and reactor design for a new process.

It is not that cost is unimportant, but this focus omits two important aspects that took me a long time to learn. (a) The novel elements of a new process are almost always either in the reactor or in the separation process. Seldom are both novel. The unknown elements in many cases represent no more than 10-20% of the total project cost. Therefore, it always pays to focus on maximizing the yields and reducing the risks. (b) The second common mistake is to focus, during preliminary design and process choice, on the construction cost of the process or unit, without taking into account that the development cost as well as the development time can be strongly different for different processes or designs. I was once involved in a very large project where an attempt was made to replace a well-working reactor with a cheaper design. The total development cost for the new design was 3 times larger than the total potential savings on a single world-scale plant. The methodology is presented in detail using two examples, which show how it applies to a whole range of processes. The methodology is summarized in Figure 12. There are several aspects of our method that are, in my opinion, not sufficiently stressed in the current literature. (1) The importance of specifications for design, development, and control is not emphasized. (2) The use of models and optimization is overemphasized, because at the time of design model uncertainty is often high.


(3) The design and control should make maximum use of the direct information obtained in the laboratory. The relation between outputs and dominant variables directly and independently measured in the laboratory is much more reliable than the relation between those outputs and state variables estimated from the model. This is also important for the structure of efficient correlation models, obtained either from the laboratory or directly in the unit. (4) The control structure is really determined at the design stage because it depends on the manipulated variables and their range, which have to be fixed at the time of design. The variables chosen for setpoints should be the dominant process variables studied in the laboratory, and loops should be chosen to allow fast and direct control of the dominant variables. Therefore, the emphasis on control has to start during the development, before the detailed design. (5) Dynamic control is essential for stability and disturbance rejection but seldom critical for profit and for meeting specifications. The critical control is the slow nonlinear supervisory control, which uses the setpoints of the dynamic control and variables controlled outside the unit as manipulated variables. Therefore, one cannot design a control system by just looking at the linear control matrix of the dynamic control in an isolated way. One also has to take into account the following: (a) To stabilize the total system, which is nonlinear, the variables dominant for nonlinearity have to be controlled in a very narrow range. This is not just a control problem but also a design problem because this capability depends on the available manipulated variables. Luckily, these variables are also important setpoints for the supervisory control. (b) The other setpoints in the dynamic control have to be chosen to be useful for the supervisory control, and the correlation between them and the specifications has to be clearly predictable; especially, the qualitative relation has to be known in a model-insensitive way. (c) One has to remember that the dynamic control is only the linear part of an overall nonlinear control system. In the concurrent design, the overall control is the sole concern during the design of the process. However, once the system is designed and operating, linear control theory can make a major contribution to improving the control. While the setpoints are fixed, linear control theory can not only improve the performance of the individual loops but also minimize interactions between the loops. Finally, it can utilize additional measurements to improve the dynamic behavior (feedforward, cascade, internal model control, etc.). The two examples chosen were used to illustrate the application of the method and the failures that can occur when these items are neglected. I feel free to say so because some of the failures discussed were also committed by me. I want to make one last comment. There is another mistaken trend that I have observed in my career, which is the compartmentalization of the design function. Development, pilot planting, design, control, and economics have become separate consecutive functions instead of being concurrent integrated activities. I have seen many cases where this not only increased cost but also caused failures. I have also seen a number of cases where, after successful pilot planting, it was discovered that the process has no economic advantages or is noncompetitive with available technologies.

In each case, an engineer competent in effective first-cost evaluation could have seen this in the exploratory stage. Because one cannot afford a detailed design and cost estimation at the early stage, one needs to develop engineers who can do this quickly and effectively with modern comparative evaluation methods.42-44 In this paper, I dealt only with the integration of development, design, and control, but economic considerations are essential in any design; integrating them is essential at every step and requires training engineers to do this effectively and to think that way. Because companies are now starting to focus on the cost and efficiency of research and engineering, it is time that the profession relooks at this problem and starts to seriously examine how development and design can be done much more efficiently and cheaply, without reducing the reliability of the scale-up. This is feasible and should become a central concern of our profession.

Notation

A = inherent catalyst activity
B = nucleation rate per volume of solution
c = solute concentration in the crystallizer
cfines = concentration of fine crystals
cm = metastable concentration
cs = solubility concentration of the solute
c0 = solute concentration in the feed
Creg = coke on regenerated catalyst
Cspent = coke on spent catalyst
E = activation energy
f(r,t) = particle size distribution
Fair = air flow rate to the regenerator
Fcat = catalyst circulation rate
Ffeed = oil feed flow rate
G = crystal growth rate
kB* = kinetic constant of the nucleation rate
kG* = kinetic constant of the growth rate
n = exponent in nucleation function
P = pressure
P(O2) = partial pressure of oxygen
r = characteristic radius of the crystal
rc = nuclei trap cutoff size
R = gas constant
t = time
To = crystallizer feed temperature
T0 = steady-state temperature in an exothermic reaction
Tfeed = oil feed temperature
Tfg = flue gas temperature
Tmix = temperature at the reactor bottom after feed and catalyst mixing
Trea = reactor top temperature
Treg = regenerator dense bed temperature
V = crystallizer working volume
ŷp = vector of process outputs
ypi = output variables, product properties
ypi(reachable) = reachable space of the process
ypi(spec) = desired specification space
δ(r) = Dirac delta function
∆Tcyc = temperature drop across the cyclones
∆T* = maximum allowable temperature differential in an exothermic reaction
ε = fractional volume of the solution
θ = crystallization draw-down time = V/ω
ρ = crystal density
ω = crystallizer volumetric feed and/or withdrawal flow rate
ω0 = draw-off stream to a heater containing fine crystals


Literature Cited (1) Shinnar, R.; Rinard, I. H.; Dainson, B. Partial Control. 5. A Systematic Approach to the Concurrent Design and Scale-up of Complex Processes: The Role of Control System Design in Compensating for Significant Model Uncertainties. Ind. Eng. Chem. Res. 2000, 39 (1), 103-121. (2) Kothare, M. V.; Shinnar, R.; Rinard, I. H.; Morari, M. On Defining the Partial Control Problem: Concept and Examples. AIChE J. 2000, 46 (12), 2456-2474. (3) Shinnar, R. The Role of Control in the Design and Scale-Up of Complex Chemical Processes. Invited Plenary Lecture given at the IFAC Conference (International Federation of Automatic Control), Korea, Jun 2001. (4) Shinnar, R. Chemical Reactor Modelling. The Desirable and the Achievable. Chem. React. Eng. Rev. 1978, 29, 1-36. (5) Shinnar, R. Chemical Reactor Modelling for Purposes of Controller Design. Chem. Eng. Commun. 1981, 9, 73-99. (6) Aris, R. Mathematical Modeling: A Chemical Engineer’s Perspective (Process Systems Engineering); Academic Press: San Diego, 1999; Vol. 1. (7) Aris, R.; Amundson, N. R. An Analysis of Chemical Reactor Stability and Control. Chem. Eng. Sci. 1958, 8, 121-155. (8) Sherwin, M.; Shinnar, R.; Katz, S. Dynamic Behaviour of the Well-Mixed Isothermal Crystallizer. AIChE J. 1967, 13, 1141. (9) Liss, B.; Shinnar, R. The Dynamic Behavior of Crystallizers in which Nucleation and Growth Depend on Properties of the Crystal Magma. AIChE Symp. Ser. 1976, 72 (153), 28. (10) Avidan, A.; Edwards, M.; Owen, H. Innovative Improvements Highlight FCC’s Past and Future. Oil Gas J. 1990, 88 (2), 33-58. (11) Avidan, A.; Shinnar, R. Development of Catalytic Cracker Technology. A Lesson in Chemical Reactor Design. Ind. Eng. Chem. Res. 1990, 29 (6), 931-942. (12) Arbel, A.; Huang, Z.; Rinard, I. H.; Shinnar, R.; Sapre, A. V. Dynamics and Control of Fluidized Catalytic Crackers. 1. Modeling of the Current Generation of FCC’s. Ind. Eng. Chem. Res. 1995, 34 (4), 1228-1243. (13) Jacob, S. M.; Gross, B.; Voltz, S. E.; Weekman, V. M., Jr. A Lumping and Reaction Scheme for Catalytic Cracking. AIChE J. 1976, 22 (4), 701-713. (14) Shinnar, R.; Snyder, P. W., Jr.; Weekman, V. W., Jr. Combustion Regeneration of Hydrocarbon Conversion Catalyst With Recycle of High-Temperature Regenerated Catalyst. U.S. Patent 3,970,587, 1976. (15) Gould, L. A.; Evans, L. B.; Kurihara, H. Optimal Control of Fluid Catalytic Cracking Processes. Automatica 1970, 6 (5), 695-703. (16) Lee, W.; Weekman, V. Advanced Control Practice in the Chemical Process Industry. AIChE J. 1976, 22, 27. (17) Hovd, M.; Skogestad, S. Procedure for Regulatory Control Structure Selection with Application to the FCC Process. AIChE J. 1993, 39 (12), 1938-1953. (18) Balchen, J. G.; Ljungquist, D.; Strand, S. State-Space Predictive Control. Chem. Eng. Sci. 1992, 47 (4), 787-807. (19) Arbel, A.; Rinard, I. H.; Shinnar, R. Dynamics and Control of Fluidized Catalytic Crackers. 3. Designing the Control System: Choice of Manipulated and Measured Variables for Partial Control. Ind. Eng. Chem. Res. 1996, 35 (7), 2215-2233. (20) Rosenbrock, H. H. The Future of Control. Plenary Papers IFAC Symposium; IFAC: Boston, 1975. (21) Kestenbaum, A.; Shinnar, R.; Thau, F. Design Concepts for Process Control. Ind. Eng. Chem. Process Des. Dev. 1976, 13, 2. (22) Shinnar, R. Impact of Model Uncertainties and Nonlinearities on Modern Controller Design. In Proceedings of the 3rd International Conference on Process Control; Morari, M., McAvoy, T., Eds.; Elsevier: New York, 1986. (23) Shinnar, R.
Use of Residence and Contact Time Distributions in Reactor Design. In Chemical Reaction and Reactor Engineering; Carberry, J. J., Varma, A., Eds.; Marcel Dekker: New York, 1986. (24) Shinnar, R. Residence-Time Distributions and Tracer Experiments in Chemical Reactor Design: The Power and Usefulness of a “Wrong” Concept. Chemical Engineering Reviews; Freund Publishing House Ltd.: 1993; Vol. 9, Nos. 1-2.

(25) Evangelista, J. J.; Katz, S.; Shinnar, R. Scale-up Criteria for Stirred Tank Reactors. AIChE J. 1969, 15 (6), 843-853. (26) Arbel, A.; Rinard, I. H.; Shinnar, R.; Sapre, A. V. Dynamics and Control of Fluidized Catalytic Crackers. 2. Multiple Steady State and Instabilities. Ind. Eng. Chem. Res. 1995, 34 (9), 3014-3026. (27) Balakotaiah, V.; Luss, D. Global Analysis of the Multiplicity Features of Multi-Reaction Lumped-Parameter Systems. Chem. Eng. Sci. 1984, 39 (5), 865-881. (28) Razon, L. F.; Schmitz, R. A. Multiplicities and Instabilities in Chemically Reacting Systems: A Review. Chem. Eng. Sci. 1987, 42 (5), 1005-1047. (29) Shinnar, R.; Doyle, F. J., III; Budman, M. H.; Morari, M. Design Considerations for Tubular Reactors with Highly Exothermic Reactions. AIChE J. 1992, 38, 1729. (30) Wang, P.; McAvoy, T. Synthesis of Plantwide Control Systems Using a Dynamic Model and Optimization. Ind. Eng. Chem. Res. 2001, 40, 5732-5742. (31) Loeblein, C.; Perkins, J. D. Structural Design of On-line Process Optimization Systems. AIChE J. 1999, 45, 1018-1040. (32) Heath, J. A.; Kookos, I. K.; Perkins, J. D. Process Control Structure Selection Based on Economics. AIChE J. 2000, 46, 1998-2016. (33) Morari, M.; Zafiriou, E. Robust Process Control; Prentice Hall: Englewood Cliffs, NJ, 1989. (34) Stephanopoulos, G. Chemical Process Control. An Introduction to Theory and Practice; PTR Prentice Hall: Englewood Cliffs, NJ, 1984. (35) Ogunnaike, B. A.; Ray, H. W. Process Dynamics, Modeling, and Control; Oxford University Press: New York, 1994. (36) Hicks, R. C.; Worrel, G. R.; Durney, R. J. Atlantic Seeks Improved Control; Studies Analog-Digital Models. Oil Gas J. 1966, 24, 97. (37) Krambeck, F. J.; Avidan, A. A.; Lee, C. K.; Lo, M. N. Predicting Fluid-Bed Reactor Efficiency Using Adsorbing Gas Tracer. AIChE J. 1987, 33 (10), 1727. (38) Shinnar, R.; Rumschitzki, D. Tracer Experiments in Heterogeneous Chemical Reactor Design. AIChE J. 1989, 35, 1651. (39) Elnashaie, S. S. E. H.; El-Hennawi, I. M. Multiplicity of the Steady State in Fluidized Bed Reactors. IV. Fluid Catalytic Cracking (FCC). Chem. Eng. Sci. 1979, 34, 1113-1121. (40) Elshishini, S. S.; Elnashaie, S. S. E. H. Digital Simulation of Industrial Fluid Catalytic Cracking Units: Bifurcation and Its Implications. Chem. Eng. Sci. 1990, 45, 553-559. (41) Edwards, W. M.; Kim, H. N. Multiple Steady State in FCCU Operations. Chem. Eng. Sci. 1988, 43 (8), 1825-1830. (42) Shinnar, R.; Fortuna, G.; Shapira, D. Use of Nuclear Energy in Production of Synthesis Natural Gas and Hydrogen from Coal. Ind. Eng. Chem. Process Des. Dev. 1984, 23, 183. (43) Shinnar, R.; Shapira, D.; Zakai, S. Thermochemical and Hybrid Cycles for Hydrogen Production. A Differential Economic Comparison with Electrolysis. Ind. Eng. Chem. Process Des. Dev. 1981, 20, 581. (44) Shinnar, R. Differential Economic Analysis. Gasoline from Coal. CHEMTECH 1978, 8, 686-693. (45) Singhal, A.; Skandan, G.; Wang, A.; Glumac, N.; Kear, B. H.; Hunt, R. D. On Nanoparticles Aggregation During Vapor Phase Synthesis. Nanostruct. Mater. 1999, 11 (4), 545-552. (46) Katz, S.; Shinnar, R. Polymerization Kinetics and Reactor Design. Chemical Reaction Engineering; Advances in Chemistry Series 109; American Chemical Society: Washington, DC, 1972. (47) Lei, S. J.; Shinnar, R.; Katz, S. The Stability and Dynamic Behavior of a Continuous Crystallizer with Fines Trap. AIChE J. 1971, 17, 1459. (48) Sherwin, M.; Shinnar, R.; Katz, S. Dynamic Behavior of the Isothermal Well Stirred Crystallizer with Classified Outlet. Chem.
Eng. Prog. Symp. Ser. 1969, 65, 75-90. (49) Randolph, A. D.; Larson, M. A. Theory of Particulate Processes, 2nd ed.; Academic Press: New York, 1988. (50) Shinnar, R. Control Systems for Crystallizer. U.S. Patent 3,649,782, 1972. (51) Evangelista, J. J.; Shinnar, R.; Katz, S. The Effect of Imperfect Mixing on Stirred Combustion Reactors. 12th Symposium (International) on Combustion; The Combustion Institute: Pittsburgh, 1969.

(52) Shinnar, R. On the Behavior of Liquid Dispersions in Mixing Vessels. J. Fluid Mech. 1961, 10, Part 2, 259. (53) Glassner, A. The Mechanism of Crystallization; a Revision of Concepts. Mater. Res. Bull. 1973, 8, 413-422. (54) Randolph, A. D. Design, Control, and Analysis of Crystallization Processes; AIChE Symposium Series 193; AIChE: New York, 1980; p 76. (55) Rawlings, J. B.; Miller, S. M.; Witkowski, W. R. Model Identification and Control of Solution Crystallization Processes: A Review. Ind. Eng. Chem. Res. 1993, 32 (7), 1275-1296.

(56) Citro, F.; Galdi, M. R.; Shinnar, R. The Limitation on Modeling Chemical Reactors for Scale-up and Control. Design and Control of a Continuous Crystallizer. In press.

Received for review June 2, 2003
Revised manuscript received October 17, 2003
Accepted October 27, 2003
IE0304715