H = Hamiltonian
k_i, K_i = pre-exponential factors and rate constants, i = 1, 2
K = feedback gain in iteration algorithm based on second variations
m = control vector
p = costate vector
P = matrix of second-order costate variables
R = ideal gas constant
R = matrix defined in Equation 10A
S = first partial derivative of H
t = time variable
t_0, t_f = initial and final time
T = second partial derivative matrix, Equation 10A
x = state vector
δx, δm = incremental vectors
ε = step size parameter
( )^(i) = ith iteration, omitted whenever no confusion can arise
(^) = optimum trajectory
(·) = time derivative

Literature Cited

Berkovitz, L. D., J. Math. Anal. Appl. 3, 145 (1961).
Bliss, G. A., "Lectures on the Calculus of Variations," University of Chicago Press, Chicago, 1946.
Breakwell, J. V., Ho, Y. C., Intern. J. Eng. Sci. 2, 565 (1965).
Breakwell, J. V., Speyer, J. L., Bryson, A. E., S.I.A.M. J. Control, Ser. A, 1, 193 (1963).
Denham, W. F., Bryson, A. E., A.I.A.A. J. 2, 25 (1964).
Denn, M. M., Aris, R., Ind. Eng. Chem. Fundamentals 4, 213 (1965).
Fletcher, R., Powell, M. J. D., Computer J. 6, 163 (1963).
Hestenes, M. R., Rand Corp., Santa Monica, Calif., Rept. RM-102 (1949).
Horn, F., Troltenier, U., Chem. Ing. Tech. 32, 382 (1960).
Kalman, R. E., Bol. Soc. Mat. Mex. 5, 102 (1960).
Kelley, H. J., I.R.E. Trans. Autom. Control AC-7, 75 (1962a).
Kelley, H. J., in "Optimization Techniques," G. Leitmann, ed., Academic Press, New York, 1962b.
Kelley, H. J., Kopp, R. E., Moyer, H. G., "Trajectory Optimization Technique Based upon the Theory of Second Variations," AIAA Astrodynamics Conference, New Haven, Conn., 1963.
Kopp, R. E., Moyer, H. G., McGill, R., Pinkham, G., in "Computing Methods in Optimization Problems," A. V. Balakrishnan and L. W. Neustadt, eds., Academic Press, New York, 1964.
Lee, E. S., A.I.Ch.E. J. 10, 309 (1964a).
Lee, E. S., Ind. Eng. Chem. Fundamentals 3, 373 (1964b).
Luus, R., Lapidus, L., "Control of Nonlinear Systems. Convergence by Combined First and Second Variations," A.I.Ch.E. J., in press.
Merriam, C. W., III, Information and Control 8, 215 (1965).
Merriam, C. W., III, "Optimization Theory and the Design of Feedback Control Systems," Chap. 10, McGraw-Hill, New York, 1964.
Pontryagin, L. S., et al., "The Mathematical Theory of Optimal Processes," Interscience, New York, 1962.
Sharmack, D. D., Proceedings of Optimum System Synthesis Conference, Wright-Patterson Air Force Base, Dayton, Ohio, Rept. ASD-TDR-63-119 (1963).
Storey, C., Chem. Eng. Sci. 17, 45 (1962).
RECEIVED for review April 25, 1966 ACCEPTED December 19, 1966
SECOND-VARIATIONAL METHODS FOR OPTIMIZATION OF DISCRETE SYSTEMS

F. A. FINE¹ AND S. G. BANKOFF
Chemical Engineering Department, Northwestern University, Evanston, Ill.
A second-order control vector relaxation procedure, given by Merriam for optimization of continuous systems, is extended to discrete systems. As an example, the problem of the optimal temperature sequence in a series of stirred tank reactors is treated, and the results are compared with the continuous case.
An algorithm due to Merriam (1964, 1965), based upon the classical accessory minimization problem of Jacobi, as discussed by Bliss (1961), has been applied to the optimization of some continuous chemical processes (Fine and Bankoff, 1967), in contrast to control vector iteration methods based upon first-order variations alone, such as functional gradient techniques. This technique involves second variations and leads to a control vector increment which is a function of the state vector increment; hence it takes into account the perturbation of the solution surface by the control variation in preceding time intervals. Given a suitably close primary trajectory, which may be obtained by gradient methods, this feedback control optimization was shown to converge rapidly upon the optimal trajectory. From another point of view, the feedback algorithm is a quasi-optimum control formula, which allows suboptimal trajectories to be readily constructed from the optimal trajectory in order to compensate for variations in feed composition.

¹ At present serving with the French Army.

In the present work the method is extended to discrete systems, the resulting algorithm being entirely analogous to that for continuous systems. Nevertheless, this result is not an obvious one, since a direct treatment of the equations used to derive the continuous algorithm leads to second-order terms which cannot be made arbitrarily small unless the number of stages becomes arbitrarily large (Fine, 1965). The correct approach stems from an expansion of the discrete analog of the Hamilton-Jacobi equation, illustrating again the care which must be exercised in passing from optimal control of continuous systems to discrete systems. Some applications to sequences of stirred tank reactors are given, which very rapidly approach the continuous optimal policy as the number of stages increases. The theory of optimal control for discrete systems with bounded control only has recently been investigated by Jordan
VOL. 6 NO. 2 MAY 1967 293
and Polak (1964) and Halkin (1964), and with bounded state variables in addition, by Rosen (1964, 1966). Such constraints are not considered at the present time, but may be represented by penalty functions included in t_n and F(x_N).
Statement of Problem

Let x_n ∈ E^p represent the state vector for the effluent of the nth stage of an N-stage process, and m_n ∈ E^q be the corresponding control vector. It is required to determine vectors x_n and m_n so as to minimize the error index

e = Σ_{n=1}^{N} t_n(x_{n-1}, m_n) + F(x_N)   (1)

subject to the iteration condition

x_n − x_{n-1} = f_n(x_{n-1}, m_n);  n = 1, …, N   (2)

with the specified initial condition, x_0 = a. It is assumed only that t_n and f_n ∈ C′ on E^p × E^q, and F ∈ C′ on E^p. A minor alteration of a proof by Rosen shows that the necessary Kuhn-Tucker conditions for a relative minimum become in this case (vector-matrix notation is defined in the Nomenclature):

p_n − p_{n-1} = −∇_x H_n;  p_N = ∇_x F(x_N)   (3)

where the Hamiltonian is defined by

H_n(x_{n-1}, m_n, p_n) = t_n + p_n′ f_n   (4)

and

∇_m H_n = 0;  n = 1, 2, …, N

It can also be shown, by a slight extension of Rosen's arguments, that if t_n is convex on E^p × E^q, n = 1, …, N, F(x_N) is convex on E^p, and f_n(x_{n-1}, m_n) is linear on E^p × E^q, n = 1, …, N, these conditions are sufficient for a global minimum.

Relaxation Procedure Based on First Variations

To solve this set of equations by a first-order relaxation procedure, we ignore Equation 4, so that x_n and m_n are now two independent variables. Knowing m_n^(i), x_n^(i), p_n^(i), n = 1, 2, …, N, from the ith iteration, the problem is to determine a set of N controls m_n^(i+1), n = 1, 2, …, N, such that e^(i+1) converges to its minimum value. To obtain an improved approximation at any stage of the calculation, expand t_n^(i+1), f_n^(i+1), and p_n^(i+1) in first-order Taylor series about the ith iteration:

t_n^(i+1) = t_n^(i) + δx_{n-1}′ ∇_x t_n^(i) + δm_n′ ∇_m t_n^(i)
f_n^(i+1) = f_n^(i) + δx_{n-1}′ ∇_x f_n^(i) + δm_n′ ∇_m f_n^(i)
p_n^(i+1) = p_n^(i) + δx_{n-1}′ ∇_x p_n^(i)

where

δx_{n-1} = x_{n-1}^(i+1) − x_{n-1}^(i);  δm_n = m_n^(i+1) − m_n^(i)

The expansions are now substituted in e^(i+1), and the simplifications performed which result from the neglect of second-order terms, and also from the use of the relation

x_n^(i) − x_{n-1}^(i) = f_n^(i)(x_{n-1}^(i), m_n^(i));  x_0^(i) = a

Upon grouping the terms so that e^(i) appears, and using Equation 3 for the adjoint variables, the result is simply:

e^(i+1) = e^(i) + Σ_{n=1}^{N} δm_n′ ∇_m H_n^(i)

Consequently, the condition e^(i+1) < e^(i) is satisfied by choosing

δm_n = −ε_n ∇_m H_n^(i)   (5a)

where ε_n > 0 determines the step size. This development follows almost directly that of Merriam for the continuous case, and the same considerations in the selection of the step size apply to the discrete case. In particular, two alternative algorithms can be chosen for δm_n, corresponding to gradient and Newton-Raphson methods:

δm_n = −ε_n ∇_m H_n;  δm_n = −ε_n [∇_mm H_n]^{-1} ∇_m H_n   (5b, c)

or combinations of these three algorithms can be used.

Relaxation Procedure Based on Second Variations

The relaxation procedure based on second variations involves a second-order Taylor series expansion of e^(i+1). As noted by Merriam, the problem of selecting an efficient δm_n for a high rate of convergence is, in itself, a feedback control optimization problem. Hence, a quasi-optimum feedback control equation is first derived, which will determine, in part, the form of the iteration equation occurring in the relaxation procedure. Specifically, the parameters used to define the quasi-optimum control equation will be identified with the adjoint variables introduced in the relaxation procedure. In many cases the problem is one of error minimization. If it is desired to maximize profit or yield, this is readily performed by a sign change. We therefore define a minimum error function:

E_n(x_{n-1}) = Min_{m_i; i=n,…,N} [ Σ_{i=n}^{N} t_i(x_{i-1}, m_i) + F(x_N) ]

E_n is, in fact, a function only of x_{n-1}. Once x_{n-1} is fixed, the policies m_i, i = n, n + 1, …, N, are determined by the minimization procedure, so that E_n is then uniquely determined. Applying dynamic programming techniques, we have:

E_n(x_{n-1}) = Min_{m_n} { t_n + E_{n+1}(x_n) }

or:

Min_{m_n} [ t_n + E_{n+1}(x_n) ] − E_n(x_{n-1}) = 0   (6)

which is the discrete form of the Hamilton-Jacobi equation. Furthermore, an expansion of the error function about the optimum path will give an approximation to the new optimum control, corresponding to changes in input conditions, δx̂_{n-1} = x_{n-1} − x̂_{n-1}. This is referred to as the quasi-optimum control equation, which defines the quasi-optimum control m_n^+. A second-order Taylor series expansion of E_n(x_{n-1}) about the optimal path is written as

E_{n,2}(x_{n-1}) = Ê_n + p̂_{n-1}′ δx̂_{n-1} + δx̂_{n-1}′ P̂_{n-1} δx̂_{n-1}

with Ê_n = E_n(x̂_{n-1}). The boundary conditions are obtained by noting that

E_{N+1}(x_N) = F(x_N)
which yields, upon expanding F(x_N) about F(x̂_N) to the same order, and matching coefficients:

Ê_{N+1} = F(x̂_N)   (7a)

p̂_N = ∇_x F(x̂_N)   (7b)

P̂_N = ½ ∇_xx² F(x̂_N)   (7c)
Upon differentiating Equation 12, and making use of Equation 13, we have:

dh_n(x_{n-1})/dx_{n-1} = ∂t_n^+/∂x_{n-1} + (p̂_n + 2 P̂_n δx_n^+) (1 + ∂f_n^+/∂x_{n-1})

where δx_n^+ = x_{n-1} + f_n^+ − x̂_n.
It is convenient to introduce

h_n(x_{n-1}) = Min_{m_n} [ t_n + E_{n+1,2}(x_n) ]   (8)

and similarly to expand h_n(x_{n-1}) about the optimal path:

h_n(x_{n-1}) = h_{n,2}(x_{n-1}) + O(|δx̂_{n-1}|³)

where

h_{n,2}(x_{n-1}) = h_n(x̂_{n-1}) + δx̂_{n-1}′ ∇_x h_n(x̂_{n-1}) + ½ δx̂_{n-1}′ ∇_xx² h_n(x̂_{n-1}) δx̂_{n-1}

By differentiating Equation 13 with respect to x_{n-1} one obtains the variation of the quasi-optimum control with the feed, which enters the linearized control equation (Equation 16).
Keeping only the terms of degree two or less, Equation 6 becomes:

h_{n,2}(x_{n-1}) − E_{n,2}(x_{n-1}) = 0   (9)

In order to satisfy Equation 9 for all values of δx̂_{n-1} for which these expansions hold, it is necessary that:

Ê_n = h_n(x̂_{n-1});  p̂_{n-1} = ∇_x h_n(x̂_{n-1});  P̂_{n-1} = ½ ∇_xx² h_n(x̂_{n-1})   (10)
Finally, the minimization procedure given in Equation 8 defines the quasi-optimum control, m_n^+:

h_n(x_{n-1}) = Min_{m_n} [ t_n + Ê_{n+1} + p̂_n′ δx_n + δx_n′ P̂_n δx_n ]   (11)

where δx_n = x_{n-1} + f_n(x_{n-1}, m_n) − x̂_n. Equation 11, upon insertion of the quasi-optimal control, can be written as:

h_n(x_{n-1}) = t_n^+ + E_{n+1,2}(x_{n-1} + f_n^+)   (12)

where

t_n^+ = t_n(x_{n-1}, m_n^+);  f_n^+ = f_n(x_{n-1}, m_n^+)

For the present it is assumed that there are no control constraints. Also, for simplicity of exposition, we henceforth consider a system having only one state and one control variable; the results can be easily extended to a pth-order system having q control signals. In the absence of bounds on the control signal, Equation 11 can be made explicit by the relation:

∂[ t_n + E_{n+1,2}(x_n) ]/∂m_n = 0   (13)

By substituting these expressions, evaluated along the optimal path, into Equation 10, the recursion equations for the parameters (Equations 14 and 15) are obtained, with

t̂_n = t_n(x̂_{n-1}, m̂_n);  f̂_n = f_n(x̂_{n-1}, m̂_n)
The terminal boundary conditions are given by Equations 7b and 7c. Finally, the quasi-optimum control is given by Equation 13, or by its linearized form:
where Equations 10 and 12 now permit the evaluation of the parameters.
Equation 16 is in fact a feedforward control equation, since it gives the quasi-optimum control, m_n^+, according to variations in the feed, x_{n-1}. This point does not arise in the continuous case, since the quasi-optimum control equation
can be considered as a feedforward as well as a feedback control equation. Knowing x̂_n and m̂_n, the set of Equations 14, 15, and 16 allows the construction of a quasi-optimum control. We now go back to the original problem of determining x̂_n and m̂_n by a relaxation procedure based on second variations. In the expression for e^(i+1):
e^(i+1) = Σ_{n=1}^{N} { t_n^(i+1) + p_n^(i+1)′ [ f_n^(i+1) − (x_n^(i+1) − x_{n-1}^(i+1)) ] } + F(x_N^(i+1))
t_n^(i+1), f_n^(i+1), p_n^(i+1), and F(x_N^(i+1)) are expanded in second-order Taylor series about the ith iteration:
where

S_n^(i) = ∂t_n^(i)/∂m_n^(i) + p_n^(i) ∂f_n^(i)/∂m_n^(i)

the partial derivative of the Hamiltonian with respect to the control variable, vanishes on the optimal path. Because of the fixed boundary conditions for x_0 and the boundary conditions in Equations 15, simplifications occur in the first and second terms.
In order to compute the last terms, we introduce the linearized form of the state equation:

δx_n^(i) = δx_{n-1}^(i) + (∂f_n^(i)/∂x_{n-1}^(i)) δx_{n-1}^(i) + (∂f_n^(i)/∂m_n^(i)) δm_n^(i);  δx_0^(i) = 0

so that finally:

e^(i+1) = e^(i) + Σ_{n=1}^{N} [ S_n^(i) δm_n^(i) + ½ T_n^(i) (δm_n^(i))² + R_n^(i) δm_n^(i) δx_{n-1}^(i) ]   (17)

The equations defining p_n^(i) and P_n^(i) are taken to be the ones which define p̂_n and P̂_n in Equations 14 and 15, where the symbol (^) is replaced by the superscript ( )^(i). According to the relation

P_{n-1}^(i) = ½ ∂p_{n-1}^(i)/∂x_{n-1}^(i)

derived from Equation 7, the expansion of p_n^(i+1) is

p_n^(i+1) = p_n^(i) + 2 P_n^(i) δx_n^(i)
Thus, a second-order expansion of the error measure leads to a quadratic form in δm_n^(i). A possible algorithm for δm_n^(i), obtained by minimizing the right side of Equation 17 at each stage, and assuming T_n^(i) > 0, is:

δm_n^(i) = −(1/T_n^(i)) [ S_n^(i) + R_n^(i) δx_{n-1}^(i) ]

Bringing these expansions into the expression for e^(i+1), and performing the simplifications which result from neglecting third-order terms, and from use of Equation 16 and the relation

x_n^(i) − x_{n-1}^(i) = f_n^(i)(x_{n-1}^(i), m_n^(i));  x_0^(i) = a

the condition e^(i+1) < e^(i) is then achieved, provided that

(S_n^(i)/T_n^(i)) [ S_n^(i) + 2 R_n^(i) δx_{n-1}^(i) ] > 0

In order to restrict the step size, a constant ε^(i), 0 < ε^(i) < 1, is introduced, so that the iterations are performed with:

δm_n^(i) = −(ε^(i)/T_n^(i)) [ S_n^(i) + R_n^(i) δx_{n-1}^(i) ];  n = 1, 2, …, N   (18)
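The forward feedback sweep implied by Equation 18 can be sketched for the scalar-state, scalar-control case. The coefficient arrays below are illustrative stand-ins evaluated along some ith trajectory, not the paper's reactor example; the state increment is propagated through the linearized state equation δx_n = (1 + B_n) δx_{n-1} + C_n δm_n, with δx_0 = 0 because the feed x_0 = a is fixed.

```python
# Sketch of the feedback sweep of Equation 18 for a scalar-state,
# scalar-control system.  S, T, R are the gradient, second-derivative,
# and cross-derivative coefficients of Equation 17; B, C are the state
# Jacobians df/dx and df/dm.  All values here are hypothetical.

def second_variation_sweep(S, T, R, B, C, eps):
    """Return dm_n = -(eps / T_n) * (S_n + R_n * dx_{n-1}), propagating
    dx_n = (1 + B_n) dx_{n-1} + C_n dm_n forward from dx_0 = 0."""
    N = len(S)
    dm = [0.0] * N
    dx = 0.0                      # dx_0 = 0: the feed x_0 = a is fixed
    for n in range(N):
        dm[n] = -(eps / T[n]) * (S[n] + R[n] * dx)
        dx = (1.0 + B[n]) * dx + C[n] * dm[n]
    return dm

# Illustrative coefficients for a 3-stage system (hypothetical values):
dm = second_variation_sweep(S=[0.4, -0.2, 0.1],
                            T=[2.0, 2.0, 2.0],
                            R=[0.5, 0.5, 0.5],
                            B=[-0.1, -0.1, -0.1],
                            C=[1.0, 1.0, 1.0],
                            eps=0.5)
print(dm)
```

Note that the first increment reduces to the pure gradient step −ε S_1/T_1, since δx_0 = 0; the feedback term R_n δx_{n-1} acts only at downstream stages.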
The form of Equation 17 is similar to the algorithm for the continuous case. However, this result could not be derived a priori, since additional terms arise in the expressions for R_n, T_n, and e^(i+1), because operations such as integration by parts cannot be performed in the discrete case. Moreover, a direct expansion analogous to that used in deriving the Hamilton-Jacobi equation for continuous systems leads to second-order terms of O(x_n − x_{n-1})², which cannot be made arbitrarily small for fixed N. However, as N → ∞ the discrete case can be shown to tend toward the continuous case. A more complex, but possibly more efficient, algorithm results from direct minimization of the right-hand side of Equation 17. One defines a new error measure:

V_p^(i) = Σ_{n=p}^{N} φ_n

where φ_n is the term within the brackets on the right-hand side of Equation 17. This quadratic form can be minimized, subject to the linearized state equation, by procedures analogous to that given by Merriam (1964) for the continuous case. As noted earlier (Fine and Bankoff, 1967), this is one form of the accessory minimization problem of Jacobi. These results can be extended readily to a pth-order system containing q control variables. The appropriate set of costate equations is given by:

p_{n-1}^(i) − p_n^(i) = ∇_x H_n^(i)   (19)

P_{n-1}^(i) − P_n^(i) = ½ ∇_xx² H_n^(i) + P_n^(i) B_n^(i) + (B_n^(i))′ P_n^(i) + (B_n^(i))′ P_n^(i) B_n^(i) − ½ (R_n^(i))′ (T_n^(i))^{-1} R_n^(i)   (20)

Figure 1. Successive approximations to a 10-stage policy, using first variations
with boundary conditions:

p_N^(i) = ∇_x F(x_N^(i));  P_N^(i) = ½ ∇_xx² F(x_N^(i))

and where

H_n^(i) = t_n^(i) + p_n^(i)′ f_n^(i)

B_n^(i) = [∇_x f_n^(i)′]′;  C_n^(i) = [∇_m f_n^(i)′]′

R_n^(i) = ∇_mx² H_n^(i) + 2 C_n^(i)′ P_n^(i) (I + B_n^(i))

T_n^(i) = ∇_mm² H_n^(i) + 2 C_n^(i)′ P_n^(i) C_n^(i)

Then an iteration equation for δm_n^(i), which ensures the convergence of e^(i) toward its minimum value, is given by:

δm_n^(i) = −ε^(i) (T_n^(i))^{-1} ( ∇_m H_n^(i) + R_n^(i) δx_{n-1}^(i) )   (21)

Application to Sequence of Stirred Tank Reactors

We consider a sequence of N stirred tank reactors in which the two consecutive second-order reactions A → B → C take place. B is the desired product. A material balance over the nth tank gives, after a little manipulation,

x_n^1 − x_{n-1}^1 = f_n^1(x_{n-1}, m_n) = { −1 + (1 + 4θ k_n^1 x_{n-1}^1)^{1/2} } / 2 k_n^1 θ − x_{n-1}^1

x_n^2 − x_{n-1}^2 = f_n^2(x_{n-1}, m_n) = { −1 + [1 + 4θ k_n^2 (x_{n-1}^1 + x_{n-1}^2 − x_n^1)]^{1/2} } / 2 k_n^2 θ − x_{n-1}^2   (22)

where k_n^i = a_i exp(−E_i/R m_n), i = 1, 2. In Equations 22, m_n, the temperature of the nth stage, is the control variable, and x_n^1 and x_n^2 are the concentrations of A and B, respectively, in the nth tank. θ is the mean residence time in each tank. The case where the tank volumes are not equal offers no additional difficulties. Numerical values are chosen to be the same as those used in the continuous case (Fine and Bankoff, 1967) for the purpose of comparison. Then, if N = 10, θ = 1 minute, in order to retain a total residence time of 10 minutes. It is desired to maximize x_N^2. If we therefore define t_n = x_{n-1}^2 − x_n^2 ≡ −f_n^2, the problem is now to minimize e = Σ_{n=1}^{N} t_n, subject to Equations 22.
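As a numerical check, the two stage balances of Equations 22 can be solved directly for the effluent concentrations. The frequency factors and activation energies below are hypothetical (zero activation energies make the rate constants equal to one, so the result is hand-checkable); they are not the paper's numerical values.

```python
import math

R_GAS = 1.987  # ideal gas constant in the rate expression, cal/(mol K)

def stage_balance(x1, x2, m, theta, a1, E1, a2, E2):
    """One pass of Equations 22: effluent concentrations (x1_out, x2_out)
    of A and B for a stirred tank at temperature m and holding time theta.
    Both reactions of A -> B -> C are second order."""
    k1 = a1 * math.exp(-E1 / (R_GAS * m))
    k2 = a2 * math.exp(-E2 / (R_GAS * m))
    # A balance: k1*theta*x1_out**2 + x1_out - x1 = 0, positive root
    x1_out = (-1.0 + math.sqrt(1.0 + 4.0 * theta * k1 * x1)) / (2.0 * k1 * theta)
    # B balance: generation of B equals the A consumed, x1 - x1_out
    x2_out = (-1.0 + math.sqrt(1.0 + 4.0 * theta * k2 * (x1 + x2 - x1_out))) \
             / (2.0 * k2 * theta)
    return x1_out, x2_out

# With E1 = E2 = 0 and a1 = a2 = 1 both rate constants reduce to 1
# (hypothetical constants, chosen only so the roots can be checked by hand):
x1_out, x2_out = stage_balance(x1=2.0, x2=0.0, m=300.0, theta=1.0,
                               a1=1.0, E1=0.0, a2=1.0, E2=0.0)
print(x1_out, x2_out)   # 1.0 and (sqrt(5) - 1)/2
```

Chaining this function N times, with m_n supplied by the control iteration, generates a trajectory of the cascade for any temperature sequence.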
The expression for the Hamiltonian is now H_n = (−1 + p_n^2) f_n^2 + p_n^1 f_n^1, and the first-order algorithm given above, using Equation 5a for the control iteration, led to the results shown in Figure 1. A nearly optimum concentration of 0.819 mole per liter was achieved in nine iterations, which required 0.11 minute of computer time. The step size was held equal to two degrees, and the computations were stopped when the improvement in the product concentration was less than 0.01 mole per liter. This algorithm gives a rough idea about the optimal control and is convenient for rapidly finding a primary trajectory, since it is easy to determine what the step size should be. When the profile becomes discontinuous, S^(i) = ∂H^(i)/∂m^(i) changes its sign, indicating proximity to the optimal path. A more sophisticated first-variation algorithm, or a second-variation algorithm, should then be used. With the second-variation algorithm given above, starting from the same initial temperature profile, a slightly improved yield of 0.820 was achieved in 15 iterations, requiring 0.20 minute of computer time. The optimum concentration and temperature profiles are shown in Figure 2. Also shown is the quasi-optimum concentration profile, obtained from Equation 16, corresponding to the perturbed initial condition x_0^1 = 0.6 and x_0^2 = 0.4. This quasi-optimum path was found to be very stable. This implies that the corresponding quasi-optimum control could be used as a feedforward control for controlling load disturbances in the sequence of stirred tanks. When the number of stages, N, becomes large, terms in (x_n − x_{n-1})² can be neglected. It can then be seen that the
previous derivations lead to equations which are precisely the discrete forms of the continuous equations. Applying these equations to this system, and taking a small step size in Equation 21, the results, shown in Figure 3, are seen to be comparable with the results of the continuous case, shown in Figure 4 of Fine and Bankoff (1967). Similar results with a series of 40 stirred tank reactors have been obtained.

Conclusions
Relaxation techniques based on first and second variations can both be extended to the discrete case. The extension of first-variation techniques is straightforward, but not so that of techniques based on second variations, since the expansion of the discrete error measure gives rise to additional terms. The quasi-optimum control has to be formed with the discrete form of the Hamilton-Jacobi equation, eventually leading to an algorithm very similar to that for the continuous case. An application of both algorithms is given, along with a demonstration of the quasi-optimum control equation, which leads to a feedforward control.
Figure 2. Approximate optimal temperature and concentration profiles in a 10-stage cascade, using second variations (ε = 1). Quasi-optimal path is the result of an input concentration disturbance

Acknowledgment
We acknowledge fellowship support from the French government, as well as a generous grant of computer time from the Northwestern University Computer Center.

Figure 3. Improved estimate of optimal temperature and concentration profiles in a 10-stage cascade, with a smaller step size (ε = 0.2)

Nomenclature

a = initial state vector
B_n, C_n = Jacobian matrices, defined after Equation 20
C′ = space of continuously differentiable functions
e = error index, defined by Equation 1
E_1, E_2 = activation energies
E_n(x_{n-1}) = minimum-error function, defined before Equation 6
E^p, E^q = p- and q-dimensional Euclidean spaces
E_{n,2}(x_{n-1}) = second-order expansion of the minimum-error function
f_n(x_{n-1}, m_n) = state dynamic function, Equation 2
F(x_N) = terminal penalty function
H_n(x_{n-1}, m_n, p_n) = Hamiltonian function
h_n(x_{n-1}) = function defined by Equation 8
h_{n,2}(x_{n-1}) = second-order expansion of h_n(x_{n-1})
k_n^i = rate constants
m = control vector
n = index referring to stage number
p, P = vector and matrix of costate variables
R = ideal gas constant
R_n = second partial derivative matrix of h_n(x_{n-1}) with respect to x_{n-1} and m_n
S_n = partial derivative of the Hamiltonian with respect to m_n
T_n = second partial derivative matrix of h_n(x_{n-1}) with respect to m_n
V = increment in profit function
x = state vector
δx, δm = incremental vectors

GREEK LETTERS
a_i = frequency factors
ε = step size
θ = residence time in each tank

SUPERSCRIPTS
(^) = optimum path
( )^(i) = ith iteration
( )^+ = quasi-optimum path

MATHEMATICAL OPERATIONS
∇_x = (∂/∂x^1, ∂/∂x^2, …, ∂/∂x^p)′
∇_m = (∂/∂m^1, …, ∂/∂m^q)′
∇_x f′ = Jacobian matrix
x′m = inner product
mx′ = (xm′)′ = outer product
Literature Cited

Bliss, G. A., "Lectures on the Calculus of Variations," University of Chicago Press, Chicago, Ill., 1961.
Fine, F. A., M.S. thesis, Northwestern University, Evanston, Ill., 1965.
Fine, F. A., Bankoff, S. G., Ind. Eng. Chem. Fundamentals 6, 288 (1967).
Halkin, H., in "Advances in Control Systems. Theory and Applications," C. T. Leondes, ed., pp. 173-96, Academic Press, New York, 1964.
Jordan, B. W., Polak, E., S.I.A.M. J. Control 2, 332 (1964).
Merriam, C. W., III, Information and Control 8, 215 (1965).
Merriam, C. W., III, "Optimization Theory and the Design of Feedback Control Systems," Chap. 10, McGraw-Hill, New York, 1964.
Rosen, J. B., "Optimal Control and Convex Programming," Math. Res. Center, University of Wisconsin, Madison, MRC Rept. 547 (1965); Proceedings of IBM Scientific Computing Symposium on Control Theory and Application, pp. 223-37, 1964.
Rosen, J. B., S.I.A.M. J. Control 4, 223 (1966).

RECEIVED for review June 20, 1966
ACCEPTED December 19, 1966
OPTIMUM CONTROL OF A CLASS OF DISTRIBUTED-PARAMETER PROCESSES

LOWELL B. KOPPEL

School of Chemical Engineering, Purdue University, Lafayette, Ind.
Optimal control of a class of processes described by partial differential equations is considered. A transfer function involving a term 1 − exp(−s − k) can represent dynamics of these as well as more complex processes. The optimal control of the distributed-parameter process is derived from that of a related lumped-parameter process, under not unreasonable restrictions, thus circumventing the computational complexity suggested by other studies of partial differential equations. Time-optimal controls are derived for some examples, and are found to require an infinite number of switches between nonextremal values, even after the process output comes to rest at the desired final state.
Optimal control of linear, lumped-parameter processes has been studied extensively. Recent books have presented impressive collections of solutions to this problem, derived primarily from Pontryagin's maximum principle (7, 6). A class of processes with distributed parameters has received recent attention by Denn, Gray, and Ferron (5), who derived necessary conditions for optimal control of tubular processes having radial and axial transport, with the steady-state axial boundary condition (e.g., wall heat flux profile) serving as the manipulated variable. Denn et al. also refer to previous theoretical studies on optimization of systems described by partial differential equations. In general, the solutions to these theoretical problems are so complex as to preclude implementation. In the present paper we consider a class of distributed-parameter processes for which the optimal control is simple and may be constructed from the already known optimal control of a related lumped-parameter process. In particular we consider time-optimal control of the process whose transfer function (Equation 1) involves the term 1 − exp(−s − k). First we show that this transfer function represents the dynamics of a practical class of distributed-parameter processes.

Process Dynamics

We begin by presenting some theoretical processes described by Equation 1. Consider a tubular heat exchanger in which constant physical properties, plug flow, negligible axial diffusion, perfect radial mixing, and negligible wall capacitance are assumed. The objective is control of the exit fluid temperature by manipulation of wall temperature or wall heat flux, each of which is assumed constant with length but variable with time. We further assume that the process is initially at steady state. For the wall temperature manipulation, the partial differential equation describing the fluid temperature T(y,θ) is Equation 2, with its boundary and initial conditions. Defining the normalized variables reduces Equation 2 and its conditions to:

∂c/∂t + ∂c/∂x = m(t) − c
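The normalized balance above can be integrated numerically. A minimal first-order upwind sketch follows; the grid sizes and the unit-step wall-temperature forcing are illustrative choices, not values from the paper.

```python
# Upwind finite-difference sketch of dc/dt + dc/dx = m(t) - c on
# 0 <= x <= 1, with inlet condition c(0, t) = 0 and initial condition
# c(x, 0) = 0 (process initially at steady state with no forcing).
# Grid sizes are illustrative; dt/dx = 0.5 satisfies the CFL condition.

def simulate(m_of_t, nx=50, nt=200, t_end=2.0):
    dx, dt = 1.0 / nx, t_end / nt
    c = [0.0] * (nx + 1)               # c(x, 0) = 0
    for k in range(nt):
        t = k * dt
        new = [0.0] * (nx + 1)         # new[0] = 0 enforces c(0, t) = 0
        for j in range(1, nx + 1):
            # c_t = -c_x + m(t) - c, with upwind difference for c_x
            new[j] = c[j] + dt * (-(c[j] - c[j - 1]) / dx + m_of_t(t) - c[j])
        c = new
    return c

# Step in wall temperature: m(t) = 1.  After the transit time t = 1 the
# exact solution is the steady profile c(x) = 1 - exp(-x).
c = simulate(lambda t: 1.0)
print(round(c[-1], 3))
```

The computed exit value approaches 1 − e⁻¹, the exit of the exact steady profile, to within the first-order truncation error of the upwind scheme.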