
Process Systems Engineering

Toward Fast Dynamic Optimization: An Indirect Algorithm that Uses Parsimonious Input Parameterization
Erdal Aydin, Dominique Bonvin, and Kai Sundmacher
Ind. Eng. Chem. Res., Just Accepted Manuscript • DOI: 10.1021/acs.iecr.8b02109 • Publication Date (Web): 17 Jun 2018




Industrial & Engineering Chemistry Research

Toward Fast Dynamic Optimization: An Indirect Algorithm that Uses Parsimonious Input Parameterization

Erdal Aydin,a,b Dominique Bonvin,c Kai Sundmacher a,d,*

a Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstraße 1, 39106 Magdeburg, Germany

b International Max Planck Research School (IMPRS) for Advanced Methods in Process and Systems Engineering, Magdeburg, Germany

c Laboratoire d’Automatique, Ecole Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland

d Otto-von-Guericke University Magdeburg, Universitätsplatz 2, 39106 Magdeburg, Germany

* Corresponding author: [email protected]; Tel: +49 391 6110 350, Fax: +49 391 6110 353

Abstract

Dynamic optimization plays an important role toward improving the operation of chemical systems, such as batch and semi-batch processes. The preferred strategy to solve constrained nonlinear dynamic optimization problems is to use a so-called direct approach. Nevertheless, depending on the problem at hand and the solution algorithm used, direct approaches may lead to large computational times. Indirect approaches based on Pontryagin’s Minimum Principle (PMP) represent an efficient alternative for the optimization of batch and semi-batch processes. This paper, which is an extension of our ESCAPE-2017 contribution 2, details the combination of an indirect solution scheme with a parsimonious input parameterization. The idea is to parameterize the sensitivity-seeking inputs in a parsimonious way so as to decrease the computational load of constrained nonlinear dynamic optimization problems. In addition, this article discusses structural differences between direct and indirect approaches. The proposed method is tested on both a batch binary distillation column with terminal purity constraints and a two-phase semi-batch hydroformylation reactor with a complex path constraint. The performance of the proposed indirect parsimonious solution scheme is compared with those of a fully parameterized PMP-based method and a direct simultaneous method. It is observed that the combination of the indirect approach with parsimonious input parameterization can lead to a significant reduction in computational time.

Keywords: Pontryagin’s Minimum Principle, parsimonious input parameterization, constrained dynamic optimization, batch and semi-batch processes, numerical optimization


1. Introduction Dynamic optimization is important in the chemical industry because of increased competition and strict environmental regulations. When a reliable process model is available, dynamic optimization can be regarded as an appropriate tool for increasing operational profit, enhancing product quality, and meeting environmental and safety regulations. Batch and semi-batch processes are used widely in high-tech, low-volume and high-added-value production. In recent years, process optimization has become an active area of research in batch processing 1-6. The dynamic optimization problem for batch and semi-batch processes can be formulated as follows 1, 4:

min_{t_f, u(t)}  J := φ(x(t_f))
s.t.  ẋ = F(x, u),  x(0) = x₀
      S(x, u) ≤ 0,  T(x(t_f)) ≤ 0                                    (1.1)

where J is the cost, which can always be formulated with respect to the states at the final time t_f (Mayer type), φ the cost function, x the state vector (x(t) ∈ ℝ^{n_x}), x₀ the initial conditions, u the input vector (u ∈ ℝ^{n_u}), S the vector of path constraints that includes the input bounds (S ∈ ℝ^{n_S}), and T the vector of terminal inequality constraints (T: ℝ^{n_x} → ℝ^{n_T}).

The solution strategies for nonlinear dynamic optimization problems can be divided into two main categories, namely, direct and indirect approaches 4. Direct approaches are often the preferred strategy, but they can exhibit limitations with respect to feasibility and computational load. On the other hand, indirect approaches transform the original problem into the minimization of a new cost function called the Hamiltonian. The transformed problem is then solved to satisfy the necessary conditions of optimality (NCO) defined by Pontryagin’s Minimum Principle (PMP) 7. The numerical methods present in the literature can be classified and characterized as illustrated in Fig. 1 8. These methods are detailed in Section 2.

[Figure 1 (classification diagram): The dynamic optimization problem is solved either by direct NLP approaches (via input discretization) or by indirect PMP approaches (via problem reformulation). Direct approaches comprise sequential methods (single-shooting, feasible-path, state integration), simultaneous methods (infeasible-path, approximation through state discretization), and hybrid multiple-shooting methods (infeasible-path, state integration on time stages). Indirect approaches comprise shooting methods (stability problems; computationally expensive; difficulty with discontinuities in the adjoints) and gradient methods (input discretization; integrate the states forward and the adjoints backward; gradient as ∂H/∂u; optimization via steepest descent or quasi-Newton).]

Fig 1. Classification of dynamic optimization methods.

In dynamic optimization problems, the optimal solution consists of various arcs 4. The input arcs can be classified depending on their characteristics as follows: an optimal input arc might be on a lower or upper bound (u_min or u_max), on a path constraint (u_path), or inside the feasible region as a sensitivity-seeking arc (u_sens). Accurate computation of a sensitivity-seeking arc can be burdensome because, very often, the fine tuning of a sensitivity-seeking arc improves the cost only negligibly. Accordingly, most solution schemes (direct as well as indirect) require fine input discretization levels to obtain accurate solutions. On the other hand, as an alternative to fine input discretization, the sensitivity-seeking arcs can be parameterized parsimoniously using fewer input variables, such as switching times and coefficients of low-order polynomials 9-11. We presented this methodology at the ESCAPE-2017 conference in Barcelona 12. In this special issue, we discuss the structural differences between direct and indirect approaches and detail the implementation of an indirect solution scheme that uses parsimonious input parameterization.

The paper is structured as follows. Section 2 revisits the available dynamic optimization methods along with the solution algorithms and discusses the structural differences of direct and indirect approaches. Section 3 details the proposed indirect parsimonious dynamic optimization algorithm. Section 4 presents two case studies, namely, a batch binary distillation column with terminal purity constraints and a semi-batch reactor with a complex state path constraint. Finally, Section 5 concludes this study.


2. Numerical Methods

2.1 Direct NLP Approaches

Direct approaches solve the original problem given in Eq. 1.1, using either sequential or simultaneous strategies, as detailed next.

2.1.1 Sequential vs. Simultaneous Strategies

Direct (single-shooting) sequential methods discretize the input vector using, for example, piecewise-continuous elements or polynomial functions. The dynamic system equations are then integrated to final time. These methods are often called “feasible-path” methods because the dynamic model equations are satisfied at all times. Through input discretization and system integration, the dynamic optimization problem can be reformulated as a nonlinear programming (NLP) problem and solved with an NLP solver 13, 14. It might be necessary to represent the input profiles using a relatively coarse parameterization to guarantee affordable computational times 15. However, the accurate detection of switching times and the satisfaction of path constraints usually require a fine input discretization. Note that it has also been proposed to use semi-infinite programming to deal efficiently with path constraints 16.

In contrast, in direct simultaneous methods, the dynamic optimization problem is transformed into algebraic equations via input and state discretization. Collocation techniques are often the preferred strategy for this transformation. Application of full discretization results in a large algebraic system of equations. Next, the transformed optimization problem is solved using an NLP solver, which simultaneously takes into account the system equations and the constraints and optimizes the cost 17, 18, 19. Note that, instead of accurate integration, the state equations are approximated at discrete time instants. Hence, this approach is referred to as an “infeasible-path” method.

Multiple shooting is another type of direct solution method. It represents a hybrid between simultaneous and sequential methods. In this method, the time interval is divided into stages, with the initial conditions for each stage being included as additional degrees of freedom in the optimization problem. This procedure is also referred to as “infeasible-path”, although the integration is as accurate as in the sequential methods 4. Efficient implementation of direct multiple-shooting methods for a variety of optimization problems is well documented in the literature 20-25.

Direct approaches are usually the procedure of choice to solve constrained nonlinear dynamic optimization problems. However, for large time horizons (which is the case for batch and semi-batch processes) or for fine input discretization levels, the standard NLP algorithms might turn out to be computationally expensive due to the matrix factorizations that are required in the solution steps 8, 26. This issue will be discussed in more detail in the next subsections.
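As a minimal illustration of the single-shooting ("feasible-path") sequential idea, the sketch below parameterizes a scalar input with N piecewise-constant elements, integrates the dynamics forward, and hands the resulting Mayer cost to an NLP solver. The model, target value, and bounds are invented for illustration and are not taken from this paper's case studies.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Toy dynamics x' = -x + u with piecewise-constant input on N stages;
# Mayer cost J = (x(tf) - 1.5)^2, input bounds 0 <= u <= 2 (all invented).
N, tf, x0 = 10, 2.0, 0.0
t_grid = np.linspace(0.0, tf, N + 1)

def simulate(U):
    """Feasible-path evaluation: integrate stage by stage with input U[k]."""
    x = x0
    for k in range(N):
        sol = solve_ivp(lambda t, y: -y + U[k],
                        (t_grid[k], t_grid[k + 1]), [x],
                        rtol=1e-8, atol=1e-10)  # tight tolerances for smooth FD gradients
        x = sol.y[0, -1]
    return x

def cost(U):
    return (simulate(U) - 1.5) ** 2

res = minimize(cost, 0.5 * np.ones(N), bounds=[(0.0, 2.0)] * N, method="SLSQP")
```

Every cost evaluation satisfies the model equations exactly (up to integration tolerance), which is the defining property of the sequential approach; only the inputs are decision variables.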

2.1.2 NLP Solution Algorithms

Upon applying a direct solution algorithm, the dynamic optimization problem given in Eq. 1.1 is mapped into the following general (static) NLP:

min_x  f(x)
s.t.  g(x) ≤ 0
      h(x) = 0                                                       (2.1)


where f is the scalar cost function, x the vector of decision variables, g the vector of inequality constraints, and h the vector of equality constraints. The Lagrangian L is defined as:

L(x, λ, μ) := f(x) + λᵀ h(x) + μᵀ g(x)                               (2.2)

where λ and μ are the Lagrange multipliers for the equality and inequality constraints, respectively. Assuming that x* is a local minimizer, the first-order necessary conditions of optimality, also referred to as the Karush-Kuhn-Tucker (KKT) conditions, read:

∂L/∂x (x*) = 0                        stationarity conditions
g(x*) ≤ 0,  h(x*) = 0                 primal feasibility conditions
μ* ≥ 0                                dual feasibility conditions
μ*ᵀ g(x*) = 0                         complementarity slackness condition     (2.3)

with (x*, λ*, μ*) being referred to as a KKT point when the conditions given in Eq. 2.3 hold 27, 28. Note that the KKT conditions given in Eq. 2.3 call for some regularity conditions 8. The two common solution strategies for such NLP problems, namely, sequential quadratic programming (SQP, an active-set-type method) and interior-point methods (barrier-type methods), are discussed next.
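The conditions in Eq. 2.3 can be verified numerically at a candidate solution. The sketch below solves a small invented equality-constrained NLP with an off-the-shelf solver and then checks stationarity and primal feasibility of the returned point; the objective, constraint, and analytical multiplier are illustrative assumptions, not from this paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy NLP: min (x1-1)^2 + (x2-2)^2  s.t.  h(x) = x1 + x2 - 1 = 0  (invented).
f = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2
h = lambda x: x[0] + x[1] - 1

res = minimize(f, [0.0, 0.0], constraints={"type": "eq", "fun": h})
x = res.x

# Stationarity 2(x1-1) + lam = 0 gives the multiplier analytically:
lam = 2.0 * (1.0 - x[0])
# Gradient of the Lagrangian L = f + lam*h; should vanish at a KKT point.
grad_L = np.array([2 * (x[0] - 1) + lam, 2 * (x[1] - 2) + lam])
```

A near-zero `grad_L` together with `h(x) ≈ 0` certifies (to numerical tolerance) that the solver returned a KKT point in the sense of Eq. 2.3.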

2.1.2.1 Sequential Quadratic Programming (SQP)

The general idea of SQP is to solve successive quadratic approximations to the original NLP and converge to the optimal solution. This method can be categorized as an active-set-type method 28-30. The NLP problem given in Eq. 2.1 is reformulated by adding the slack variables z to the inequality constraints as follows:


min_{x,z}  f(x)
s.t.  g(x) + z = 0,  z ≥ 0
      h(x) = 0                                                       (2.4a)

Upon defining the extended Lagrangian ℒ, it can be shown that solving Eq. 2.4a is equivalent to solving the following problem:

min_{x,z}  ℒ(x, z, λ, ν) := f(x) + λᵀ h(x) + νᵀ (g(x) + z)
s.t.  g(x) + z = 0,  z ≥ 0
      h(x) = 0                                                       (2.4b)

where ν are the Lagrange multipliers for the reformulated equality constraints.

The standard approximation in SQP is a Taylor-series expansion of the original NLP. Given the current values of the variables (x_k, z_k) and of the Lagrange multipliers (λ_k, ν_k), the quadratic problem for the iteration step (d_x, d_z) becomes:

min_{d_x, d_z}  (1/2) d_xᵀ ∇_xx ℒ(x_k, z_k, λ_k, ν_k) d_x + ∇f(x_k)ᵀ d_x
s.t.  ∇g(x_k)ᵀ d_x + d_z = −(g(x_k) + z_k)
      ∇h(x_k)ᵀ d_x = −h(x_k)
      d_z ≥ −z_k                                                     (2.5)

In most cases, the Hessian matrix ∇_xx ℒ(x_k, z_k, λ_k, ν_k) is computed as a positive-definite matrix using finite differences or BFGS-like algorithms to approximate the original Hessian 8, 29. The problem given in Eq. 2.5 can be solved via Newton’s method, which requires the solution to the following linear KKT system:


⎡ ∇_xx ℒ(x_k, z_k, λ_k, ν_k)   0    ∇h(x_k)   ∇g(x_k) ⎤ ⎡ d_x ⎤      ⎡ ∇f(x_k)      ⎤
⎢ ∇h(x_k)ᵀ                     0       0         0     ⎥ ⎢ d_z ⎥  = − ⎢ h(x_k)       ⎥
⎢ ∇g(x_k)ᵀ                     I       0         0     ⎥ ⎢ d_λ ⎥      ⎢ g(x_k) + z_k ⎥
⎣ 0                           V_k      0        Z_k    ⎦ ⎣ d_ν ⎦      ⎣ Z_k ν_k      ⎦   (2.6)

where V_k and Z_k are diagonal matrices computed from ν_k and z_k. Then, the next iterate is obtained as follows:

x_{k+1} = x_k + d_x,   z_{k+1} = z_k + d_z,   λ_{k+1} = λ_k + d_λ,   ν_{k+1} = ν_k + d_ν   (2.7)

The iteration steps given in Eqs 2.6 and 2.7 are continued until the optimal solution is obtained. Global convergence can be achieved using line search and filter methods 8.
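For an equality-constrained QP, a single Newton step on a KKT system of the form of Eq. 2.6 (with the inequality and slack blocks omitted for brevity) already yields the exact solution. The sketch below assembles and solves that reduced system for a small invented QP; the matrices are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Reduced KKT system [[Q, A^T], [A, 0]] [x; lam] = [-c; b] for the QP
# min 0.5 x^T Q x + c^T x  s.t.  A x = b  (all data invented).
Q = np.array([[2.0, 0.0], [0.0, 2.0]])   # Hessian of the Lagrangian
c = np.array([-2.0, -4.0])               # linear term of f = (x1-1)^2 + (x2-2)^2
A = np.array([[1.0, 1.0]])               # equality constraint x1 + x2 = 1
b = np.array([1.0])

K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([-c, b]))
x_opt, lam = sol[:2], sol[2]
```

For genuinely nonlinear problems this step is repeated, as in Eqs 2.6 and 2.7, with the Hessian and constraint Jacobians re-evaluated (or quasi-Newton updated) at each iterate; the cubic cost of factorizing K at every step is precisely the expense discussed in Section 2.1.2.3.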

2.1.2.2 Interior-Point Methods

Interior-point methods relax the complementarity conditions μ*ᵀ g(x*) = 0 and solve the relaxed problem. The basic idea is to add log-based penalty terms to the cost function to enforce convergence inside the feasible region 31. To illustrate the idea of the algorithm, let us consider the following NLP, which can be seen as a generalization of Problem 2.4a:

min_x  f(x)
s.t.  h(x) = 0,  x ≥ 0                                               (2.8)

A penalty term can be included in the problem to give:

min_x  f̂(x) := f(x) − μ Σ_{i=1}^{n_x} ln(x_i)
s.t.  h(x) = 0,  x > 0                                               (2.9)

where n_x is the number of inequality (bound) constraints in Problem 2.8 and μ is the penalty factor. It should be noted that the log-barrier term becomes unbounded at x = 0. Hence, the path generated at each iteration must lie in the strictly positive region (x > 0) for the reformulated inequality constraints. When the penalty factor μ → 0, the solution to Problem 2.9 approaches that to Problem 2.8. This solution needs to fulfill the first-order NCO:

∇f(x) + ∇h(x) λ − μ X⁻¹ e = 0
h(x) = 0                                                             (2.10)

where X = diag{x} and e = [1, 1, …, 1]ᵀ. Eq. 2.10 represents the primal optimality conditions. However, it is much easier to solve the following equivalent primal-dual system of equations:

∇f(x) + ∇h(x) λ − ν = 0
X ν = μ e
h(x) = 0                                                             (2.11)

where ν are strictly positive dual variables that are substituted for the barrier term μ X⁻¹ e.

Linearization enables easy solution of the primal-dual problem 8. Given the iterate (x_k, λ_k, ν_k), the search direction is computed as follows:

⎡ ∇_xx ℒ(x_k, λ_k, ν_k)   ∇h(x_k)   −I  ⎤ ⎡ d_x ⎤      ⎡ ∇f(x_k) + ∇h(x_k) λ_k − ν_k ⎤
⎢ ∇h(x_k)ᵀ                   0       0  ⎥ ⎢ d_λ ⎥  = − ⎢ h(x_k)                      ⎥
⎣ V_k                        0      X_k ⎦ ⎣ d_ν ⎦      ⎣ X_k ν_k − μ e               ⎦   (2.12)

where X = diag{x} and V = diag{ν}. The iterations are repeated as per Eq. 2.7 until the optimal solution is found. Line search and filter algorithms are very important for enforcing global convergence 8, 32.
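The effect of driving the penalty factor μ → 0 in Eq. 2.9 can be seen on a one-dimensional toy problem where each barrier subproblem has a closed-form stationary point. The objective below is invented for illustration.

```python
import math

# Log-barrier homotopy on: min (x+1)^2  s.t.  x >= 0  (invented toy problem).
# The constrained minimizer is x* = 0, on the boundary of the feasible region.
def barrier_min(mu):
    """Minimizer of the barrier subproblem (x+1)^2 - mu*ln(x).

    Stationarity: 2(x+1) - mu/x = 0  =>  2x^2 + 2x - mu = 0 (positive root).
    """
    return (-1.0 + math.sqrt(1.0 + 2.0 * mu)) / 2.0

# Central path: strictly interior iterates converging to x* = 0 as mu -> 0.
path = [barrier_min(mu) for mu in (1.0, 0.1, 0.01, 1e-4)]
```

Each iterate is strictly positive (the barrier is unbounded at x = 0), and the sequence traces the central path toward the constrained solution, which is the homotopy that practical primal-dual codes follow via the Newton system of Eq. 2.12.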


2.1.2.3 Computational Aspects

The solution using direct NLP approaches requires solving the system of equations 2.6 or 2.12, the factorization of which might turn out to be relatively expensive for certain types of problems, constraints, time horizons and levels of discretization. This holds for both active-set and barrier-type methods. The computational complexity increases cubically with the time horizon or the level of input discretization due to the iterative updating of the Lagrange multipliers through Newton steps 26, 33. In contrast, the indirect-based methods discussed next can be adapted to avoid these expensive factorizations, so that the computational complexity increases only linearly with the horizon length 1, 26.

Several sensitivity-based methods have been discussed in the literature to minimize the computational requirements of NLPs. These methods basically take into account the previously computed solutions and the sensitivities of the NLPs. Efficient implementations have already been documented, especially for large-scale processes and nonlinear model predictive control 23, 34-38. However, the application of these sensitivity-based methods is still an open research question for batch and semi-batch processes that are characterized by large operational variations. For a detailed review of recent advances in sensitivity-based methods, the reader is referred to Biegler et al. 39.


2.2 Indirect PMP Approaches

2.2.1 Transformed Problem

In indirect approaches, the optimization Problem 1.1 is transformed into the minimization of a Hamiltonian function as follows 4:

min_{t_f, u(t)}  H(t) := λᵀ F(x, u) + μᵀ S(x, u)
s.t.  ẋ = F(x, u),  x(0) = x₀
      λ̇ᵀ = −∂H/∂x,   λᵀ(t_f) = (∂φ/∂x + νᵀ ∂T/∂x)|_{t_f}
      μᵀ S(x, u) = 0,   νᵀ T(x(t_f)) = 0
      ∂H/∂u = λᵀ ∂F(x, u)/∂u + μᵀ ∂S(x, u)/∂u = 0
      H(t_f) = λᵀ F(x, u)|_{t_f} + μᵀ S(x, u)|_{t_f} = 0             (2.13)

The transformed problem is then solved to satisfy the NCO as indicated next.

If Problem 1.1 has a feasible solution u*(·) with the state profiles x*(·), then there exist Lagrange multipliers λ*(·), μ*(·) and ν* such that the following equations hold for t ∈ [t₀, t_f] 28:

H(x*(t), u*(t), λ*(t), μ*(t)) = λ*ᵀ(t) F(x*(t), u*(t)) + μ*ᵀ(t) S(x*(t), u*(t)),
ẋ*(t) = H_λ(x*(t), u*(t), λ*(t), μ*(t)),                      states
λ̇*ᵀ(t) = −H_x(x*(t), u*(t), λ*(t), μ*(t)),                   co-states
x*(t₀) = x₀,                                                  initial conditions
λ*ᵀ(t_f) = (∂φ/∂x + ν*ᵀ ∂T/∂x)|_{t_f},                        terminal sensitivities
0 = H_u(x*(t), u*(t), λ*(t), μ*(t)),                          path sensitivities (stationarity)
0 ≥ S(x*(t), u*(t)),                                          path feasibility
0 = μ*ᵀ(t) S(x*(t), u*(t)),                                   path complementarity
0 ≤ μ*(t),                                                    dual path feasibility
0 ≥ T(x*(t_f)),                                               terminal feasibility
0 = ν*ᵀ T(x*(t_f)),                                           terminal complementarity
0 ≤ ν*,                                                       dual terminal feasibility
H(t_f) = 0                                                    transversality condition   (2.14)

where H is the Hamiltonian function, μ the vector of Lagrange multipliers for the path constraints (n_S-dimensional), λ the vector of co-states or adjoints (n_x-dimensional), and ν the vector of Lagrange multipliers for the terminal constraints (n_T-dimensional). For a detailed derivation, the reader is referred to Bryson 7.

PMP can be used to generate input and state trajectories that depend on the initial and terminal states. For simple unconstrained problems, the solution to the indirect problem can be computed analytically or else numerically by solving a standard two-point boundary-value problem. For constrained systems, the indirect formulation results in multi-point boundary-value problems that might be difficult to solve with the methods currently available in the literature 40.

Indirect approaches have been used for various types of optimization problems 5, 41-48. The detection of the correct arc structure and the computation of optimal switching times are key to indirect approaches, in particular in the presence of path constraints 1, 40, 49, 50. Nevertheless, nonlinear path-constrained dynamic optimization problems are rather demanding for indirect approaches. The convergence of shooting-type and gradient-based methods depends on a number of conditions 8, 49.

The main drawback of the shooting-type methods is the instability stemming from the forward integration of the co-states, in particular in the absence of a good initial guess; the major problem is the lack of initial values for the co-states. In contrast, for gradient-based algorithms, although a good initial guess might be useful to speed up convergence, it is usually not needed for convergence. Hence, instead of the simultaneous forward integration of the state and co-state equations, one can integrate the state equations forward in time and then the co-state equations backward in time. Eventually, optimization can be implemented via a gradient-based algorithm, in which a good initial guess is not crucial for convergence 4, 7, 49, 51.

2.2.2 Gradient-based Indirect Algorithm

To deal with the issues discussed in the previous section, an indirect control vector iteration scheme has been proposed 1. The performance of this scheme has been compared with a direct simultaneous method in terms of both optimal cost and CPU time. In addition, upon computing the switching functions, the optimal switching sequences and the arc types can be detected. The reader is referred to our previous works for details 1, 5.

In this control-vector-iteration algorithm, the input profiles are parameterized using, for example, N piecewise-constant elements. Then, the state equations are integrated forward in time and the co-state equations backward in time, which allows computing the gradient ∂H/∂u = λᵀ ∂F(x, u)/∂u + μᵀ ∂S(x, u)/∂u. The decision variables are updated using a gradient-based (either steepest-descent or quasi-Newton) scheme.

Indirect adjoining can be used to deal with the pure-state path constraints. Indirect adjoining enables the activation of violated inequality state path constraints with only one explicit computation when they are infeasible. Including logic decisions in the iteration scheme and activating the inequality path constraints (input or state) when they are infeasible avoids the computation of the exact values of the Lagrange multipliers for the path constraints. Hence, the original problem can be solved as an unconstrained one using a gradient-based indirect approach. In addition, if a constraint is violated, the corresponding Lagrange multiplier is penalized. This way, the iteration is always pushed inside the feasible region, similar to barrier-type methods. We denote this method the “gradient-based indirect approach”.

The various algorithmic steps proposed in previous work are as follows 1:

1) The problem is formulated as the solution to differential equations for both the states and the co-states, as given in Eq. 2.13.

2) Indirect adjoining is implemented to tackle pure-state path constraints 1. This way, when an iteration is infeasible, the corresponding path constraint can often be activated via a single explicit calculation. Otherwise, activation is performed using a root-finding algorithm.

3) The input profiles are discretized as u(t) = U(t), where U ∈ ℝ^{N × n_u}, with N the number of discrete input values and n_u the number of inputs.

4) The Lagrange multiplier vectors are discretized as μ(t) = M(t), where M ∈ ℝ^{N × n_S}, with n_S the number of path constraints.

5) If a path constraint j is not satisfied at the discrete time instant k, set M(j,k) = K > 0 and activate the constraint. Otherwise, set M(j,k) = 0 52.

6) The input values in U are updated via a quasi-Newton (or steepest-descent) step until a pre-defined optimality criterion is satisfied.

Remark 1. Steps 2 and 5 represent the interplay of states, co-states and Lagrange multipliers, which together reduce the complexity of the solution steps. While the co-states are handled via


integration (as given by PMP), the Lagrange multipliers for the path constraints are eliminated through Step 2 followed by Step 5. Accordingly, only the input values need to be updated, as shown in Step 6 1.
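The control-vector-iteration steps above can be sketched on a toy unconstrained problem (invented for illustration, not one of this paper's case studies): for dynamics ẋ = −x + u with Mayer cost J = x(t_f)², the Hamiltonian is H = λ(−x + u), so λ̇ = −∂H/∂x = λ with λ(t_f) = 2x(t_f), and the gradient with respect to the discretized input is simply ∂H/∂u = λ.

```python
import numpy as np

# Control vector iteration: forward states, backward adjoints, gradient step.
# Toy problem (invented): x' = -x + u, x(0) = 1, minimize J = x(tf)^2.
N, tf, x0 = 50, 1.0, 1.0
dt = tf / N
u = np.zeros(N)                      # piecewise-constant input, Step 3

for it in range(200):
    # Step: integrate the states forward in time (explicit Euler for brevity)
    x = np.empty(N + 1); x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + dt * (-x[k] + u[k])
    # Step: integrate the adjoints backward in time, lam' = lam, lam(tf) = 2 x(tf)
    lam = np.empty(N + 1); lam[N] = 2.0 * x[N]
    for k in range(N, 0, -1):
        lam[k - 1] = lam[k] - dt * lam[k]
    # Step 6: steepest-descent update using the gradient dH/du = lam
    u -= 0.5 * lam[:N]

J = x[N] ** 2
```

Because no path constraints are present here, steps 2 and 5 (indirect adjoining and multiplier activation) are vacuous; in the constrained case they would modify certain inputs explicitly before the gradient update.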

2.2.3 Computational Aspects

Indirect adjoining can be used to reformulate pure-state path constraints as mixed state-input constraints, which allows computing certain inputs explicitly to satisfy the path constraints. This way, the Lagrange multipliers for the path constraints can be eliminated from the optimization problem, with the result that the computational load of the solution algorithm depends mainly on the number of inputs 26, 33. The computational performance of an indirect scheme with indirect adjoining was tested on three different problems. The results indicate that the indirect approach can be more effective than a standard direct simultaneous method in terms of CPU time, especially for fine input discretization 1.

Although indirect-based methods have been shown to be efficient for the dynamic optimization of batch and semi-batch processes, a fine input discretization is still required to obtain accurate solutions, which might result in significant computational effort. We will see in the next section that parameterizing the sensitivity-seeking arcs differently, in particular using switching times on the time horizon, can further reduce the computational effort.


3. Parsimonious Indirect Method

3.1 Basic Idea of Parsimonious Input Parameterization

The optimal solution structure, that is, the arc types and input sequences, can be computed using a direct or an indirect approach 1, 4, 15, 16, 18, 22, 40, 53. Then, a parsimonious input parameterization of the form u(t) = U(π) can be postulated, where π is the new decision vector, and the effective indirect solution algorithm given in Section 2.2.2 can be applied to the reformulated parsimonious problem 12. The combination of an indirect approach with parsimonious input parameterization represents the main idea of this work.

As an illustration, a sensitivity-seeking input arc can be expressed as a linear arc between the two values u₁ and u₂ at the two adjustable switching times t₁ and t₂ that represent the beginning and the end of the arc, thus resulting in the new input vector π = (u₁, u₂, t₁, t₂)ᵀ. This reformulation allows applying the NCO as follows:

min_{t_f, π}  H̃(t) := λᵀ F̃(x, π) + μᵀ S̃(x, π)
s.t.  ẋ = F̃(x, π),  x(0) = x₀
      λ̇ᵀ = −∂H̃/∂x,   λᵀ(t_f) = (∂φ/∂x + νᵀ ∂T/∂x)|_{t_f}
      μᵀ S̃(x, π) = 0,   νᵀ T(x(t_f)) = 0
      ∂H̃(t)/∂π = λᵀ ∂F̃(x, π)/∂π + μᵀ ∂S̃(x, π)/∂π = 0              (3.1)

For example, if the solution structure consists of the 3 arcs u_max, u_sens and u_min, the sensitivity-seeking arc u_sens can be approximated using linear interpolation between u_max and u_min at the two switching times t₁ and t₂, thus giving:


$$
u(t) = \begin{cases}
u_{max} & \text{for } 0 \le t < t_1 \\[4pt]
u_{sens}(t) = u_{max} + \dfrac{u_{min} - u_{max}}{t_2 - t_1}\,(t - t_1) & \text{for } t_1 \le t < t_2 \\[4pt]
u_{min} & \text{for } t_2 \le t \le t_f
\end{cases}
\tag{3.2}
$$

Problem 3.1 can be solved using the gradient-based indirect algorithm given in Section 2.2.2. The use of parsimonious parameterization in combination with the indirect approach can reduce the computational load significantly.

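To illustrate how few decision variables are involved, the following minimal Python sketch (our own illustration with hypothetical names and values, not code from the paper) builds the three-arc input profile of Eq. 3.2 from the switching times alone:

```python
import numpy as np

def parsimonious_input(pi, t, u_max=1.0, u_min=0.0):
    """Three-arc input of Eq. 3.2: u_max, then a linear
    sensitivity-seeking arc, then u_min; pi = (t1, t2)."""
    t1, t2 = pi
    return np.where(
        t < t1, u_max,
        np.where(t < t2,
                 u_max + (u_min - u_max) / (t2 - t1) * (t - t1),
                 u_min))

# the whole 3 h profile is described by two decision variables
t_grid = np.linspace(0.0, 3.0, 301)
u = parsimonious_input((0.5, 2.5), t_grid)
```

Only the entries of π enter the optimization; the input trajectory itself is reconstructed on whatever time grid the integrator uses, which is why the discretization level no longer inflates the number of decision variables.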
3.2 Computation of Gradients

Parsimonious parameterization results in discrete decision variables along the time horizon that include the switching times as well as a few other input parameters. Since the PMP algorithm relies on the sensitivity of the Hamiltonian with respect to the decision variables, one needs to evaluate

$$
\frac{\partial H(t)}{\partial \pi} = \lambda^T \frac{\partial F(x, \pi)}{\partial \pi} + \mu^T \frac{\partial S(x, \pi)}{\partial \pi}
$$

to perform the optimization. This can be done by considering each arc individually and performing a time transformation of the system equations, as detailed next. Consider the system equation in Problem 3.1:
$$
\dot{x} = F(x, u), \quad \text{for } t \in [0, t_f]
\tag{3.3}
$$

For simplicity of presentation, let us assume that the optimal solution consists of the three following arcs:

$$
u(t) = \begin{cases}
1 & \text{for } 0 \le t < t_1 \\
a & \text{for } t_1 \le t < t_2 \\
0 & \text{for } t_2 \le t \le t_f
\end{cases}
\tag{3.4}
$$

where a is a scalar decision variable, along with t_1 and t_2. Hence, π := (t_1, t_2, a)^T, and we need to compute ∂H/∂t_1, ∂H/∂t_2 and ∂H/∂a.


Let us consider the first arc with the single decision variable t_1 and introduce the dimensionless time τ valid between 0 and t_1. Time t relates to the dimensionless time τ as follows:

$$
t(\tau) := 0 + \tau\,(t_1 - 0), \quad \tau \in [0, 1]
\tag{3.5}
$$

which allows writing the states, their derivatives and the system equations in terms of τ:

$$
\xi(\tau) := x(t(\tau)); \qquad
\frac{d\xi}{d\tau} = \frac{dx}{dt}\frac{dt}{d\tau} = \dot{x}(t)\,t_1; \qquad
\frac{d\xi}{d\tau} = t_1\,F_1(\xi(\tau)) := t_1\,F(x(t), u = 1)
\tag{3.6}
$$

Repeating this procedure for the second and third arcs gives:

$$
\frac{d\xi}{d\tau} = \begin{cases}
t_1\,F_1(\xi(\tau)) & \text{for } 0 \le t < t_1 \\
(t_2 - t_1)\,F_2(\xi(\tau), a) & \text{for } t_1 \le t < t_2 \\
(t_f - t_2)\,F_3(\xi(\tau)) & \text{for } t_2 \le t \le t_f
\end{cases}
\tag{3.7}
$$

where F_2(ξ(τ), a) := F(ξ(τ), u = a) and F_3(ξ(τ)) := F(ξ(τ), u = 0). Finally, differentiating Eq. 3.7 with respect to π = (t_1, t_2, a)^T gives:

$$
\frac{\partial}{\partial t_1}\frac{d\xi}{d\tau} = F_1(\xi(\tau)) - F_2(\xi(\tau), a); \qquad
\frac{\partial}{\partial t_2}\frac{d\xi}{d\tau} = F_2(\xi(\tau), a) - F_3(\xi(\tau)); \qquad
\frac{\partial}{\partial a}\frac{d\xi}{d\tau} = (t_2 - t_1)\,\frac{\partial F_2(\xi(\tau), a)}{\partial a}
$$

Note that the gradient ∂ξ/∂π can be obtained in a similar way. Finally, the sensitivity of the Hamiltonian with respect to the final time is given as:

$$
\frac{\partial H}{\partial t_f} = \lambda^T \frac{\partial F}{\partial t_f} + \mu^T \frac{\partial S}{\partial t_f}
$$

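The effect of the time transformation in Eqs. 3.5-3.7 can be checked numerically. The sketch below (illustrative Python with a toy scalar dynamics ẋ = u − 0.5x of our own choosing, not the paper's model) integrates the three arcs on the fixed horizon τ ∈ [0, 1], so that the arc lengths appear only as multiplicative factors, and then obtains the gradient with respect to a switching time by central finite differences as a cross-check of the analytical arc-wise expressions:

```python
def f(x, u):
    # toy scalar dynamics x' = u - 0.5*x (our choice, not the paper's model)
    return u - 0.5 * x

def rk4(rhs, x0, n=2000):
    # fixed-step RK4 on the dimensionless horizon tau in [0, 1]
    h = 1.0 / n
    x = x0
    for _ in range(n):
        k1 = rhs(x)
        k2 = rhs(x + 0.5 * h * k1)
        k3 = rhs(x + 0.5 * h * k2)
        k4 = rhs(x + h * k3)
        x += h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return x

def x_final(t1, t2, a, tf=3.0, x0=0.0):
    """Chain the three arcs of Eq. 3.7; each arc is integrated on the
    fixed horizon tau in [0, 1] and its length enters only as a factor."""
    x = rk4(lambda x: t1 * f(x, 1.0), x0)         # arc 1: u = 1
    x = rk4(lambda x: (t2 - t1) * f(x, a), x)     # arc 2: u = a
    x = rk4(lambda x: (tf - t2) * f(x, 0.0), x)   # arc 3: u = 0
    return x

# gradient w.r.t. the switching time t1 by central differences
eps = 1e-6
g_t1 = (x_final(1.0 + eps, 2.0, 0.5) - x_final(1.0 - eps, 2.0, 0.5)) / (2.0 * eps)
```

Because each arc lives on a fixed horizon, the switching times become ordinary parameters of the transformed dynamics, and the analytical expressions above can be validated against such finite-difference gradients.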

3.3 Algorithm

The parsimonious PMP-based solution algorithm can be outlined as follows 12:

Parsimonious PMP-based Solution Algorithm

Consider the optimization Problem 3.1, with N discrete time instants between 0 and t_f. Discretize the Lagrange multipliers for the path constraints as μ(t_k) = M(:, k), with M ∈ R^(nS × N), and write the co-state equations as λ̇ = G(x, λ, M, ν).

Assign values to the penalty parameter K > 0, the maximal number of iterations iter_max, the step size α and the optimality tolerance ε. Set the counter h = 0 and initialize the input vector π_0 along with M_0 and ν_0.

do h = 1 to iter_max
  I. Integrate the state equations ẋ = F(x, u) forward in time and integrate the co-state equations λ̇ = G(x, λ, M, ν) backward in time.
  II. If the jth path constraint S_j(x, u) ≤ 0 at the discrete time instant k, set M_h(j, k) = 0; otherwise, set M_h(j, k) = K, for j = 1, …, nS and k = 1, …, N.
  III. If the ith terminal constraint T_i(x(t_f)) ≤ 0, set ν_h(i) = 0; otherwise, set ν_h(i) = K, for i = 1, …, nT.
  IV. Compute the gradient value (∂H/∂π)_h using Eq. 3.1.
  V. If ‖(∂H/∂π)_h‖ < ε, stop and set π_opt := π_h; else set π_{h+1} := π_h − α (∂H/∂π)_h, where α = 0.05, and go to I.
end do

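The multiplier-update logic of steps III and V can be sketched on a toy problem. The Python below is our own illustration (hypothetical names, a trivially integrable dynamics ẋ = a with x(0) = 0, and an explicit gradient standing in for ∂H/∂π of Eq. 3.1); it is not the paper's implementation:

```python
def simulate(a, tf=1.0):
    # toy dynamics x' = a, x(0) = 0  =>  x(tf) = a * tf
    return a * tf

def pmp_like_descent(a0, K=10.0, alpha=0.05, eps=1e-6, iter_max=5000):
    """Mimics steps III-V of Section 3.3 for one terminal constraint:
    minimize phi = (x(tf) - 0.3)^2 subject to T = x(tf) - 0.5 <= 0.
    The penalty multiplier nu is switched between 0 and K exactly as in
    step III; the gradient of phi + nu*T stands in for dH/dpi."""
    a = a0
    for _ in range(iter_max):
        xf = simulate(a)
        nu = K if (xf - 0.5) > 0 else 0.0      # step III: multiplier update
        g = 2.0 * (xf - 0.3) + nu              # gradient of phi + nu*T w.r.t. a
        if abs(g) < eps:                       # step V: stopping test
            break
        a -= alpha * g                         # step V: gradient step
    return a

a_opt = pmp_like_descent(0.9)   # starts infeasible, ends near the interior optimum
```

Because the unconstrained optimum x(t_f) = 0.3 satisfies the terminal constraint here, the multiplier is switched on only during the first iterations; with a constant step size the iterates then contract geometrically, which is the behavior addressed in Remark 7.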

Remark 3. This algorithm requires an initial solution to Problem 3.1, obtained using either a direct or an indirect approach, to determine the parameterization scheme. However, this solution need not be obtained using fine discretization to ensure feasibility. It is only an analysis step that helps propose suitable parameterization candidates.

Remark 4. The parsimonious parameterization is usually problem-specific. It can be extended to more complex problems by using higher-order polynomials instead of linear relations.

Remark 5. Parsimonious parameterization is particularly effective for problems with only a few inputs.

Remark 6. The inputs that activate the path constraints, u_path, can usually be computed directly from the model equations without any optimization 4, 5. This point, which is relevant with respect to satisfying the complementary slackness condition with the parsimonious indirect approach, will be detailed in the second case study.

Remark 7. A constant step size (α = 0.05) is used in this work. Note that an adaptive line search, e.g. based on the Wolfe criterion 54, could speed up the method.


4. Case Studies

In this section, two case studies are presented to investigate the application of the proposed methodology to the dynamic optimization of constrained batch and semi-batch processes. The first problem is the dynamic optimization of a batch binary distillation column that includes terminal purity constraints 1. The second problem deals with the dynamic optimization of a complex fed-batch chemical process in the presence of path constraints.

The two problems are solved using both a direct simultaneous method and various variants of the indirect approach. The CasADi toolbox 55 is used for implementing the direct simultaneous method with the NLP solver IPOPT 32, while the algorithm described in Section 3.3 is used for implementing the parsimonious PMP-based method.

4.1 Batch Binary Distillation in the presence of Terminal Purity Constraints

Consider a batch binary distillation column with three equilibrium plates, in which the components A and B (more volatile) are separated from each other, with the molar amounts B and D in the bottoms and in the distillate tank, respectively. The vapor flowrate in the column is denoted V and the liquid flowrate L. The objective is to maximize the molar amount D in the distillate for the fixed batch time t_f = 3 h, while satisfying the terminal purity constraints of having at least 80 mol % of B in the distillate (x_D) and at most 20 mol % of B in the bottom product (x_B). The internal reflux ratio r = L/V is the only input variable. The schematic of the column can be found in Aydin et al. 1. Assuming perfect mixing on all stages, negligible vapor hold-up, constant vapor flow through the column, total condensation in the condenser with negligible hold-up, constant liquid hold-up on all trays, and constant relative volatility, the optimization problem can be written, in terms of the more volatile component B, as follows:

$$
\begin{aligned}
\max_{r(t)} \;\; & J = D(t_f) \\
\text{s.t.} \;\; & \dot{D} = V(1 - r); \quad D(0) = D_0 \\
& \dot{B} = V(r - 1); \quad B(0) = B_0 \\
& \dot{n}_b = \dot{x}_b B + x_b \dot{B} = V(-y_b + r\,x_1); \quad n_b(0) = n_{b,0} \\
& \dot{n}_1 = \dot{x}_1 H = V(y_b - y_1 + r\,(x_2 - x_1)); \quad n_1(0) = n_{1,0} \\
& \dot{n}_2 = \dot{x}_2 H = V(y_1 - y_2 + r\,(x_3 - x_2)); \quad n_2(0) = n_{2,0} \\
& \dot{n}_3 = \dot{x}_3 H = V(y_2 - y_3 + r\,(y_3 - x_3)); \quad n_3(0) = n_{3,0} \\
& \dot{n}_d = V(1 - r)\,y_3; \quad n_d(0) = n_{d,0} \\
& y_m = \frac{\alpha\,x_m}{1 + (\alpha - 1)\,x_m}; \quad m = b, 1, \ldots, 3 \\
& x_D(t_f) = n_d(t_f)/D(t_f) \ge 0.8 \\
& x_B(t_f) = n_b(t_f)/B(t_f) \le 0.2 \\
& 0 \le r(t) \le 1
\end{aligned}
\tag{4.1}
$$

where α is the relative volatility, B_0 the initial charge, n_{b,0} the number of moles of B in the charge, n_m the number of moles of B in the liquid phase on the mth tray, x_{b,0} the initial mole fraction of B, x_m the liquid-phase mole fraction of B on the mth tray, y_m the vapor-phase mole fraction of B that leaves the mth tray, x_D the mole fraction of B in the distillate tank, x_B the mole fraction of B in the bottoms, y_b the mole fraction of B in the vapor that leaves the bottoms, and H the liquid hold-up on each tray.

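A minimal simulation of the component-B balances in Eq. 4.1 can be sketched as follows (Python; all parameter values, the Euler scheme and the example reflux policy are our own illustrative choices, not the paper's data):

```python
def simulate_column(r_of_t, tf=3.0, n=3000, alpha=2.0, V=10.0, H=1.0,
                    B0=20.0, xb0=0.5, x_tray0=0.5):
    """Explicit-Euler integration of the component-B balances of Eq. 4.1.
    All numerical values are illustrative placeholders, not the paper's data."""
    def y(x):
        # constant-relative-volatility vapor-liquid equilibrium
        return alpha * x / (1.0 + (alpha - 1.0) * x)

    dt = tf / n
    D, B = 0.0, B0
    nb, nd = xb0 * B0, 0.0
    x1 = x2 = x3 = x_tray0
    for k in range(n):
        r = r_of_t(k * dt)
        xb = nb / B
        yb, y1, y2, y3 = y(xb), y(x1), y(x2), y(x3)
        D += dt * V * (1.0 - r)                          # distillate amount
        B += dt * V * (r - 1.0)                          # bottoms amount
        nb += dt * V * (-yb + r * x1)                    # B in the bottoms
        x1 += dt * V / H * (yb - y1 + r * (x2 - x1))     # tray 1
        x2 += dt * V / H * (y1 - y2 + r * (x3 - x2))     # tray 2
        x3 += dt * V / H * (y2 - y3 + r * (y3 - x3))     # top tray
        nd += dt * V * (1.0 - r) * y3                    # B in the distillate
    return D, B, nb, nd, (x1, x2, x3)

# three-arc reflux policy: total reflux, a constant intermediate arc, zero reflux
D, B, nb, nd, trays = simulate_column(
    lambda t: 1.0 if t < 1.0 else (0.7 if t < 2.5 else 0.0))
x_D = nd / D   # distillate purity at the final time
```

The run conserves the total amount of B exactly (the per-step increments cancel algebraically), which is a useful sanity check before wrapping such a simulator into the gradient loop of Section 3.3.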

The optimal solution computed by the direct simultaneous method (DSM), fully parameterized with 500 piecewise-constant input elements, is given in Fig. 2. The solution suggests total reflux at the beginning of the batch to enhance the composition of B at the top of the column. Then, the sensitivity-seeking arc r_sens(t) produces the maximal amount of distillate with the required purity. At the end, a quick third arc with zero reflux allows recovering the high-purity material that is still present at the top of the column. The only path constraints are on the input variable, whereas two terminal inequality constraints are present.

In this problem, since r_sens(t) does not start at 1 and end at 0, we postulate an unknown constant reflux ratio r̄ for this sensitivity-seeking arc. As a result, the decision variables for the parsimonious method are π = (t_1, t_2, r̄)^T. Alternatively, the reflux ratio could be defined as varying linearly between the two switching times, in which case the decision variables would read π = (t_1, t_2, r̄_1, r̄_2)^T.

The solutions given by the two parsimonious PMP-based methods are also shown in Fig. 2. These solutions are fairly close to those of the corresponding fully parameterized method. Note that the addition of one parameter (r̄_1 and r̄_2 instead of r̄) increases the optimal cost only slightly. The parsimonious input parameterization scheme results in a significant reduction of the computational load of the optimization problem. Fig. 3 shows that the use of parsimonious parameterization combined with the indirect approach remarkably reduces the computational time, in particular when a high discretization level is needed.

[Figure 2: four panels showing the reflux ratio r, the distillate amount D, the distillate composition x_D and the bottoms composition x_B versus time over the 3 h batch, with optimal costs J = 40.31 (fully parameterized DSM), J = 40.18 (parsimonious PMP, constant r̄_sens) and J = 40.25 (parsimonious PMP, linear r̄_sens).]

Fig. 2 Optimal results for Problem 4.1 for fully parameterized DSM and two variants of parsimonious PMP. All methods result in very similar optimal cost values (distillate D) and satisfaction of the terminal purity constraints.

[Figure 3: CPU time in seconds (logarithmic scale) versus input discretization level (up to 500) for the fully parameterized DSM and the two parsimonious PMP variants (constant and linear r̄_sens).]

Fig. 3 Computational times for Problem 4.1 using various numerical methods.

4.2 Semi-batch Reactor in the presence of Input and Path Constraints

Consider the production of n-tridecanal (nC13al) from 1-dodecene (nC12en), which reacts with syngas (H2 + CO). A semi-batch tank reactor with perfect mixing and gas feeding is assumed. The input variables are the reactor temperature T(t) and the feedrate u(t) of syngas. The gas and liquid phases are modeled as ideally mixed phases. The goal is to maximize the amount of tridecanal (nC13al) at the end of the batch. For safety reasons, the total pressure should not exceed specified limits. The reaction network is given in Fig. 4 1, 56.


Fig. 4 The reaction pathway for hydroformylation 56.

The dynamic optimization problem can be formulated as follows:

$$
\begin{aligned}
\max_{u(t),\,T(t)} \;\; & J = c_{nC13al}(t_f) \\
\text{s.t.} \;\; & \dot{c}_{liq,i} = (ja)_i + c_{cat}\,\gamma_{cat} \sum_{j \in R} \nu_{j,i}\,r_j; \quad c_{liq,i}(0) = c_{liq,i,0}; \quad i = 1, 2, \ldots, 7 \\
& \dot{p}_i = \frac{R_g T}{V_{gas}}\left( w_i\,u - V_{liq}\,(ja)_i \right); \quad p_i(0) = p_{i,0}; \quad w_i = 0.5\ \mathrm{mol/mol}; \quad i = 1, 2 \\
& (ja)_i = \begin{cases} (k_L a)_i \left( c_i^* - c_{liq,i} \right) & \text{if } i \in \text{gas}, \; i = 1, 2 \\ 0 & \text{else}, \; i = 3, 4, \ldots, 7 \end{cases} \\
& r_1 = \frac{k_1(T)\,c_{nC12en}\,c_{H2}\,c_{CO}}{1 + K_{1,1}\,c_{nC12en} + K_{1,2}\,c_{nC13al} + K_{1,3}\,c_{H2}} \\
& r_2 = \frac{k_2(T)\left( c_{nC12en} - c_{iC12en}/K_{eq,2} \right)}{1 + K_{2,1}\,c_{nC12en} + K_{2,2}\,c_{iC12en}} \\
& r_3 = \frac{k_3(T)\left( c_{nC12en}\,c_{H2} - c_{nC12an}/K_{eq,3} \right)}{1 + K_{3,1}\,c_{nC12en} + K_{3,2}\,c_{nC12an} + K_{3,3}\,c_{H2}} \\
& r_4 = k_4(T)\,c_{iC12en}\,c_{H2}; \quad
r_5 = k_5(T)\,c_{iC12en}\,c_{H2}\,c_{CO}; \quad
r_6 = k_6(T)\,c_{nC12en}\,c_{H2}\,c_{CO} \\
& k_j(T) = k_{0,j} \exp\left( -\frac{E_{A,j}}{R_g}\left( \frac{1}{T} - \frac{1}{T_{ref}} \right) \right) \\
& K_{eq,j} = \exp\left( \frac{-\Delta_R G_j}{R_g T} \right); \quad -\Delta_R G_j = g_{0,j} + g_{1,j}\,T + g_{2,j}\,T^2 \\
& c_{cat} = \frac{c_{cat,tot}}{1 + K_{cat,1}\,c_{CO} + K_{cat,2}\,c_{CO}/c_{H2}} \\
& c_i^* = \frac{p_i}{H_i}; \quad H_i = H_{i,0} \exp\left( \frac{-E_{A,H,i}}{R_g T} \right) \\
& p_{total}(t) = p_{H2}(t) + p_{CO}(t); \quad 1\ \mathrm{bar} \le p_{total}(t) \le 20\ \mathrm{bar} \\
& 0 \le u(t); \quad 368.15\ \mathrm{K} \le T(t) \le 388.15\ \mathrm{K}
\end{aligned}
\tag{4.2}
$$

where i designates the component index (i = 1, 2, …, 7 for the liquid phase and i = 1, 2 for the gas phase), j represents the reaction index, and R is the reaction set. The reader is referred to Hentschel et al. 56 for detailed information related to the process and the model parameters.

The solution obtained using the DSM is shown in Fig. 5. This solution suggests that the optimal

temperature starts at the minimal level T_min to favor the desired reaction. Then, the temperature follows the sensitivity-seeking arc T_sens(t) to speed up the production of nC13al by boosting the forward reaction r_1, while restraining the equilibrium reaction r_2 from proceeding to the right. Finally, the optimal temperature follows the upper level T_max to suppress the undesired reactions. Hence, the optimal temperature profile consists of 3 arcs. Furthermore, it is meaningful to parameterize T_sens(t) linearly between the two adjustable switching times. Alternatively, T_sens(t) could also be parameterized using a quadratic interpolation between the two switching times.

Since the pressure constraint is active throughout the batch, the optimal feedrate of syngas can be determined by tracking the pressure constraint. This is done analytically using Eq. 4.2 as follows: setting ṗ_total(t) = ṗ_1(t) + ṗ_2(t) = 0 gives u_path(t) = V_liq ((ja)_1(t) + (ja)_2(t)).

The cost values obtained by the different methods are very similar to each other, as shown in Fig. 5. Note that increasing the order of the polynomial interpolation in the sensitivity-seeking arc results in a slightly better cost value. Fig. 6 compares the computational times required to solve the problem using the various approaches. The two (linear and quadratic) parsimonious indirect methods are much faster than the fully parameterized approaches.

It was observed that the proposed indirect approach combined with parsimonious parameterization schemes yields optimal cost values and switching structures very similar to those of the commercially available direct approaches. As Remark 6 suggests, the inputs that activate the inequality state path constraints can usually be computed by solving a system of ODEs. Moreover, the alternative parameterization of the sensitivity-seeking inputs reduces the computational load of the dynamic optimization problem, and therefore the required CPU time decreases significantly.

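The constraint-tracking feedrate can be illustrated with a small simulation (Python; the lumped constant R_g T/V_gas, the transfer and Henry coefficients and the toy consumption term are all our own illustrative values, not the model parameters of Hentschel et al.): choosing u(t) = V_liq ((ja)_1 + (ja)_2) keeps the total pressure exactly constant, since w_1 + w_2 = 1.

```python
def ja(kla, c_star, c_liq):
    # gas-liquid transfer rate per unit liquid volume: (ja)_i = (kLa)_i (c_i* - c_liq,i)
    return kla * (c_star - c_liq)

def u_path(V_liq, ja1, ja2):
    """Feedrate that keeps p_total constant: with w1 + w2 = 1,
    p_total' = 0  <=>  u = V_liq * ((ja)_1 + (ja)_2)."""
    return V_liq * (ja1 + ja2)

def simulate_pressure_tracking(tf=10.0, n=10000):
    # all constants below are illustrative placeholders, not model parameters
    kp = 1.0                 # lumped Rg*T/Vgas factor
    V_liq = 1.0
    kla = (5.0, 5.0)         # (kLa)_i for H2 and CO
    He = (10.0, 12.0)        # Henry coefficients
    p = [10.0, 10.0]         # partial pressures, p_total = 20
    c = [0.0, 0.0]           # dissolved-gas concentrations
    dt = tf / n
    u = 0.0
    for _ in range(n):
        j = [ja(kla[i], p[i] / He[i], c[i]) for i in range(2)]
        u = u_path(V_liq, j[0], j[1])          # constraint-tracking feedrate
        rate = 0.1 * c[0] * c[1]               # toy consumption by reaction
        for i in range(2):
            p[i] += dt * kp * (0.5 * u - V_liq * j[i])
            c[i] += dt * (j[i] - rate)
    return p, c, u

p, c, u = simulate_pressure_tracking()
```

In the case study this feedrate activates the upper pressure bound; here the sketch merely verifies that the closed-form u_path holds p_total at its initial value while the liquid phase absorbs and consumes the gas.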
[Figure 5: syngas feedrate u, temperature T, tridecanal concentration c_nC13al and total pressure p_total versus time over the 80 min batch, with optimal costs J = 0.596 mol/L (fully parameterized DSM), J = 0.594 mol/L (parsimonious PMP, linear T_sens) and J = 0.5953 mol/L (parsimonious PMP, quadratic T_sens).]

Fig. 5 Optimal solutions for Problem 4.2. All methods result in very similar optimal cost values c_nC13al(t_f). Note that the optimal inputs are computed in order to activate the state path constraint.

[Figure 6: CPU time in seconds (logarithmic scale) versus discretization level (100-500) for the fully parameterized DSM and the two parsimonious PMP variants (linear and quadratic T_sens).]

Fig. 6 Comparison of computational times to solve Problem 4.2 with various methods.


5. Conclusions

This paper has proposed an alternative indirect optimization algorithm that parameterizes the sensitivity-seeking inputs in a parsimonious way. The performance of the proposed algorithm was compared with that of a fully parameterized DSM. The proposed method solves the corresponding problems much faster, while still producing solutions with very similar cost values. Indirect parsimonious optimization methods possess obvious advantages for (i) complex applications such as stochastic and multi-level optimization, and (ii) real-time applications where fast implementation is required, such as nonlinear model predictive control and multi-stage online algorithms.

6. Acknowledgements

The financial support from the Deutsche Forschungsgemeinschaft (DFG) under the grant SFB/TRR 63 is gratefully acknowledged. We would like to thank the editors for the invitation to the special issue for the ESCAPE 2017 conference. In addition, EA acknowledges fruitful discussions with Diogo Rodrigues from EPFL.


References

1. Aydin, E.; Bonvin, D.; Sundmacher, K., Dynamic optimization of constrained semi-batch processes using Pontryagin’s minimum principle—An effective quasi-Newton approach. Computers & Chemical Engineering 2017, 99, 135-144. 2. Bonvin, D.; Srinivasan, B.; Hunkeler, D., Control and Optimization of Batch Processes: Improvement of Process Operation in the Production of Specialty Chemicals. IEEE Control Systems Magazine 2006, 26, (6), 34-45. 3. Bonvin, D.; Srinivasan, B.; Ruppen, D. Dynamic Optimization in the Batch Chemical Industry; In Preprints CPC6, Tucson, Arizona, 283–307, 2001. 4. Srinivasan, B.; Palanki, S.; Bonvin, D., Dynamic optimization of batch processes: I. Characterization of the nominal solution. Computers & Chemical Engineering 2003, 27, (1), 1-26. 5. Aydin, E.; Bonvin, D.; Sundmacher, K., NMPC using Pontryagin’s minimum principle—Application to a two-phase semi-batch hydroformylation reactor under uncertainty. Computers & Chemical Engineering 2018, 108, 47-56. 6. Bonvin, D., Optimal operation of batch reactors—a personal view. Journal of Process Control 1998, 8, (5), 355-368. 7. Bryson, A. E., Applied Optimal Control: Optimization, Estimation and Control. CRC Press: 1975. 8. Biegler, L. T., Nonlinear Programming: Concepts, Algorithms, and Applications to Chemical Processes. SIAM: 2010; Vol. 10. 9. Welz, C.; Srinivasan, B.; Marchetti, A.; Bonvin, D.; Ricker, N. Validation of a Solution Model for the Optimization of a Binary Batch Distillation Column, In Proceedings of the 2005 American Control Conference, IEEE, 2005; pp 3127-3132. 10. Welz, C.; Srinivasan, B.; Marchetti, A.; Bonvin, D.; Ricker, N., Evaluation of input parameterization for batch process optimization. AIChE Journal 2006, 52, (9), 3155-3163. 11. Schlegel, M.; Stockmann, K.; Binder, T.; Marquardt, W., Dynamic optimization using adaptive control vector parameterization. Computers & Chemical Engineering 2005, 29, (8), 1731-1751. 12.
Aydin, E.; Bonvin, D.; Sundmacher, K., Dynamic Optimization of Constrained Semi-Batch Processes using Pontryagin’s Minimum Principle and Parsimonious Parameterization. Computer Aided Chemical Engineering, Elsevier: 2017; Vol. 40, pp 2041-2046. 13. Vassiliadis, V. S.; Sargent, R. W. H.; Pantelides, C. C., Solution of a class of multistage dynamic optimization problems. 2. Problems with path constraints. Industrial & Engineering Chemistry Research 1994, 33, (9), 2123-2133. 14. Schlegel, M.; Marquardt, W., Detection and exploitation of the control switching structure in the solution of dynamic optimization problems. Journal of Process Control 2006, 16, (3), 275-290. 15. Schlegel, M.; Marquardt, W. Direct Sequential Dynamic Optimization with Automatic Switching Structure Detection, In Proc. of DYCOPS, 2004. 16. Fu, J.; Faust, J. M. M.; Chachuat, B.; Mitsos, A., Local optimization of dynamic programs with guaranteed satisfaction of path constraints. Automatica 2015, 62, 184-192. 17. Cervantes, A.; Biegler, L. T., Large‐scale DAE optimization using a simultaneous NLP formulation. AIChE Journal 1998, 44, (5), 1038-1050. 18. Biegler, L. T.; Cervantes, A. M.; Wächter, A., Advances in simultaneous strategies for dynamic process optimization. Chemical Engineering Science 2002, 57, (4), 575-593. 19. Biegler, L. T., An overview of simultaneous strategies for dynamic optimization. Chemical Engineering and Processing: Process Intensification 2007, 46, (11), 1043-1053. 20. Keil, F., Scientific Computing in Chemical Engineering II: Simulation, Image Processing, Optimization, and Control. Springer Science & Business Media: 1999.


21. Schäfer, A.; Kühl, P.; Diehl, M.; Schlöder, J.; Bock, H. G., Fast reduced multiple shooting methods for nonlinear model predictive control. Chemical Engineering and Processing: Process Intensification 2007, 46, (11), 1200-1214. 22. Bock, H. G.; Diehl, M. M.; Leineweber, D. B.; Schlöder, J. P., A Direct Multiple Shooting Method for Real-Time Optimization of Nonlinear DAE Processes. In Nonlinear Model Predictive Control, Allgöwer, F.; Zheng, A., Eds. Birkhäuser Basel: Basel, 2000; pp 245-267. 23. Diehl, M.; Bock, H. G.; Schlöder, J. P.; Findeisen, R.; Nagy, Z.; Allgöwer, F., Real-time optimization and nonlinear model predictive control of processes governed by differential-algebraic equations. Journal of Process Control 2002, 12, (4), 577-585. 24. Diehl, M.; Bock, H. G.; Diedam, H.; Wieber, P.-B., Fast Direct Multiple Shooting Algorithms for Optimal Robot Control. In Fast Motions in Biomechanics and Robotics, Springer: 2006; pp 65-93. 25. Findeisen, R.; Allgöwer, F.; Biegler, L. T., Assessment and Future Directions of Nonlinear Model Predictive Control. Springer: 2007; Vol. 358. 26. Cannon, M.; Liao, W.; Kouvaritakis, B., Efficient MPC optimization using Pontryagin's minimum principle. International Journal of Robust and Nonlinear Control 2008, 18, (8), 831-844. 27. Fletcher, R., Practical Methods of Optimization. John Wiley & Sons: 2013. 28. Sager, S., Numerical Methods for Mixed-integer Optimal Control Problems. Der andere Verlag Tönning, Lübeck, Marburg: 2005. 29. Boggs, P. T.; Tolle, J. W., Sequential quadratic programming for large-scale nonlinear optimization. Journal of Computational and Applied Mathematics 2000, 124, (1), 123-137. 30. Goldsmith, M. J., Sequential Quadratic Programming Methods based on Indefinite Hessian Approximations. Stanford University: 1999. 31. Wright, S. J., Primal-dual Interior-point Methods. SIAM: 1997. 32. Wächter, A.; Biegler, L. 
T., On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming 2006, 106, (1), 25-57. 33. Cannon, M., Efficient nonlinear model predictive control algorithms. Annual Reviews in Control 2004, 28, (2), 229-237. 34. Huang, R.; Zavala, V. M.; Biegler, L. T., Advanced step nonlinear model predictive control for air separation units. Journal of Process Control 2009, 19, (4), 678-685. 35. Zavala, V. M.; Laird, C. D.; Biegler, L. T., A fast moving horizon estimation algorithm based on nonlinear programming sensitivity. Journal of Process Control 2008, 18, (9), 876-884. 36. Jäschke, J.; Yang, X.; Biegler, L. T., Fast economic model predictive control based on NLPsensitivities. Journal of Process Control 2014, 24, (8), 1260-1272. 37. Suwartadi, E.; Kungurtsev, V.; Jäschke, J., Sensitivity-based economic NMPC with a pathfollowing approach. Processes 2017, 5, (1), 8. 38. Wolf, I. J.; Marquardt, W., Fast NMPC schemes for regulatory and economic NMPC–A review. Journal of Process Control 2016, 44, 162-183. 39. Biegler, L.; Yang, X.; Fischer, G., Advances in sensitivity-based nonlinear model predictive control and dynamic real-time optimization. Journal of Process Control 2015, 30, 104-116. 40. Schlegel, M.; Marquardt, W., Adaptive switching structure detection for the solution of dynamic optimization problems. Industrial & Engineering Chemistry Research 2006, 45, (24), 8083-8094. 41. Thomas, S.; Pushpavanam, S.; Seidel-Morgenstern, A., Performance improvements of parallel− series reactions in tubular reactors using reactant dosing concepts. Industrial & Engineering Chemistry Research 2004, 43, (4), 969-979. 42. Jaspan, R.; Coull, J., Trajectory optimization techniques in chemical reaction engineering: I. Boundary condition iteration methods. AIChE Journal 1971, 17, (1), 111-115. 43. Visser, E.; Srinivasan, B.; Palanki, S.; Bonvin, D., A feedback-based implementation scheme for batch process optimization. 
Journal of Process Control 2000, 10, (5), 399-410.


44. Kim, N.; Rousseau, A., Sufficient conditions of optimal control based on Pontryagin’s minimum principle for use in hybrid electric vehicles. Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 2012, 226, (9), 1160-1170. 45. Palanki, S., Chapter 16: The Application of Pontryagin's Minimum Principle for Endpoint Optimization of Batch Processes. In Control and Optimization with Differential-Algebraic Constraints, pp 327-337. 46. Palanki, S.; Vemuri, J., Optimal operation of semi-batch processes with a single reaction. International Journal of Chemical Reactor Engineering 2005, 3, (1). 47. Ali, U.; Wardi, Y., Multiple shooting technique for optimal control problems with application to power aware networks. IFAC-PapersOnLine 2015, 48, (27), 286-290. 48. Zhang, S.; Xiong, R.; Sun, F., Model predictive control for power management in a plug-in hybrid electric vehicle with a hybrid energy storage system. Applied Energy 2017, 185, Part 2, 1654-1662. 49. Chachuat, B. Nonlinear and Dynamic Optimization: From Theory to Practice, Lecture Notes, 2007. 50. Hannemann, R.; Marquardt, W., Continuous and discrete composite adjoints for the Hessian of the Lagrangian in shooting algorithms for dynamic optimization. SIAM Journal on Scientific Computing 2010, 31, (6), 4675-4695. 51. Hartl, R. F.; Sethi, S. P.; Vickson, R. G., A survey of the maximum principles for optimal control problems with state constraints. SIAM Review 1995, 37, (2), 181-218. 52. Harvey Jr, P. S.; Gavin, H. P.; Scruggs, J. T., Optimal performance of constrained control systems. Smart Materials and Structures 2012, 21, (8), 085001. 53. Hannemann-Tamás, R.; Marquardt, W., How to verify optimal controls computed by direct shooting methods? – A tutorial. Journal of Process Control 2012, 22, (2), 494-507. 54. Wolfe, P., Convergence Conditions for Ascent Methods. II: Some Corrections. SIAM Review 1971, 13, (2), 185-188. 55. Andersson, J.; Diehl, M. 
Dynamic Optimization with CasADi, In Proc. of IEEE Conference on Decision and Control (CDC), 2012; pp 681-686. 56. Hentschel, B.; Kiedorf, G.; Gerlach, M.; Hamel, C.; Seidel-Morgenstern, A.; Freund, H.; Sundmacher, K., Model-based identification and experimental validation of the optimal reaction route for the hydroformylation of 1-dodecene. Industrial & Engineering Chemistry Research 2015, 54, (6), 1755-1765.


TOC Abstract:
