
Online Data Reconciliation with Poor Redundancy Systems

Flavio Manenti,* Maria Grazia Grottoli, and Sauro Pierucci

CMIC Department "Giulio Natta", Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano, Italy

ABSTRACT: The paper deals with the integrated solution of different model-based optimization levels to face the problem of inferring and reconciling online plant measurements in practice, under conditions of poor measurement redundancy due to a lack of instrumentation installed in the field. The novelty of the proposed computer-aided process engineering (CAPE) solution lies in the simultaneous integration of different optimization levels: (i) the data reconciliation based on a detailed process simulation; (ii) the introduction and estimation of certain adaptive parameters, to match the current process conditions as well as to confer a certain generality on the approach; and (iii) the use of a set of efficient optimizers to improve plant operations. The online feasibility of the proposed CAPE solution is validated on a large-scale sulfur recovery unit (SRU) of an oil refinery.

1. INTRODUCTION

Process data reconciliation and its benefits have been widely studied in the literature for a long time,1–3 and, nowadays, research groups are focusing their activities on dynamic data reconciliation and on the search for robust methods to detect gross errors.4–9 Nevertheless, beyond these and other open issues, it is not immediately obvious why even basic data reconciliation is not yet used everywhere in the process industry.10 A premise is needed to understand this. Many industrial plants and processes within oil refineries were constructed with the objective of producing the maximum amount of chemical products and commodities, without accounting for the need to reduce emissions, save energy and resources, and safeguard the environment. Certain refinery processes, above all the ones that do not increase the net present value of the plant, have been designed, engineered, and constructed so as to reduce instrumentation costs. A typical example is the case of sulfur recovery units (SRUs), which do not directly increase the net profit margin of the oil refinery (elemental sulfur has a low market price11), but are becoming the key point in handling the pollutant emissions at the stacks, especially in view of the increasingly stringent environmental regulations of several industrialized countries. These processes are characterized by very poor instrumentation, such that, even though it seems unbelievable nowadays, control and field operators themselves do not really know the effective plant operating conditions. Clearly, this situation hinders the application of any type of advanced solution to manage and optimize operations, from data reconciliation to model predictive control and real-time optimization, with all their related benefits.12–14 The present paper provides a possible computer-aided process engineering (CAPE) solution to bridge the current gap between theory and industrial practice, without the need to install any additional instrumentation. The key point is the integration of different optimization levels: to increase the level of redundancy of the overall system by means of a detailed model, to adapt the model to the current plant conditions by estimating certain selected parameters, and to optimize the plant performances based on the inferred and reconciled data (sections 2 and 3). The proposed CAPE solution has been tested on a distributed control system (DCS) of an operating SRU to check its

online feasibility and its effectiveness in promptly providing a coherent picture of the plant operating conditions, even when the few raw data available are affected by gross errors.

2. ARCHITECTURE OF THE CAPE SOLUTION

Interactive software, hardware, and information technology systems are qualitatively represented in Figure 1. Certain reconciled datasets are acquired from the historical server of the DCS and used to initialize the so-called "Error-In-Variable" method15–18 (EVM, described later), which is an optimization problem aimed at estimating the adaptive parameters and, hence, adapting the detailed process simulation to the current plant operating conditions. The adaptive parameters are used to simulate the process and reproduce a reliable and coherent picture of the operating conditions; in this case, the data used to initialize the process simulation are the fresh data coming from the field (not yet reconciled). The adapted process simulation is then used as the basis to solve the optimization problem of data reconciliation. The use of commercial simulators forces us to adopt a sequential strategy to solve the data reconciliation; by sequential strategy, we mean that the process simulation is solved iteratively, at each step of the optimizer within the reconciler. Such an approach is numerically similar to the feasible-path (or partially discretized) approach adopted in dynamic optimization.19,20 Since the possible gross errors affecting the dataset must be detected, very robust methodologies have to be used to accomplish this step, at the possible cost of heavy computational times. Once the fresh dataset has been reconciled, it is collected in the historical database and made available for the next executions of the EVM method and the re-estimation of the adaptive parameters. With a reconciled dataset it is then possible to perform an effective plant optimization, called real-time optimization (RTO); in this case, efficient optimizers are needed, since possible gross errors have already been removed by the robust methodologies.
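The cyclic workflow just described can be summarized, purely schematically, as the following loop. All names (runEVM, simulateProcess, reconcile, optimizeEconomics, the DataSet and Parameters types) are hypothetical placeholders introduced for this sketch and do not correspond to the actual PRO/II or BzzMath interfaces used by the authors:

```cpp
#include <vector>
#include <iostream>

// Hypothetical placeholder types: a dataset is simply a vector of tag values.
using DataSet    = std::vector<double>;
using Parameters = std::vector<double>;

// Illustrative stubs only: in the real CAPE solution these steps are carried out
// by the EVM optimizer, the PRO/II simulation, the robust reconciler, and the RTO.
Parameters runEVM(const std::vector<DataSet>& reconciledHistory) {
    (void)reconciledHistory;
    return Parameters(3, 1.0);                    // pretend three adaptive parameters
}
DataSet simulateProcess(const Parameters&, const DataSet& init) {
    return init;                                  // pretend the simulator returns a full plant state
}
DataSet reconcile(const DataSet& freshRawData, const Parameters&) {
    return freshRawData;                          // pretend the data were already consistent
}
void optimizeEconomics(const DataSet&, const Parameters&) {
    std::cout << "RTO executed on reconciled data\n";
}

// One execution of the cycle described in the text.
void capeCycle(std::vector<DataSet>& history, const DataSet& freshRawData) {
    Parameters theta   = runEVM(history);                      // adapt the model to the plant
    DataSet simulated  = simulateProcess(theta, freshRawData); // coherent picture of operations
    DataSet reconciled = reconcile(freshRawData, theta);       // robust data reconciliation
    history.push_back(reconciled);                             // feed future EVM executions
    optimizeEconomics(reconciled, theta);                      // economic optimization (RTO)
    (void)simulated;
}

int main() {
    std::vector<DataSet> history = { DataSet(5, 1.0) };
    capeCycle(history, DataSet(5, 1.1));
    return 0;
}
```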


Figure 1. Hardware–software architecture of the proposed computer-aided process engineering (CAPE) solution.

2.1. Detailed Process Simulation. From a practical point of view, the traditional approach is to develop a detailed simulation to properly infer all the missing measures. Since SRUs usually involve a thermal reaction furnace together with a series of catalytic (Claus) converters, complex kinetic schemes coupled with computational fluid dynamics (CFD) studies could be adopted in their modeling. Unfortunately, a CFD approach is well-known to be computationally heavy and clearly ineffective for online applications (section 3); thus, certain specific and well-established correlations are adopted to reasonably characterize the behavior of these complex reactors (see Signor et al.11). These correlations have been fully integrated in the commercial process simulator PRO/II21 by Simsci-Esscor/Invensys; hence, PRO/II has been used to model all of the remaining unit operations and the ancillary equipment. Nevertheless, a detailed simulation is not enough to properly reconcile raw datasets, since the current conditions of the real plant may be far from the ideal operating conditions of the process simulation, which, however detailed, usually cannot account for certain phenomena (fouling, cleanliness, efficiency, etc.). Thus, there is the need to introduce adaptive parameters and to estimate them to match the real plant conditions and follow its medium- and long-term evolution. From a more general point of view, these adaptive parameters may be used to adapt the process simulation not only to different operating conditions, but also to different plants with similar layouts. Finally, there is the need to couple the detailed and adaptive simulation with the most appealing numerical libraries, so as to benefit from both robust and efficient optimizers. Robust optimizers are essential to identify gross errors within the datasets coming from the field/DCS and, if possible, to correct them; efficient solvers are required to accelerate the overall procedure and ensure the online effectiveness of the plant optimization based on the reconciled and inferred measures. The complexity of the described solution is qualitatively reported in Figure 1. Some authors22 have underlined that optimization problems subject to process model constraints are better solved by means of simultaneous strategies, whereas the use of a commercial package such as PRO/II forces one to adopt sequential strategies.19,20 Nevertheless, as recently discussed by Signor et al.,11 the wide diffusion of commercial process simulators in the process industry may lead to a series of benefits:
• The mathematical model can cover any level of detail, according to the model libraries offered by the commercial simulator. The selected degree of detail should be a good compromise between process characterization and computational effort.
• Certain consolidated solutions, especially those dictated by practical experience and already implemented in the most common commercial process simulators, can be successfully exploited in the solution of data reconciliation problems.
• When the commercial simulator is the same one adopted for process design, the data reconciliation may provide feedback on both the instrumentation and the process design itself (plant debottlenecking, revamping, etc.).
• Engineering companies and production sites usually already hold the licenses of process simulators (no additional charges for software licenses).
• Last, but not least, the wide diffusion of commercial simulators in the process industries may confer on the proposed approach a certain generality, at least for SRUs having similar layouts (almost 80% of the SRUs operating worldwide).

2.2. Optimization Levels. Three different optimization levels of the process control hierarchy must be solved and coupled with the aforementioned detailed process simulation when we face the condition of poor instrumentation installed in the field: data reconciliation, adaptive parameter estimation, and economic plant optimization based on reliable data.

2.2.1. Data Reconciliation. Data reconciliation is a useful tool to reconcile the measures so as to fulfill the material and energy balances of every process unit and plant subsection characterized by an adequate measurement redundancy. The idea of reconciling measures is quite old and dates back to the early 1950s. Nevertheless, the recent implementation of advanced process control23–25 and optimization techniques, as well as the advances in information technology, which is the fundamental support of enterprise resource planning and the decision-making process,26,27 are driving the interest of the process industries in the performance and use of robust tools for process data analysis and reconciliation.28,29


To provide a coherent picture of the plant, the objective is to minimize a function F, which is usually the weighted sum of squared residuals between the measured and reconciled variable values:

$$\min_{x} F = \sum_i \mu_i \left( m_i - x_i \right)^2 \qquad (1)$$

subject to

$$g(x) = 0, \qquad h(x) \le 0$$

where g(x) = 0 and h(x) ≤ 0 are equality and inequality constraints (model equations of the process); μi is the weight vector, usually the inverse of the variance or standard deviation; xi is the reconciled value; and mi is the measured value.

Linear data reconciliation is usually employed in facilities and utilities for steam generation, where the only component is water (other components are practically negligible). These processes, typical of the power field and of the facilities and utilities sections of the oil and gas field, do not require any in-line analyzer to measure molar compositions: the overall mass (and energy) balances are sufficient to characterize the process. Apart from a stream compensation whose objective is to equalize the vapor flow rates with the liquid ones, the data reconciliation problem can be easily defined as follows:

$$\min_{w_{rec}} F = \sum_i \mu_i \left( w_{i,meas} - w_{i,rec} \right)^2 \qquad (2)$$

subject to

$$g(w_{rec}) = 0, \qquad h(w_{rec}) \le 0$$
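For the purely linear, fully measured case, in which the constraints reduce to overall mass balances of the form A w_rec = 0 and no inequality constraints are active, problem 2 admits the classical closed-form solution (a standard textbook result recalled here only for illustration; the matrices A and W are not part of the paper's notation):

$$\hat{w}_{rec} = w_{meas} - W^{-1} A^{T} \left( A W^{-1} A^{T} \right)^{-1} A\, w_{meas}, \qquad W = \mathrm{diag}(\mu_i)$$

This is the form usually exploited by linear reconciliation tools, since it requires only a single factorization of the small matrix A W⁻¹ Aᵀ rather than an iterative optimization.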

where wi is the mass flow rate. Depending on the redundancy, or degree of redundancy (DOR), which is defined as

$$\mathrm{DOR} = \mathrm{equations} + \mathrm{measures} - \mathrm{reconciled} \qquad (3)$$

where "reconciled" denotes the number of variables to be reconciled (a numerical illustration is given after the list below), five different situations can (globally or locally) occur:
• The total amount of measures and equations is significantly larger than the number of process flow rates. The data reconciliation can be regularly carried out.
• The total amount of measures and equations is slightly larger than the number of process flow rates. In such a condition of reduced redundancy, the data reconciliation can be regularly carried out, even though it may be difficult to detect possible outliers.30,31
• The total amount of measures and equations is equal to the number of process flow rates. The reconciliation becomes critical, since the presence of one outlier makes the reconciliation infeasible and may lead to the so-called "masking" and "swamping" effects.7,9 However, the data reconciliation can still be carried out.
• The total amount of measures and equations is slightly smaller than the number of process flow rates (and subject to certain conditions not analyzed here). The reconciliation problem is transformed into a coaptation problem. Under the assumption of a total absence of outliers, the missing data can generally be evaluated.
• The total amount of measures and equations is significantly smaller than the number of process flow rates. No actions are currently possible in this case, except for the data reconciliation of certain specific subsections that locally agree with one of the previous points.

It is worth highlighting that a feasible reconciliation can be guaranteed by ensuring an adequate measurement distribution in the field and the development of an appropriate process control scheme. Actually, even though the overall DOR > 0, the data reconciliation could be infeasible for certain plant subsections; analogously, when DOR < 0, certain plant subsections could still be reconciled. In other words, a positive DOR is a necessary, but not sufficient, condition for data reconciliation.
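As a purely illustrative application of eq 3 (this example is not taken from the paper), consider a mixing node where two streams combine into a third one, so that a single overall mass balance is available among three flow rates to be reconciled:

$$w_1 + w_2 - w_3 = 0$$
$$\text{all three flows measured:}\quad \mathrm{DOR} = 1 + 3 - 3 = 1$$
$$\text{two flows measured:}\quad \mathrm{DOR} = 1 + 2 - 3 = 0$$
$$\text{one flow measured:}\quad \mathrm{DOR} = 1 + 1 - 3 = -1$$

With DOR = 1 there is one degree of redundancy and the measures can actually be adjusted; with DOR = 0 the unmeasured flow can only be computed from the balance and a single outlier makes the problem critical; with DOR = −1 the subsection cannot be treated at all.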

Contrary to utilities and steam generation processes, with multicomponent process flow rates the data reconciliation problem is extended from the linear case to the bilinear case. Actually, it is necessary to reconcile not only the overall material flow rates, but also the component mass rates. This unavoidably requires certain in-line analyzers, besides the flow, pressure, and temperature measures. The data reconciliation problem assumes the following form:

$$\min_{w_{rec},\, n_{rec}} F = \sum_i \mu_i \left( n_{i,meas} \cdot w_{i,meas} - n_{i,rec} \cdot w_{i,rec} \right)^2 \qquad (4)$$

subject to

$$g(n_{rec}, w_{rec}) = 0, \qquad h(n_{rec}, w_{rec}) \le 0$$

According to several authors,1,3 it is useful to keep the problem linear, when possible, from a computational point of view. Thus, introducing the overall component flow rate (Ni = ni · wi), it is possible to write the following bilinear objective function:

$$\min_{w_{rec},\, N_{rec}} F = \sum_i \mu_i \left( w_{i,meas} - w_{i,rec} \right)^2 + \sum_i \nu_i \left( N_{i,meas} - N_{i,rec} \right)^2 \qquad (5)$$

subject to

$$g(N_{rec}, w_{rec}) = 0, \qquad h(N_{rec}, w_{rec}) \le 0$$

where νi is a weight vector. The formulation described by eq 5 allows one to overcome the initial nonlinearities of the objective function by means of specific data pre- and post-processing. The same procedure can be followed, for example, in the simultaneous solution of mass and energy balances:

$$\min_{w_{rec},\, T_{rec}} F = \sum_i \mu_i \left( w_{i,meas} \cdot c_p \cdot T_{i,meas} - w_{i,rec} \cdot c_p \cdot T_{i,rec} \right)^2 \qquad (6)$$

subject to

$$g(T_{rec}, w_{rec}) = 0, \qquad h(T_{rec}, w_{rec}) \le 0$$

by introducing $H_i = w_i \cdot c_p \cdot T_i$ and, therefore, $\tilde{H}_i = H_i / w_i$. The reconciliation problem becomes

$$\min_{w_{rec},\, H_{rec}} F = \sum_i \mu_i \left( w_{i,meas} - w_{i,rec} \right)^2 + \sum_i \nu_i \left( \tilde{H}_{i,meas} - \tilde{H}_{i,rec} \right)^2 \qquad (7)$$


subject to

$$g(T_{rec}, w_{rec}) = 0, \qquad h(T_{rec}, w_{rec}) \le 0$$

Sometimes, certain problems require the simultaneous reconciliation of overall mass, component, and energy balances. Through the aforementioned devices, it is possible to transform the original problem into a multilinear data reconciliation. Theoretically, there is no limit to the number of linear terms.

Contrary to the aforementioned techniques, nonlinear data reconciliation is quickly gaining interest in the process industry, especially since the coupling of a reconciliation tool with an existing process simulation package may lead to several advantages. First of all, there is the possibility to base the entire reconciliation on detailed mathematical models, which go beyond the basic mass and energy balances. This may strongly increase the data reconciliation accuracy and significantly support the detection of gross errors and masked outliers. The formulation of nonlinear data reconciliation problems corresponds to the aforementioned cases, with a relevant difference in the constraints: physical-chemical properties and thermodynamic, equilibrium, and hydraulic relations, all deriving from a detailed process model, are introduced as additional (and nonlinear) constraints. For example, let us consider a depropanizer column with three flow measures on the feed, bottom, and distillate streams, respectively. To obtain an effective reconciliation, it is surely preferable to solve the mass reconciliation problem by implementing the entire physical-chemical model of the column rather than basing the results only on the overall mass balance. The computational effort to solve the model-based (nonlinear) problem increases, but, considering the industrial clock requirements (i.e., one data reconciliation per hour), the current computational power and existing algorithms ensure fast solutions of nonlinear reconciliation problems, making their online implementation feasible with very large CPU margins (see section 3).

2.2.2. Adaptive Parameter Estimation. The second problem to be solved is the so-called "Error-in-Variable" method (EVM), originally proposed by Biegler, to whom one could refer for more details.16,17,32 Briefly, it is formulated as follows:

$$\min_{x_i,\, \theta} \Phi = \sum_{i=1}^{SSC} \left( m_i - x_i \right)^T Q^{-1} \left( m_i - x_i \right) \qquad (8)$$

subject to

$$f(x_i, \theta) = 0$$

where Φ is the objective function; x and m are the vectors of reconciled and measured values, respectively; Q is the positive definite diagonal matrix of weights; and f(xi, θ) = 0 are the model constraints to which the minimization problem is subjected. The key point of the EVM is its large size with respect to the data reconciliation, since its degrees of freedom are the adaptive parameters θ and the reconciled vectors of each steady-state condition (SSC) acquired from the historical server of the DCS.
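To make this dimensionality concrete (an illustrative count, not reported in the paper, with the symbols n_x and n_θ introduced here): if the historical server provides SSC steady-state datasets, each with n_x reconciled variables, and n_θ adaptive parameters are estimated, the EVM carries

$$n_{DOF} = SSC \cdot n_x + n_\theta$$

degrees of freedom; for instance, 20 steady states with 50 reconciled variables each and 5 adaptive parameters already give 1005 decision variables, which is why efficient optimizers are required at this level.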

2.2.3. Economic Real-Time Optimization. The third optimization problem is of an economic nature. The reliable and coherent picture of how the process is really operating can be exploited to implement certain production policies to improve, for example, the process yield, the process unit efficiency, or the energy saving. Needless to say, every economic optimization is ineffective whenever the process data are not properly reconciled, since the effects of a small error or inconsistency in the process data are strongly amplified in the decision-making process and in the implementation of any type of economic policy. A general formulation of the economic optimization level is as follows:

$$\max_{x, y \in \mathbb{R};\; b, n \in \mathbb{N}} \Phi = \sum_{i=1}^{N_{plants}} \cdots \sum_{j=1}^{N_{processes}} \sum_{k=1}^{N_{units}} \left[ \mathrm{REVENUES}(x, y, b, n)_{i, \ldots, j, k} - \mathrm{COSTS}(x, y, b, n)_{i, \ldots, j, k} \right] \qquad (9)$$

subject to

$$f(x, y, b, n) = 0, \qquad g(x, y, b, n) \ge 0$$

where REVENUES and COSTS are related to each process unit of each process, of each plant, of each production site. The degrees of freedom of the economic optimization are continuous variables (x, y) (e.g., the throughput), but it is also possible to include discrete (Boolean, b, and integer, n) variables that enter the decision-making process (e.g., the ith process is on (bi = 1) or off (bi = 0)). Often, the economic optimization requires very efficient optimizers to ensure its real-time application and, hence, its effectiveness. Certain moving horizon and rolling horizon methodologies have been defined and applied in the field.12,33
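As a purely hypothetical instance of eq 9 for a single SRU (none of the following symbols or prices appear in the paper), the objective could reduce to

$$\max_{x_k,\, b_k} \Phi = \sum_{k=1}^{N_{units}} b_k \left[ p_S\, S_k(x_k) - p_F\, F_k(x_k) \right]$$

where x_k is the throughput of the kth unit, b_k ∈ {0, 1} switches the unit on or off, S_k is the sulfur recovered, F_k is the fuel-gas consumption, and p_S and p_F are the corresponding prices; the process model enters through the constraint f(x, b) = 0 evaluated by the adapted simulation.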

2.3. Object-Oriented Programming. The use of the adaptive simulation to address the parameter estimation and the data reconciliation is described in the previous work by Signor et al.,11 which is the fundamental basis for the present research activity. The novelty of the CAPE solution proposed in this work is the full integration (see Figure 1) of all the solvers for the adaptive simulation-based optimization problems mentioned above into a single, conscious algorithm. This is possible by exploiting certain features of object-oriented programming, such as masking, encapsulation, polymorphism, and inheritance. Before starting the discussion, it is worth emphasizing that all the constraints of the problem are managed as a black box, since a commercial package is used to simulate the SRU. This means that, although with several important checks, we are forced to treat the optimization problems as a type of unconstrained optimization, where the constraints are solved separately (by means of the nonlinear system solver of the same commercial package). The important checks are especially related to the convergence of the procedure, where several discontinuities coming from the black box of constraints may lead to strong multimodalities.34 Nevertheless, the separation of the constraints from the convergence of the optimization problems makes it feasible to combine different optimizers into a single conscious optimizer, which is able to self-manage its convergence path and become either more robust or more efficient, according to the specific situation, also exploiting parallel computing, if available. In fact, the consciousness of the integrated optimizer allows one to automatically identify the number of available processors on shared-memory machines and to send the calculations there, so as to either accelerate the convergence or improve the robustness accordingly. This is particularly important considering that the data reconciliation automatically becomes more and more robust, based on the number of gross errors detected by the numerical methods designed to identify them (section 2.4), whereas the EVM and the plant optimization become more and more efficient as convergence is approached.


Nevertheless, it is important to emphasize that the parallel computing awareness covers OpenMP directives only; MPI directives are not considered. Finally, it is worth noting that, in turn, all the single entities included in the integrated CAPE solution implement several numerical methods, which are automatically managed to improve the solution of the specific problem. The tests on the computational effort described in section 3 are performed on a single core (parallel computing disabled), for the sake of clarity. Thus, rather than using three different objects coming from three different C++ classes to solve, respectively, the data reconciliation, the parameter estimation, and the plant optimization, it is possible to exploit the features of C++ and develop a single object managing all the optimization problems at best. From a mathematical point of view, this corresponds to merging the three aforementioned optimization problems into a single global optimization with the sum of all the degrees of freedom involved in the original problems. This is possible especially for three reasons: (i) C++ polymorphism allows one to merge the optimizers into a single C++ class; (ii) C++ inheritance allows one to preserve their individual features and make them available when the specific situation requires them; and (iii) the constraints are invoked by the global solver only when needed, and the number of calls is usually proportional to the robustness required of the system. More details on the conscious approach using object-oriented programming are reported elsewhere.12 A minimal sketch of this class structure is given below.
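The following is a minimal, purely illustrative sketch of the idea (class and method names are hypothetical and do not reproduce the BzzMath interfaces): a common base class exposes a virtual solution step, the robust and efficient optimizers specialize it, and a single "conscious" object owns both and switches strategy along the convergence path according to the gross errors detected.

```cpp
#include <vector>
#include <iostream>

// Hypothetical base class: one iteration of an optimizer acting on a vector of
// degrees of freedom; the black-box constraints (process simulation) would be
// invoked inside step() only when needed.
class Optimizer {
public:
    virtual ~Optimizer() = default;
    virtual void step(std::vector<double>& dof) = 0;   // one convergence step
};

// Robust optimizer: used while gross errors may still corrupt the dataset.
class RobustOptimizer : public Optimizer {
public:
    void step(std::vector<double>& dof) override {
        std::cout << "robust step on " << dof.size() << " variables\n";
    }
};

// Efficient optimizer: used for the EVM and the economic optimization,
// and near convergence of the reconciliation.
class EfficientOptimizer : public Optimizer {
public:
    void step(std::vector<double>& dof) override {
        std::cout << "efficient step on " << dof.size() << " variables\n";
    }
};

// The "conscious" optimizer owns both strategies (inheritance preserves their
// individual features, polymorphism lets them be driven through one interface)
// and chooses which to apply according to the number of detected gross errors.
class ConsciousOptimizer {
    RobustOptimizer    robust;
    EfficientOptimizer efficient;
public:
    void step(std::vector<double>& dof, int grossErrorsDetected) {
        Optimizer& chosen = (grossErrorsDetected > 0)
                                ? static_cast<Optimizer&>(robust)
                                : static_cast<Optimizer&>(efficient);
        chosen.step(dof);   // polymorphic dispatch
    }
};

int main() {
    std::vector<double> dof(10, 0.0);
    ConsciousOptimizer solver;
    solver.step(dof, 3);   // gross errors present: robust behavior
    solver.step(dof, 0);   // clean data: efficient behavior
    return 0;
}
```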

2.4. Numerical Methods. To solve the aforementioned optimization problems, it is necessary to use an appropriate combination of tools: robust optimizers and observers to accomplish the data reconciliation; efficient optimizers to tackle the large-scale problem of parameter estimation through the EVM; and efficient optimizers to optimize the plant performances effectively. For the parameter estimation and the optimization of plant performances, we adopted the optimizers of the BzzMath library in their parallel computing release, already explained in detail elsewhere.9,12,34,35 Conversely, at the data reconciliation level, the gross errors and bad-quality data coming from the field must be effectively identified and, if possible, revised online. To do so, a new family of algorithms is adopted: the so-called clever mean and clever variance methods are implemented to check whether the single value of each measure is good or not and, hence, to provide a filter on possible gross errors.

The general problem broached here is the detection of gross errors in a set of n experimental points yi (i = 1, ..., n) of the population Y, with n being very large (typical of industrial cases). We adopt a novel robust criterion that has the same efficiency as the mean, but also the same robustness as median-based observers.6,7 Its evaluation is quite simple; while the dataset is being read, the following quantities are calculated:

$$\mathrm{sum} = \sum_{i=1}^{n} y_i \qquad (10)$$

$$\mathrm{sq} = \sum_{i=1}^{n} y_i^2 \qquad (11)$$

and a predetermined number of maximum and minimum values is collected. Let

$$cm_0 = \frac{\sum_{i=1}^{n} y_i}{n} = \bar{y} \qquad (12)$$

denote the zeroth-order clever mean and

$$cv_0 = \frac{\sum_{i=1}^{n} \left( y_i - cm_0 \right)^2}{n - 1} = s^2 \qquad (13)$$

the zeroth-order clever variance. Assuming $y_1^*$ to be the first possible outlier, and removing that value from the mean and variance, results in

$$cm_1 = \frac{\sum_{i=1}^{n} y_i - y_1^*}{n - 1} \qquad (14)$$

$$cv_1 = \frac{\sum_{i=1}^{n} \left( y_i - cm_1 \right)^2 - \left( y_1^* - cm_1 \right)^2}{n - 2} = \frac{\mathrm{sq} + n \cdot cm_1^2 - 2 \cdot cm_1 \cdot \mathrm{sum} - \left( y_1^* - cm_1 \right)^2}{n - 2} \qquad (15)$$

If

$$\left| cm_1 - y_1^* \right| > \delta \sqrt{cv_1} \qquad (16)$$

where δ is a threshold value (e.g., 2.5), the experimental point $y_1^*$ can be considered an outlier, and the values of cm1 and cv1 are estimations of the first-order clever mean and clever variance, respectively. If an outlier exists, the procedure is iterated: a new possible outlier $y_2^*$ is selected, and cm2 and cv2 are both calculated by again simulating the removal of this new point $y_2^*$; if the elimination of this point satisfies the relation

$$\left| cm_2 - y_2^* \right| > \delta \sqrt{cv_2} \qquad (17)$$

it must also be considered an outlier. The procedure continues as long as $y_k^*$ satisfies the condition

$$\left| cm_k - y_k^* \right| > \delta \sqrt{cv_k} \qquad (18)$$

when its removal is simulated, and stops when $y_{k+1}^*$ does not:

$$\left| cm_{k+1} - y_{k+1}^* \right| < \delta \sqrt{cv_{k+1}} \qquad (19)$$

Please note the following:
• The selection of a possible outlier $y_k^*$ is very simple indeed: it is whichever of the two observations currently representing the minimum and maximum values (after the removal of the previous outliers) minimizes $cv_k$.
• The clever mean (cm) might maintain its value while outliers are progressively removed, for two reasons: when the number of data is particularly large, the arithmetic mean can change only slightly even though an outlier is removed, and removing two outliers that are symmetric with respect to the expected value leaves the clever mean unchanged. It would therefore be an error to check for outliers only by looking at the value of the clever mean.
• The clever variance (cv), on the other hand, shows a monotonically decreasing trend while outliers are gradually removed; moreover, it remains practically unvaried, or even increases, when the observation removed is not a real outlier.
• If the clever variance does not increase when the observation $y_k^*$ is removed and if $y_k^*$ satisfies the relation described by eq 18, then $y_k^*$ is an outlier.
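The following compact C++ routine is an illustrative implementation of the screening procedure of eqs 10–19, written for this discussion (it is not an excerpt of the BzzMath library); it assumes, as described above, that the candidate outliers are taken from the current extremes of the dataset.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// Clever mean / clever variance screening (eqs 10-19).
// Returns the values flagged as outliers; 'delta' is the threshold (e.g., 2.5).
std::vector<double> cleverScreen(std::vector<double> y, double delta)
{
    std::vector<double> outliers;
    if (y.size() < 4) return outliers;             // too few points to screen

    std::sort(y.begin(), y.end());                 // extremes are the only candidates
    std::size_t lo = 0, hi = y.size() - 1;
    double sum = 0.0, sq = 0.0;                    // eqs 10 and 11
    for (double v : y) { sum += v; sq += v * v; }

    while (hi - lo + 1 > 3) {                      // keep at least three points
        const std::size_t cnt = hi - lo + 1;

        // Clever mean/variance obtained by simulating the removal of candidate c (eqs 14-15).
        auto simulate = [&](double c, double& cm, double& cv) {
            cm = (sum - c) / static_cast<double>(cnt - 1);
            cv = ((sq - c * c) - static_cast<double>(cnt - 1) * cm * cm)
                 / static_cast<double>(cnt - 2);
        };

        double cmLo, cvLo, cmHi, cvHi;
        simulate(y[lo], cmLo, cvLo);
        simulate(y[hi], cmHi, cvHi);

        // Candidate = the extreme whose removal minimizes the clever variance.
        const bool pickLow = (cvLo < cvHi);
        const double c  = pickLow ? y[lo] : y[hi];
        const double cm = pickLow ? cmLo : cmHi;
        const double cv = pickLow ? cvLo : cvHi;

        if (std::abs(cm - c) <= delta * std::sqrt(cv))
            break;                                 // eq 19: no further outliers

        outliers.push_back(c);                     // eq 18 satisfied: remove it for real
        sum -= c;  sq -= c * c;
        if (pickLow) ++lo; else --hi;
    }
    return outliers;
}

int main() {
    std::vector<double> data = {10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 25.0, 9.95};
    for (double out : cleverScreen(data, 2.5))
        std::cout << "outlier detected: " << out << "\n";   // flags 25.0
    return 0;
}
```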


2.5. Hardware. The program must be completely interfaced with the DCS to get both the data from the plant historical database and the fresh raw data acquired from the field to be reconciled. This is not a problem since, nowadays, the majority of DCSs include an application server and an OPC (OLE for Process Control) interface. The former is a server dedicated to all the external tools that must continuously interact with the DCS for input and output signals; the latter is a set of directives to transfer input/output signals. Moreover, DCSs are managed by information technology systems able to connect all tags to any type of external package (e.g., PI by OSIsoft). In the specific case under analysis (Figure 2), the measures are sent to the junction boxes placed in the field and then to the raw data server. These data, together with certain data of the historian server, are sent to the application server, where an OPC server is installed to allow communication to and from the client level of the DCS. The CAPE solution that solves the data reconciliation, the adaptive parameter estimation, and the real-time optimization resides at the client level and dialogues with the DCS by means of the OPC client. Once the adaptive parameters θ have been evaluated by means of the EVM, initialized with the sets of steady-state conditions coming from the historian server, the adapted process simulation can be run by keeping the adaptive parameters constant and starting from the current (fresh) raw dataset coming from the raw data server. The adapted process simulation is iteratively called by the data reconciliation routine to properly detect possible gross errors affecting the fresh raw dataset. Next, the economic optimization can be performed, based on the reconciled data. Finally, the reconciled data are sent back to the DCS via OPC and collected into the historian server. The procedure is then iterated. Since the proposed approach allows one to remove possible gross errors and, hence, obtain a coherent picture of the plant, it is possible to exploit efficient algorithms to perform the real-time optimization and, hence, evaluate the optimal plant conditions according to the current specifications and plant performances. In such a case, the possible actions to optimize the operations are sent back to the field and implemented by the control system.

3. ONLINE FEASIBILITY

To verify the online effectiveness of the proposed approach, it is important to compare the optimization problems involved with the levels of the process control hierarchy26,33 (see Figure 3). The machine adopted to measure the computational effort is an Intel Core 2 Quad CPU (2.83 GHz, 3 GB of RAM, MS Windows 2003 operating system, and MS Visual Studio 2008 compiler). Solvers and optimizers of the BzzMath library (version 6.0)36 are used for the data reconciliation, the EVM, and the plant optimization. The process simulation requires no more than 10 s. It is iteratively invoked by the optimizer of the data reconciliation procedure.


Figure 2. From the field to the CAPE solution and feedback on the operations; solid lines are information flows, and dashed lines are decision/action flows.

The most expensive simulations are the initial ones, where many states change significantly when the data reconciliation receives the fresh dataset. The data reconciliation is accomplished within no more than 12 min. The plant optimization can be computationally performed within