Ind. Eng. Chem. Res. 2010, 49, 9563–9564


Reply to “Some Observations on the Paper ‘Optimal Experimental Design for Discriminating Numerous Model Candidates – The AWDC Criterion’ ”

Claas Michalik, Maxim Stuckert, and Wolfgang Marquardt*

AVT - Process Systems Engineering, RWTH Aachen University, Turmstrasse 46, D-52064 Aachen, Germany

*To whom correspondence should be addressed. E-mail: [email protected].

In his letter to the Editor, Guido Buzzi-Ferraris raises several issues about the above-mentioned paper. We agree with parts of his critique and apologize for not having cited his recent work (ref 1). However, we would like to respond in this rebuttal to the major points raised in his letter.

In his first statement, Buzzi-Ferraris claims that we falsely state that the so-called B and T criteria proposed in his previous work are unable to discriminate among more than two competing models, and he emphasizes that his criteria are indeed able to discriminate among multiple models. We disagree with this reading, since we do not claim that either the T or the B criterion is unable to discriminate among multiple models. In fact, we use such criteria for exactly this task in our paper, in order to compare classical, well-established model selection criteria with the novel AWDC criterion. We do, however, state that these and similar criteria are, by construction, not well suited for discriminating among a larger number of model candidates. These classical criteria build on a (weighted) squared difference of the model predictions and differ mainly in the way the weighting factors are calculated. Because the square of the model prediction differences is used, model lumping is likely to occur, as shown in our paper by means of a very simple example. Nevertheless, we agree that these classical criteria can be used to successfully discriminate among multiple models and that the use of any such criterion generally leads to a great improvement over unplanned experiments.

In his second statement, Buzzi-Ferraris claims that we misinterpret the spirit of the criteria he originally suggested by incorporating model probabilities. We agree with Buzzi-Ferraris on this point and apologize for using the criteria in this modified form. We therefore reran the case study of our paper using the original criterion by Buzzi-Ferraris, two other classical criteria, and the new AWDC criterion. The following classical design criteria (CDC) were used:

CDC1 (ref 2):

\[
\max_{e} D(e) = \sum_{i=1}^{m-1} \sum_{j=i+1}^{m} \left[ y_i(\Theta_i, e) - y_j(\Theta_j, e) \right]^2 \qquad (1)
\]
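As a minimal illustration of eq 1 (and of the lumping effect discussed above), the following Python sketch maximizes CDC1 over a one-dimensional design grid for three invented rate laws, two of which are nearly identical; the model forms, parameter values, and design range are hypothetical placeholders, not those of the original case study.

```python
import numpy as np

# Three hypothetical candidate models y_i(Theta_i, e); models "a" and "b"
# are nearly identical, model "c" is clearly different.
models = {
    "a": lambda e: 2.00 * e / (1.0 + 0.5 * e),
    "b": lambda e: 2.02 * e / (1.0 + 0.5 * e),
    "c": lambda e: 0.5 * np.sqrt(e),
}

def cdc1(e):
    """Eq 1: sum of squared pairwise model prediction differences."""
    names = list(models)
    return sum((models[p](e) - models[q](e)) ** 2
               for i, p in enumerate(names) for q in names[i + 1:])

grid = np.linspace(0.1, 10.0, 200)   # candidate experiments e
e_opt = max(grid, key=cdc1)
print(f"CDC1-optimal experiment: e = {e_opt:.2f}")

# The pairwise contributions show the lumping effect: the design separates
# "c" from the cluster {a, b}, while the (a, b) term stays negligible.
for p, q in [("a", "b"), ("a", "c"), ("b", "c")]:
    print(p, q, round((models[p](e_opt) - models[q](e_opt)) ** 2, 4))
```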

CDC2 (ref 3):

\[
\max_{e} D(e) = \sum_{i=1}^{m} \sum_{j=i+1}^{m} p_i p_j \left[ \frac{\left(\sigma_i^2 - \sigma_j^2\right)^2}{\left(\sigma_{\mathrm{meas}}^2 + \sigma_i^2\right)\left(\sigma_{\mathrm{meas}}^2 + \sigma_j^2\right)} + \left(y_i - y_j\right)^2 \left( \frac{1}{\sigma_{\mathrm{meas}}^2 + \sigma_i^2} + \frac{1}{\sigma_{\mathrm{meas}}^2 + \sigma_j^2} \right) \right] \qquad (2)
\]
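A minimal Python sketch of eq 2 follows; the predictions, variances, and model probabilities are invented for illustration, and the variances are passed in directly rather than computed from a candidate design.

```python
def cdc2(y, var, p, var_meas):
    """Probability-weighted criterion of eq 2: y[i] and var[i] are the
    prediction and its variance for model i at the candidate experiment,
    p[i] the current model probability, var_meas the measurement variance."""
    m = len(y)
    total = 0.0
    for i in range(m):
        for j in range(i + 1, m):
            si, sj = var_meas + var[i], var_meas + var[j]
            total += p[i] * p[j] * (
                (var[i] - var[j]) ** 2 / (si * sj)
                + (y[i] - y[j]) ** 2 * (1.0 / si + 1.0 / sj))
    return total

# Illustrative values for three candidate models at one experiment.
print(cdc2(y=[1.2, 1.5, 0.9], var=[0.02, 0.05, 0.04],
           p=[1 / 3, 1 / 3, 1 / 3], var_meas=0.01))
```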

and CDC3 (ref 4):

\[
\max_{e} D_{\mathrm{classical}}(e) = \sum_{i=1}^{m-1} \sum_{j=i+1}^{m} \left[ \left(y_i - y_j\right)^{\mathrm{T}} V_{i,j}^{-1} \left(y_i - y_j\right) + \operatorname{trace}\left(2\, V_{\mathrm{meas}}\, V_{i,j}^{-1}\right) \right] \qquad (3)
\]
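Eq 3 involves vector-valued responses; the sketch below evaluates it for hypothetical two-response predictions, with the pair covariance matrices V_ij simply assumed to be scaled identities for illustration.

```python
import numpy as np

def cdc3(y, V_pair, V_meas):
    """Multiresponse criterion of eq 3: y[i] is the response vector of
    model i, V_pair[(i, j)] the covariance matrix V_ij of the prediction
    difference, V_meas the measurement covariance matrix."""
    m = len(y)
    total = 0.0
    for i in range(m):
        for j in range(i + 1, m):
            d = y[i] - y[j]
            V_inv = np.linalg.inv(V_pair[(i, j)])
            total += d @ V_inv @ d + np.trace(2.0 * V_meas @ V_inv)
    return total

# Illustrative two-response example with three candidate models; the
# V_ij are assumed here as scaled identities purely for demonstration.
y = [np.array([1.0, 0.4]), np.array([1.3, 0.5]), np.array([0.8, 0.6])]
V_meas = 0.01 * np.eye(2)
V_pair = {(i, j): 0.05 * np.eye(2) for i in range(3) for j in range(i + 1, 3)}
print(cdc3(y, V_pair, V_meas))
```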

We ran the case study as described in the paper 100 times, using randomly chosen initial guesses for the optimal experimental design. The mean number of optimally planned experiments necessary with the different criteria is given in Table 1. It can be seen that the three classical criteria perform almost equally well, although CDC2 uses model probabilities whereas CDC1 and CDC3 do not. We also reran the case study with the AWDC criterion using the same settings reported in our paper and obtained exactly the same results as before. This is reasonable, since the model probabilities were equal for all remaining models during all iterations of the original case study, so that removing them has no influence on the designed experiments but only on the final value of the objective function. It is important to mention that the classical criteria may perform better, and our criterion worse, if classical statistical tests such as Student's t test are applied for model selection.

Table 1. Mean Number of Optimally Planned Experiments Necessary in the Case Study Using the Classical Design Criteria and the Novel AWDC for 100 Test Runs Using Randomly Chosen Initial Guesses

MBOED-MD criterion      CDC1   CDC2   CDC3   AWDC
mean number of expts    2.97   2.88   2.97   1.1
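For readers who want to reproduce the protocol behind Table 1, the following Python skeleton paraphrases the sequential design loop; the design optimizer, simulator, parameter re-estimation, and model rejection steps are passed in as placeholder callables, since only the MATLAB files linked from the original paper define them concretely. This is a structural sketch under those assumptions, not the actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_sequential_design(models, design_experiment, simulate, refit, reject,
                          max_experiments=20):
    """Count the optimally planned experiments needed until at most one
    candidate model survives (hypothetical stopping rule)."""
    data = []
    candidates = list(models)
    for n_exp in range(1, max_experiments + 1):
        e0 = rng.uniform(0.1, 10.0)            # random initial guess, as in Table 1
        e = design_experiment(candidates, e0)  # maximize eq 1, 2, 3, or the AWDC
        data.append((e, simulate(e)))          # perform (here: simulate) the experiment
        candidates = [refit(m, data) for m in candidates]            # re-estimate
        candidates = [m for m in candidates if not reject(m, data)]  # discriminate
        if len(candidates) <= 1:
            return n_exp
    return max_experiments

# Mean over 100 runs, as reported in Table 1, once concrete callables exist:
# print(np.mean([run_sequential_design(models, design, sim, refit, reject)
#                for _ in range(100)]))
```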


In the following point, Buzzi-Ferraris discusses potential problems associated with the use of model probabilities, in a similar spirit as in one of his recent papers (ref 1). We do not want to argue whether the use of model probabilities is reasonable or not. There are many publications favoring model probabilities (see for instance ref 5) and many others favoring classical statistical tests (see for instance ref 1), and both parties have good arguments. We therefore leave it to the reader to decide between the two philosophies.

Buzzi-Ferraris further wonders what happens if the AWDC criterion is used and no good model is in the candidate set, since the criterion requires a good candidate to be present. The work process that we propose in our paper also covers this unfavorable situation: one step in the procedure is the update of the candidate set, which allows both removing unsuited model candidates and adding novel candidates that seem likely in the light of the measurement data.

In his last major comment, Buzzi-Ferraris criticizes the case study solved to demonstrate the benefits of the novel AWDC criterion. His critique focuses on the proposed model candidates, which are taken from our previous work (ref 6). We agree that some of the proposed rate laws could be dismissed without any experiments. However, we think that candidate sets including unreasonable model candidates are not uncommon, especially if the candidates are created automatically by a software tool (as also mentioned in our paper). In addition, in a more complex setting, unreasonable model candidates may not be identified as easily, so any criterion for model-based optimal experimental design for model discrimination should also be able to cope with them.

We also would like to mention that further investigations on the AWDC criterion are under way. These investigations include new and more complex case studies and will be published in the future.

Finally, we would like to emphasize that we highly respect the work of Buzzi-Ferraris on optimal experimental design and do not claim that our criterion is preferable in every setting. However, we strongly believe that, in the case of a large number of model candidates showing significant differences (for instance, when model candidates are proposed automatically by a software tool), the novel AWDC criterion will on average perform better than criteria based on maximizing the (weighted) model prediction differences. Only the application of the criterion to many more realistic case studies can build up the evidence to justify this or any contrary claim. Therefore, in order to ease the application of the AWDC criterion, we provide a link to all MATLAB files necessary to run the case study presented in the original paper.

Literature Cited

(1) Buzzi-Ferraris, G.; Manenti, F. Kinetic models analysis. Chem. Eng. Sci. 2009, 64 (5), 1061–1074.
(2) Froment, G. Model discrimination and parameter estimation in heterogeneous catalysis. AIChE J. 1975, 1041–1056.
(3) Shannon, C. E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
(4) Buzzi-Ferraris, G.; Forzatti, P. An improved version of a sequential design criterion for discriminating among rival multiresponse models. Chem. Eng. Sci. 1990, 45 (2), 477–481.
(5) Burnham, K. P.; Anderson, D. R. Multimodel inference: Understanding AIC and BIC in model selection. Sociol. Methods Res. 2004, 33, 261–304.
(6) Brendel, M.; Bonvin, D.; Marquardt, W. Incremental identification of complex reaction kinetics in homogeneous systems. Chem. Eng. Sci. 2006, 61, 5404–5420.
