
Oscillation Detection in Process Industries by a Machine Learning-Based Approach
Jônathan W. V. Dambros,*,†,‡ Jorge O. Trierweiler,† Marcelo Farenzena,† and Marius Kloft‡,§


†Department of Chemical Engineering, Federal University of Rio Grande do Sul, R. Eng. Luiz Englert, s/n, Campus Central, Porto Alegre, Rio Grande do Sul, Brazil
‡Department of Computer Science, University of Kaiserslautern, 67663 Kaiserslautern, Germany
§Department of Computer Science, University of Southern California, 1002 Childs Way, Los Angeles, California, United States

ABSTRACT: Oscillatory control loops are a frequent problem in process industries. Their incidence strongly degrades plant profitability, so oscillation detection and removal are fundamental. For detection, many automatic techniques have been proposed. These are usually based on rules compiled into an algorithm. For industrial application, in which the time series have very distinct properties and are subject to interferences such as noise and disturbances, the algorithm must include rules covering all possible time series structures. Since the development of such an algorithm is nearly impractical, it is reasonable to say that current rule-based techniques are subject to incorrect detection. This work presents a machine learning-based approach for automatic oscillation detection in process industries. Rather than being rule-based, the technique learns the features of oscillatory and nonoscillatory loops from examples. A model based on a deep feedforward network is trained with artificial data for oscillation detection. Additionally, two other models are trained for the quantification of the number of periods and of the oscillation amplitude. The evaluation of the technique on industrial data with different features reveals its robustness.

1. INTRODUCTION
Oscillation in process industries is a common problem that affects between 30% and 41% of the control loops.1−3 The removal of such oscillatory loops is of great interest since a decrease in variability means that the process variables are held closer to their desired conditions, resulting in financial benefits. The first step in oscillation removal is its detection. This task can be driven by individual visual inspection of each time series. Unfortunately, this approach is unfeasible when a full-plant diagnosis is required. Typically, process industries have between 500 and 5000 control loops.4 Visual inspection would consume the full personnel resources and limit the investigation to part of these loops,5 resulting in unnoticed oscillations.6 To overcome this limitation, automatic oscillation detection techniques are required.
Over the last 25 years, researchers have been working on automatic methods. The technique proposed by Hägglund7 computes the integral absolute error (IAE) for each segment limited by zero crossings. If the IAE value is larger than a certain threshold, a counter is increased; if the counter exceeds a given value, the presence of oscillation is confirmed.

Miao and Seborg8 developed a method that evaluates the decay ratio in the autocorrelation function. If the decay is higher than a certain threshold, oscillation is detected. Thornhill et al.9 proposed a method in which the regularity of the period of oscillation in the autocorrelation function is evaluated. If the period is regular, oscillation is detected. These three methods are a brief sample of more than 30 different approaches (some of which are presented in the following section). The techniques are usually data-driven methods based on rules founded on criteria similar to those used in visual inspection and/or on mathematical concepts. These rules are usually the computation of a parameter (IAE, decay ratio, regularity) that is later evaluated by an if/else statement.


These approaches may work adequately for well-behaved oscillations. Unfortunately, industrial data are usually corrupted by noise and disturbances, the frequency and amplitude of oscillation may be irregular, the oscillation may be intermittent, and multiple oscillations may be present in the same time series. In addition, the incoming data cover a large frequency and amplitude range. For a rule-based method to work correctly, all of these influences must be incorporated into the algorithm, making it complex and extensive.
Machine learning (ML) techniques and applications have been intensely explored over the last two decades. ML applied to computer vision, speech recognition, and robot control, for example, has been broadly explored by several research groups worldwide; however, many areas remain underexplored10 and therefore are excellent research and innovation opportunities. In the field of oscillation detection and diagnosis, few works based on ML have been published. These include, for example, the works by Zabiri et al.,11 Venceslau et al.,12 and Farenzena and Trierweiler13 for stiction quantification, stiction being a phenomenon that causes oscillation. In the broader field of fault detection, ML has been extensively exploited in the last 20 years. The number of publications has been increasing rapidly with the larger amount of data and greater computation power available. An overview and discussion of these methods are found in the recent works by Ge et al. and Wuest et al.14,15
ML techniques, by definition, do not require rules. Instead, an ML model adapts itself according to given examples to improve its performance on a specific task. Applied to oscillation detection, this means that specific rules for each of the influences are not required, making the approach simpler and more robust.
This work presents a new technique for oscillation detection based on ML. Here, data with distinct features are generated artificially. A deep feedforward network (DFN) is trained using the magnitude in the frequency domain of the data as input and the labels nonoscillatory, oscillatory with regular oscillation, and oscillatory with nonregular oscillation as output. Finally, the trained network is applied to industrial data for oscillation detection. Furthermore, because the number of oscillatory loops in an industrial plant is usually high, it is also important to rank them based on the strength of the oscillation to isolate loops that require immediate maintenance from those that do not influence plant performance. In this work, the quantification of the number of periods and of the oscillation amplitude is performed by a second and a third network. Finally, this work also aims to introduce the application of machine learning to control loop performance monitoring (CPM), showing its potential and motivating new works in this area.
Following the introduction, this work presents a short review of oscillation detection and machine learning in section 2. The proposed oscillation detection technique is introduced in section 3 and evaluated on artificial and real industrial data in section 4. Finally, the conclusions are presented in section 5.

2. REVIEW OF OSCILLATION DETECTION AND MACHINE LEARNING
This section presents a brief review of oscillation detection methods followed by a short elucidation on machine learning and, more specifically, deep learning.
2.1. Oscillation Detection. In addition to the oscillation detection methods described in the introduction, many others have been published. According to Dambros et al.,16 the techniques are classified into two groups: single time series oscillation detection (STSOD) techniques, in which the focus is the detection in individual loops/variables, and plant-wide oscillation detection (PWOD) techniques, in which the methods apply the detection directly to a set of loops/variables.

Among the PWOD techniques, the method proposed by Thornhill et al.17,18 stands out. The technique decomposes the spectrum of the set into basis functions, each corresponding to a dominant oscillation. Then, each spectrum is compared to each basis function for the grouping of time series with similar features.
STSOD techniques are subclassified into time domain, ACF-based, frequency domain, continuous wavelet transform (CWT), and decomposition methods.
Time domain methods are usually simple. This group includes the method proposed by Hägglund,7 reviewed in section 1, and the technique proposed by Forsman and Stattin,19 which evaluates both the IAE magnitude and the interval between zero crossings. If both values remain close to constant over time, oscillation is detected. Zakharov and co-workers20,21 designed an algorithm that evaluates the correlation between two periods of oscillation. If the correlation is high, oscillation is detected.
Since noise is a frequent problem for time domain methods, it is convenient to transform the time series into the ACF domain, where the resulting signal has the same frequency of oscillation with attenuated noise. This strategy forms the second group, named ACF-based, which includes the works by Miao and Seborg8 and Thornhill et al.9
In most cases, an oscillatory time series can easily be detected visually by a peak in the frequency domain. The technique proposed by Zhang et al.22 tries to capture this peak by setting a threshold. If the magnitude of any component in the frequency domain crosses the threshold, oscillation is detected.
Matsuo and co-workers23−25 presented a series of works in which continuous wavelet transforms (CWT) are applied. Oscillation is detected by the identification of high values in the wavelet plot by visual inspection. This approach is similar to visual detection in the frequency domain, with the advantage of time information on the oscillation occurrence.
Empirical mode decomposition (EMD) is a technique that decomposes the time series into components of different frequencies. Srinivasan et al.26 applied a modified EMD technique for later evaluation of the oscillation presence in each component. In a different approach, Li et al.27 used the discrete cosine transform (DCT) to isolate different components in the frequency domain. Each component is transformed back to the time domain, and its regularity is evaluated for oscillation detection. These methods belong to the decomposition subgroup, in which the oscillatory components are isolated for further analysis by a simpler approach (mostly by the regularity index proposed by Thornhill et al.9). Many other recent methods are included in this subgroup.27−32
As seen, these methods are also based on the evaluation of a property followed by an if/else statement. Since they are based on rules, they occasionally return unreliable results due to the presence of features not considered during their implementation but present in industrial data.
According to Thornhill et al.9 and Karra et al.,33 a good oscillation detection method has the following features:

• evaluates the period and magnitude of the oscillations;
• requires only time series data;
• is robust to noise and disturbances;
• is able to handle multiple and intermittent oscillations;
• is completely automatic.


Figure 1. A generic deep feedforward network.

Moreover, according to Bauer et al.,34 simple methods are preferred by control engineers.
The literature on oscillation detection is extensive, with more than 30 published works. In this section, a few works on each STSOD subgroup are reviewed. For further reading, the recent and complete review by Dambros et al.16 is recommended. For a condensed review, the works by Thornhill and Horch35 and Duan et al.36 are suggested.
2.2. Machine Learning. From the statistical perspective, machine learning is an approach in which a given model is adapted by examples to perform a particular task. The types of task a machine learning technique performs are divided, broadly, into two groups: supervised and unsupervised learning. Supervised learning is the group in which the model adapts itself to a known output, while for unsupervised learning, no output is given, leaving the model to find the best structure by itself. Since, for oscillation detection, the task is the classification of a time series into known classes (oscillatory or nonoscillatory), supervised learning is the natural approach.
Formally, supervised learning involves observing several examples of an input vector x and an associated output value or vector y and then learning to predict y from x.37 After training, the model is used to predict the unknown y corresponding to a new x. Among the supervised machine learning techniques, deep learning, support vector machines, and decision trees are broadly explored.
Deep learning models are mathematical structures loosely mimicking the function of a biological brain. Figure 1 shows a generic representation of a deep feedforward network (DFN), a simple and frequently applied deep learning architecture. The goal of a feedforward network is the approximation of a highly nonlinear function f* such that y = f*(x) maps an input x to an output y. The network defines a mapping y = f(x; θ) and learns the values of the parameters θ that result in the best approximation. Usually, a feedforward network is the composition of many different functions. For example, f(x) = f(3)(f(2)(f(1)(x))) consists of three functions, where f(1) is the computation of the first layer of the network, f(2) is the second layer, and so on. The last layer of the network is the output layer, and its output values are required to be approximately equal to y for the training points x. The behavior of the other layers, named hidden layers, is not directly specified by the training data, and it is the task of the training algorithm to adapt these layers to approximate f*.37

The architecture in Figure 1 is called feedforward because there are no feedback connections in which one or more outputs of the network are fed back to itself. Here, information flows from the input x, through the defined mapping f, to the output y. The represented feedforward network has M inputs, predicts M_out outputs, and has D hidden layers, each one with M_d neurons, where d denotes a specific layer. The output z_j^{(d)} of neuron j in layer d is evaluated according to the following equation:

z_j^{(d)} = h^{(d)}\left(a_j^{(d)}\right)    (1)

where h^{(d)} is the activation function and a_j^{(d)} is equal to

a_j^{(d)} = \sum_{i=1}^{M_{d-1}} \theta_{ji}^{(d)} z_i^{(d-1)} + \theta_{j0}^{(d)}    (2)

where \theta_{ji}^{(d)} are the weights. For d = 1, M_{d-1} = M and z_i^{(d-1)} = x_i.38
The weights θ are the adaptable parameters in the feedforward network. These parameters are optimized to minimize a specified loss function E(θ). A common loss function is the sum-of-squares error function,

E(\theta) = \frac{1}{2} \sum_{n=1}^{N} \left\| f(x_n, \theta) - y_n \right\|^2    (3)
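To make eqs 1−3 concrete, the following minimal NumPy sketch (illustrative only, not part of the original work; layer sizes, activation functions, and random weights are arbitrary assumptions) evaluates a small feedforward network layer by layer and computes the sum-of-squares error.

```python
import numpy as np

def forward(x, weights, biases, activations):
    """Forward pass: eq 2 (weighted sum plus bias) followed by eq 1 (activation)."""
    z = x
    for W, b, h in zip(weights, biases, activations):
        a = W @ z + b      # a_j = sum_i theta_ji * z_i + theta_j0   (eq 2)
        z = h(a)           # z_j = h(a_j)                            (eq 1)
    return z

def sum_of_squares_error(f, xs, ys):
    """E(theta) = 1/2 * sum_n ||f(x_n) - y_n||^2   (eq 3)."""
    return 0.5 * sum(np.sum((f(x) - y) ** 2) for x, y in zip(xs, ys))

# Tiny example: M = 4 inputs, one hidden layer with 8 neurons, M_out = 3 outputs.
rng = np.random.default_rng(0)
relu = lambda a: np.maximum(a, 0.0)
identity = lambda a: a
weights = [rng.normal(size=(8, 4)), rng.normal(size=(3, 8))]
biases = [np.zeros(8), np.zeros(3)]
f = lambda x: forward(x, weights, biases, [relu, identity])

x, y = rng.normal(size=4), np.zeros(3)
print(f(x), sum_of_squares_error(f, [x], [y]))
```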

During the optimization, each round in which all the training examples are evaluated is called an epoch. Usually, the weights are updated once at the end of each epoch. However, especially for large data sets, in which the evaluation of all the examples consumes considerable computational effort, the weights are updated more than once in each epoch. The update then occurs after a certain number of examples (a batch) has been evaluated. The optimization is run for a given number of epochs or until the loss function stops decreasing.
The initial weight values are also of great importance since they influence the optimization convergence and speed and can significantly reduce the vanishing/exploding gradients problem.39 Many approaches for their initialization are available.
As seen, the feedforward network requires a large number of parameters to be specified: activation functions, number of layers and neurons, optimization algorithm, batch size, and weight initialization approach, for example. These parameters are defined before the training and are called hyperparameters. The selection of the ideal hyperparameters is a fundamental and time-consuming task.


One common approach for the selection, called grid search, preselects potential hyperparameter values and then tests all possible combinations. This approach is sometimes unfeasible due to the high number of combinations. In this case, the random search approach is applied, in which random sets of hyperparameters are tested.
The means by which the DFN correlates the input to the output data is usually unknown or hardly interpretable. This is a weak point of the DFN and of most other ML approaches.
This section is not long enough to present all the aspects of deep learning and DFNs in detail. For the interested reader, we recommend the book by Goodfellow et al.37 The term deep learning adopted in this work can interchangeably be denominated artificial neural networks. The former was chosen since it has been preferred in recent works.

3. PROPOSED OSCILLATION DETECTION AND QUANTIFICATION TECHNIQUE
The proposed oscillation detection technique follows the desired features presented in section 2.1. The technique is applied individually to each time series, and thus it is classified as a single time series oscillation detection (STSOD) technique. To maintain simplicity and guarantee the reproducibility of the technique, an already implemented machine learning library is used and recommended. In this work, the Keras40 library is used with the TensorFlow41 backend.
This section is divided into four subsections. First, the procedure for data generation is presented, which is followed by a simple data processing approach. Subsections 3.3 and 3.4 describe the oscillation detection and quantification approaches, respectively. An overview of the technique is presented in Figure 2.

Figure 2. Overview of the proposed technique.

3.1. Data Generation. In an ideal scenario, industrial labeled data (and not artificial data) would be used to train the model since the final application uses industrial data. This approach is unlikely to be feasible for the following reasons:
• Industrial data are hardly available due to confidentiality and strategic issues;
• If available, the amount may not be sufficient to train a model with good generalization;
• If the amount is suitable, labeling is required. In addition to being a time-consuming task, labeling by visual inspection is subject to user interpretation about what is and what is not an oscillatory loop;
• Assuming that the data are correctly labeled, there is no guarantee that the data cover a sufficiently broad range of processes and parameters.
To avoid these problems, artificial data are generated for training and validation of the network. Later, the network performance is tested not only on the artificial data but also on industrial data. Two main rules are strictly followed for data generation: (1) artificial data must be as similar as possible to industrial data; (2) artificial data must contain examples from processes with different dynamics, configurations, and characteristics. Following these two rules, the generated data have the following features:
• oscillatory and nonoscillatory examples of different lengths,
• noise and disturbance with different amplitudes,
• oscillatory time series with sinusoidal, triangular, or square waveform and with different numbers of periods of oscillation,
• waveforms smoothed with different intensities to approximate an oscillatory time series filtered by the process, and
• part of the oscillatory time series with variable frequency of different intensities.
Finally, the distributions of the variables used to generate the time series are presented in Table 1.

Table 1. Distributions of the Variables for the Artificial Data Generation

variable                    distribution         parameters
Parameters for All Time Series
signal length (points)      linear-exponential   200, 2000, 2000
noise variance              exponential          0.1
disturbance amplitude       exponential          1
oscillation probability     Bernoulli            0.6
Parameters for Oscillatory Time Series Only
number of periods           linear-exponential   2, 20, 20
waveform                    uniform              —
smoothing factor            exponential          0.2
variable freq prob.         Bernoulli            0.5
Parameters for Oscillatory Time Series with Variable Frequency Only
frequency change factor     linear-exponential   0.5, 0.25, 0.5

In Table 1, noise variance is the variance of a random vector with a Gaussian distribution and zero mean. Disturbance amplitude is the difference between the maximum and minimum value of a random vector with a Gaussian distribution and zero mean smoothed by the following transfer function:

D(s) = \frac{1}{100 s^{2} + 10 s + 1}    (4)

The oscillatory time series are smoothed by the following equation:



S(s) = \frac{1}{SF \cdot s + 1}    (5)

where SF is the smoothing factor. Finally, half of the oscillatory time series have a variable frequency, in which the frequency change factor is the difference between the highest and lowest frequency divided by the mean frequency. If a nonregular oscillation has a mean frequency equal to 0.1 and a frequency change factor equal to 0.5, the lowest and highest frequencies would be about 0.075 and 0.125, for example.
Also in Table 1, four different distributions are used. The uniform distribution represents the case in which all the elements have the same probability of being selected. The Bernoulli distribution is the discrete distribution in which the given parameter is the probability of the variable taking the value 1 (true). The exponential distribution is represented by the following equation:

f\left(x; \frac{1}{\beta}\right) = \frac{1}{\beta} e^{-x/\beta}    (6)

This distribution requires only one parameter (β), which is the value of x at which the probability decreases by a factor of e. This distribution prioritizes small values, which is a required feature to improve the resolution in this region. To demonstrate the importance of this better resolution, take two time series with noise variances equal to 0.01 and 0.1. An increase of 0.01 in both variances results in increases of 100% and 10%, respectively. In other words, small values are more sensitive to changes.
For the three other variables (signal length, number of periods, and frequency change factor), additional properties of the distribution are required. First, the distribution cannot start at zero; second, it is preferred to have a better resolution in the middle region because extreme values are infrequent in a real application. Since no distribution with these features was found, a new distribution is presented in this work. The distribution, named the linear-exponential distribution, is the combination of a linear part followed by an exponential part and is described in Appendix A.
Finally, 10^6 time series are generated to train, 10^5 to validate, and 10^5 to test the DFN model. Here, the validation data set is used to select the optimum set of hyperparameters, while the test data set is used to evaluate the final model. All the oscillatory time series have an oscillation amplitude equal to 1, which, in this work, is the distance between the maximum and minimum value of the oscillatory waveform.
Nine distinct examples with the following features are presented in Figure 3: (1) high noise level, (2) high disturbance level, (3) sinusoidal waveform, (4) triangular waveform, (5) square waveform, (6) high-frequency oscillation, (7) low-frequency oscillation, (8) smoothed square waveform, and (9) square waveform with variable frequency. Figure 3 shows the widely distinct features of the time series, which is an important characteristic to make the technique robust to distinct processes.

Figure 3. Distinct examples of generated time series.

The distributions of the random variables selected according to Table 1 are presented in Figure 4.

Figure 4. Distribution of the variables for the generated time series.

The large data set provides two benefits. First, it prevents model overfitting.37 Second, it makes the detection performance almost independent of the ML technique,42 which makes tests with different techniques, such as support vector machines and decision trees, unnecessary.
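The complete generation code is available from the authors on request. The sketch below is only a simplified illustration of how a single oscillatory example could be assembled from the variables in Table 1: it uses a sinusoidal waveform only, applies the disturbance filter of eq 4 and the smoothing filter of eq 5 with scipy, and fixes the signal length and number of periods instead of drawing them from the linear-exponential distribution. All of these simplifications are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)

length = 1000                            # signal length (points), fixed for illustration
n_per = 8                                # number of periods of oscillation
noise_var = rng.exponential(0.1)         # noise variance ~ exponential(0.1), Table 1
dist_amp = rng.exponential(1.0)          # disturbance amplitude ~ exponential(1), Table 1
smoothing_factor = rng.exponential(0.2)  # smoothing factor ~ exponential(0.2), Table 1

t = np.arange(length, dtype=float)

# Sinusoidal oscillation with unit peak-to-peak amplitude.
wave = 0.5 * np.sin(2 * np.pi * n_per * t / length)

# Smooth the waveform with S(s) = 1 / (SF*s + 1)   (eq 5).
_, wave, _ = signal.lsim(([1.0], [smoothing_factor, 1.0]), U=wave, T=t)

# Disturbance: zero-mean Gaussian vector filtered by D(s) = 1/(100 s^2 + 10 s + 1) (eq 4),
# rescaled so that its peak-to-peak range equals the drawn disturbance amplitude.
raw = rng.normal(size=length)
_, dist, _ = signal.lsim(([1.0], [100.0, 10.0, 1.0]), U=raw, T=t)
dist = dist_amp * (dist - dist.mean()) / (dist.max() - dist.min() + 1e-12)

# Measurement noise with the drawn variance.
noise = rng.normal(scale=np.sqrt(noise_var), size=length)

series = wave + dist + noise
```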


The ranges of the parameters in Table 1 were made large in order to include examples of the most distinct process conditions. Thus, it is virtually guaranteed that any time series extracted from industry has a similar pair in the artificial data set. In the rare cases in which a time series has different features or features outside the ranges in Table 1, the precision of the detection and quantification is not guaranteed until the model is retrained with data that encompass the new features.
The authors are aware of the extensive and detailed procedure required for the generation of the artificial data. Thus, the authors make themselves available to provide the data, by request, to those interested.
3.2. Data Processing. Before training the network with the generated artificial data, two simple data processing procedures are still required. The first is the normalization of the magnitude of all the time series to a value equal to 1. This guarantees that the detection is not influenced by the source of the time series, whose magnitude may vary largely depending on whether it is, for example, a pressure or a concentration measurement. The second step is the transformation of the time series from the time domain to the frequency domain and the isolation of the magnitude information. The magnitude in the frequency domain is preferred for the following reasons:
• The time series have different lengths. Transforming these data by the discrete Fourier transform (DFT) with a fixed number of components guarantees that the data in the frequency domain have a constant length.
• The magnitude of the transformed data is independent of the oscillation phase; furthermore, the training step is faster and less sensitive to noise, as noted by Nanopoulos et al.43
The magnitude in the frequency domain can be computed as the absolute value of the complex vector obtained by the fast Fourier transform. In this work, a DFT with a fixed length equal to 2^13 is applied to the time series. The first half of the DFT magnitude is saved for later use; the second half is discarded since it is a mirrored representation of the first.
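A minimal sketch of this preprocessing step is given below (illustrative; the exact normalization used by the authors is not fully specified, so scaling by the peak-to-peak range is an assumption, and numpy's FFT zero-padding/truncation stands in for the fixed-length DFT).

```python
import numpy as np

N_DFT = 2 ** 13  # fixed DFT length used in this work

def preprocess(series):
    """Normalize a time series and return the first half of its DFT magnitude."""
    x = np.asarray(series, dtype=float)
    span = x.max() - x.min()
    if span > 0:
        x = (x - x.mean()) / span      # assumed normalization: peak-to-peak range of 1
    # np.fft.fft truncates or zero-pads the series to n points.
    mag = np.abs(np.fft.fft(x, n=N_DFT))
    return mag[: N_DFT // 2]           # the second half is a mirrored copy

features = preprocess(np.sin(2 * np.pi * 10 * np.arange(1500) / 1500))
print(features.shape)  # (4096,)
```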

3.3. Oscillation Detection Model. The oscillation detection model is, broadly, the deep feedforward network (presented previously in Figure 1) trained with the training data set (generated in section 3.1), where the input x is the magnitude of the DFT and the output y is labeled as nonoscillatory, regular oscillation, or nonregular oscillation.
As discussed in section 2.2, the network requires the tuning of a large number of hyperparameters. For this application, the values in Table 2 are preselected. The preselected values for the layers hyperparameter are vectors in which each value is the number of neurons in a layer. The values' names are the same as those used in the Keras library. For the sake of brevity, the mathematical foundation behind each of the values is not presented in this work. For further information, the Keras documentation40 or any good book on deep learning may be consulted (Goodfellow's book37 is always a good choice).

Table 2. Preselected Values for the Hyperparameters

hyperparameter            values
batch size                3000, 10000, 30000
optimization algorithm    SGD, RMSprop, Adagrad, Adadelta, Adam, Adamax, Nadam
initialization            uniform, lecun_uniform, normal, glorot_normal, glorot_uniform, he_normal, he_uniform
activation function       softmax, softplus, softsign, relu, tanh, sigmoid, hard_sigmoid, linear, elu
layers                    [400, 60], [200, 40], [100, 20], [50, 10], [50, 5], [400, 100, 20], [200, 50, 10], [100, 25, 10], [100, 25, 5]

The hyperparameter tuning could be performed by a grid search. This approach would require the network to be trained for all 11 907 combinations of hyperparameter values, which is unfeasible. Instead, 100 sets are randomly generated and evaluated. The set that returns the best performance on the validation data set is selected for the final network. Beyond the hyperparameters tuned by random search, the softmax function is the activation function of the output layer, the sum-of-squares error (as presented in eq 3) is the loss function, and the accuracy is used to evaluate the performance of the network.
The proposed technique is intended to be applied in offline mode. This follows the trend of most of the works on oscillation detection. Online detection is usually not required since detection speed is usually not crucial in terms of minutes or hours.44 If fast detection is required, the technique must be applied on a moving window, in which a data window with a fixed length is updated whenever new data are collected.
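The sketch below illustrates how the detection network and the random hyperparameter search could be set up with Keras. It is not the authors' code: the candidate values follow Table 2, x_train/y_train and x_val/y_val are placeholders for the DFT magnitudes and the one-hot class labels, the number of epochs is arbitrary, and only a reduced number of random trials is shown.

```python
import random
from tensorflow import keras

candidates = {
    "batch_size": [3000, 10000, 30000],
    "optimizer": ["SGD", "RMSprop", "Adagrad", "Adadelta", "Adam", "Adamax", "Nadam"],
    "initializer": ["uniform", "lecun_uniform", "normal", "glorot_normal",
                    "glorot_uniform", "he_normal", "he_uniform"],
    "activation": ["softmax", "softplus", "softsign", "relu", "tanh",
                   "sigmoid", "hard_sigmoid", "linear", "elu"],
    "layers": [[400, 60], [200, 40], [100, 20], [50, 10], [50, 5],
               [400, 100, 20], [200, 50, 10], [100, 25, 10], [100, 25, 5]],
}

def build_model(hp, n_inputs=2 ** 12, n_classes=3):
    """DFN with hidden layers from Table 2 and a softmax output over the three classes."""
    model = keras.Sequential([keras.Input(shape=(n_inputs,))])
    for units in hp["layers"]:
        model.add(keras.layers.Dense(units, activation=hp["activation"],
                                     kernel_initializer=hp["initializer"]))
    model.add(keras.layers.Dense(n_classes, activation="softmax"))
    # Mean squared error stands in for the sum-of-squares loss of eq 3.
    model.compile(optimizer=hp["optimizer"], loss="mean_squared_error",
                  metrics=["accuracy"])
    return model

def random_search(x_train, y_train, x_val, y_val, n_trials=10):
    """Evaluate randomly drawn hyperparameter sets (the paper uses 100 trials)."""
    best_acc, best_hp = -1.0, None
    for _ in range(n_trials):
        hp = {key: random.choice(values) for key, values in candidates.items()}
        model = build_model(hp)
        model.fit(x_train, y_train, batch_size=hp["batch_size"], epochs=20, verbose=0)
        _, acc = model.evaluate(x_val, y_val, verbose=0)
        if acc > best_acc:
            best_acc, best_hp = acc, hp
    return best_hp, best_acc
```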

3.4. Oscillation Quantification Models. Two new DFNs are trained for the quantification of the number of periods and of the oscillation amplitude. The training procedure is similar to that presented in section 3.3 for oscillation detection, with few exceptions. Since this is not a classification but a regression problem, the output layer has a linear activation function, and the network performance is measured by the mean absolute error (MAE). Additionally, it makes sense to quantify only the oscillatory loops, so the training data set is limited to the oscillatory time series.
The number of periods can easily be converted to the oscillation frequency or period, which is essential information for plant-wide oscillation detection. When different loops have oscillations with similar frequencies, it is likely that the oscillation generated in one loop is propagated to the others. The output of the network, y_per, is the value obtained by the equation

y_{per} = \log_{10}\left(\frac{n_{per}}{len \cdot N_{DFT}}\right)    (7)

where n_per is the number of periods, len is the length of the time series, and N_DFT is the length of the DFT. The ratio inside the logarithm function transforms n_per into a value close to the peak in the DFT, and the logarithm function is used to increase the resolution of small values.
The oscillation amplitude is the parameter that identifies the severity of an oscillation. This information can be used to isolate oscillations that require immediate maintenance from those that do not affect plant performance. Here, the output value y_amp is

y_{amp} = \frac{1}{max - min}    (8)

where max and min are the maximum and minimum values of the time series, respectively, before the normalization in section 3.2. The quantified y_amp is the ratio of the amplitude that corresponds to the oscillation, disregarding the noise and disturbance. After quantification, the ratio must be multiplied back by (max − min) to obtain the true oscillation amplitude.
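A short sketch of how the two regression targets could be computed for a training example and mapped back to physical quantities after prediction is given below (illustrative only; it simply mirrors eqs 7 and 8 as reconstructed above, and the helper names are assumptions).

```python
import numpy as np

N_DFT = 2 ** 13

def period_target(n_per, length):
    """Eq 7: logarithmic encoding of the number of periods of oscillation."""
    return np.log10(n_per / (length * N_DFT))

def periods_from_prediction(y_per, length):
    """Invert eq 7 to recover the number of periods from a network prediction."""
    return (10.0 ** y_per) * length * N_DFT

def amplitude_target(series):
    """Eq 8: reciprocal of the peak-to-peak range before normalization."""
    return 1.0 / (series.max() - series.min())

def amplitude_from_prediction(y_amp, series):
    """Multiply the predicted ratio back by (max - min) to get the true amplitude."""
    return y_amp * (series.max() - series.min())
```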


In addition to the two quantified parameters presented in this section, a similar procedure can be applied to the quantification of the other features present in Table 1.

4. TECHNIQUE EVALUATION
The oscillation detection, number of periods quantification, and oscillation amplitude quantification networks are evaluated in this section on the artificial testing data set and on two industrial data sets.
The most computationally expensive step in the application of the technique is the search for the models' hyperparameters. Even then, the computational time at this stage was less than 1 h for the search on a high-performance computer in which the training was performed on an Nvidia TITAN Xp graphics card. For a simple home computer with the training running on a single processor core, the computational time may reach 1 day. For simplicity, the hyperparameters found in this work can be reused, and thus hyperparameter tuning is not necessary. With the hyperparameters in hand, the training of the final models is fast, on the order of a few minutes.
4.1. Evaluation on Artificial Data. First, the sets of hyperparameters that returned the best performance on the validation data set are presented in Table 3. As seen, the hyperparameters differ according to the network task, which reflects the importance of tuning the hyperparameters individually for each network.

Table 3. Hyperparameter Sets Obtained by a Random Search Technique

hyperparameter      oscil detection   number of per quant   oscil amplitude quant
batch size          10000             3000                  3000
optim. algorithm    Adam              Adamax                Adam
initialization      he_uniform        uniform               glorot_uniform
activ. function     hard_sigmoid      hard_sigmoid          sigmoid
layers              [400, 100, 20]    [200, 50, 10]         [100, 25, 5]

The performance of the networks is then evaluated on the training, validation, and test data sets generated in section 3.1, and the results are shown in Table 4. The performance is lower for the validation and test data sets, as expected, but the small differences do not indicate critical overfitting.

Table 4. Performance of the Three Networks on the Training, Validation, and Test Data Sets

data set     oscil detection (accuracy)   number of per quant (MAE)   oscil amplitude quant (MAE)
train        0.9961                       0.0592                      0.0261
validation   0.9771                       0.0637                      0.0530
test         0.9746                       0.0628                      0.0526

As seen in Table 4, the accuracy of the oscillation detection network is higher than 97% for the test data set, which is a good result. Figure 5 shows the distribution of the misclassified examples. The figure indicates that misclassification usually occurs due to a short length, a low number of periods, and a high disturbance variance. Additionally, the misclassified examples are usually oscillatory time series classified as nonoscillatory.
The accuracy over 97% on the testing data is achievable because the data set has a large number of examples, and both the training and testing data have a similar distribution. Thus, it is highly probable that each of the time series in the testing data has a similar pair in the training data.
Figure 6 presents eight selected misclassified time series, whose labels follow the notation expected (predicted), where 0, 1, and 2 denote nonoscillation, regular oscillation, and nonregular oscillation, respectively. The classification of a nonoscillatory time series as oscillatory is due to strong disturbances that randomly generate a pattern similar to an oscillation. The misclassification of the oscillatory examples is usually due to a low number of periods and a short length, where the oscillation is easily hidden by a strong variance.
Noise and disturbance are the influences of highest concern in oscillation detection. In Figure 7, the examples with the strongest noise and disturbance effects (0.1% strongest) are presented, in which, again, the labels follow the notation expected (predicted). The accuracy for this analysis degraded to 70%. The misclassified examples are due to oscillations hidden in strong noise or disturbance. Note that the classification of these examples is difficult even by visual inspection. For the analysis of the time series with the 1% strongest noise and disturbance influence, the accuracy was higher than 90%.
Also from Figure 5, it is seen that misclassification is slightly more frequent for nonregular oscillations. Nonregularity in the time domain spreads the oscillation power in the frequency domain, which results in a larger oscillatory band with lower amplitude.46 Thus, the peak in the frequency domain is less apparent, harming the detection. In industrial data, nonregular oscillations usually have a frequency slightly altered around a mean. This knowledge was incorporated into the detection model, which is able to deal with most of the nonregular oscillatory signals.
For the quantification networks, Figure 8 and Figure 9 show the distributions of the examples that returned the worst results when compared to the expected values. As seen in Figure 8, the worst results seem to have a random distribution. The distribution for the number of periods is an exception, where a region from 100 to 120 inexplicably returned the worst results. Figure 9 shows the distribution for the amplitude quantification network. Here, the time series with a low number of periods returned the worst results. Also, the returned results are worse for nonregular oscillations.
4.2. Evaluation on Industrial Data. The final goal of the technique is the industrial application; thus, the real practical results come from the evaluation of the technique on industrial data. In this section, the proposed technique is evaluated on two data sets. The first data set presents examples borrowed from Jelali and Huang,45 while the second data set contains time series extracted from a Brazilian refinery. Some time series are evaluated in more detail and compared to the results of six other oscillation detection methods extracted from a recent comparative work by Dambros et al.46
4.2.1. Data Borrowed from Literature. The data borrowed from Jelali and Huang45 constitute a benchmark data set for oscillation detection and diagnosis. The data set contains the controller output (OP) and process output (PV) measurements from 93 control loops. The measurements are mostly from chemical industries and from loops in which the flow is the controlled variable. The data are very distinct, with lengths ranging from 200 to 277 115 points, amplitudes ranging from 0.0184 to 2303.4, and oscillations with different waveforms and frequencies. The complete results obtained by the application of the proposed technique to the 93 control loops are presented in the Supporting Information.


Figure 5. Distribution of the misclassified time series.

Figure 6. Examples of misclassified time series.

Figure 7. Examples in the testing data set with the 0.1% strongest noise and disturbance.

The overall results for the OP and PV time series are shown in Table 5. The technique detects oscillations in almost all the provided time series, and most of these oscillations are nonregular. Additionally, only a slight difference is observed between the results for the OP and PV time series. It is known that most of the loops in the data set are oscillatory,47 but conventional techniques usually identify a lower number of oscillatory loops, which indicates that the proposed technique is sensitive to weak oscillations. For further analysis, consider the selected loops with distinct features shown in Figure 10.

The results of the oscillation detection by the proposed technique are presented in Table 6. This table also includes the results extracted from Dambros et al.46 for six well-established detection techniques. As seen, the proposed technique returned the expected results for all the time series, while the other techniques returned two to five correct results. From the analysis, it is possible to conclude that
• Loop CHEM01 shows a typical case of oscillation caused by stiction. The waveform is not a problem for any of the methods, but the mean nonstationarity, even though weak, causes misclassification by the Thornhill method.


Figure 8. Distribution of the 5% worst results for the quantification of the number of periods.

Figure 9. Distribution of the 5% worst results for the quantification of the oscillation amplitude.

• Loop CHEM09 is a clearly nonoscillatory loop incorrectly classified as oscillatory by the Zakharov method.
• Loops CHEM17 and CHEM21 present nonregular oscillations, and the time series are also affected by strong mean nonstationarity. Both features contribute to misclassification by many methods.
• The time series in loop CHEM20 show multiple low-frequency, nonregular oscillations (more evident in the PV measurement), which are most likely the cause of misclassification by three methods.
• Loop CHEM31 shows a time series with a high number of periods of oscillation and a long length. Again, the proposed technique is not affected by these different features, which is not true for two of the other evaluated methods.
These results show the capacity of the proposed technique for the detection of oscillations in time series with the different features usually found in industrial data.
The oscillation quantification of the selected time series by the proposed technique is presented in Table 7.

Table 5. Summary of the Results on the Data Set Borrowed from Jelali and Huang45

       nonoscillatory   regular oscillation   nonregular oscillation
OP     9                29                    55
PV     4                31                    58

In Figure 10, the efficiency of the quantification is easily verified. In loop CHEM01, for example, the time series clearly has approximately 10 periods of oscillation. The magnitudes of the OP and PV time series range from approximately 36 to 37 and from 56 to 58, which gives oscillation amplitudes equal to approximately 1 and 2, respectively. These approximate values match those presented in Table 7. All other time series can be verified in the same way.
Finally, two important details are observed. First, when the oscillation amplitude is variable, the quantified value is approximately the mean value. Second, when the time series has multiple oscillations, an intermediate value for the number of periods is returned. This intermediate value, seen in the OP measurement of loop CHEM20, for example, does not represent any useful information. Thus, the quantification of the number of periods by the technique is not efficient for time series with multiple oscillations. This behavior is expected since time series with multiple oscillations were not included in the training data set. Their inclusion, however, would be an exhausting task, in which many different combinations of oscillation frequencies with different numbers of oscillatory components would be required.
The evaluation of the time series by the proposed technique is not a time-consuming task. With the trained networks, the complete analysis is performed in only a few seconds, even on a simple home computer.


Figure 10. Examples of time series present in the borrowed data set.

Table 6. Evaluation of the Proposed Technique on the Borrowed Data Set, Where C Represents a Correct Result and W Represents an Incorrect Result

loop      expected   proposed   Thornhill   Miao   Forsman   Zakharov   Srinivasan   Li
CHEM01    1          C          W           C      C         C          C            C
CHEM09    0          C          C           C      C         W          C            C
CHEM17    1          C          W           C      C         C          W            C
CHEM20    1          C          C           C      W         C          W            W
CHEM21    1          C          W           W      W         W          W            C
CHEM31    1          C          W           C      W         C          C            C

4.2.2. Data Extracted from Industry. Since the first data set consists almost exclusively of oscillatory time series, a second data set is also analyzed. It comprises 602 measurements (OP, PV, and/or MV measurements, among others) from 191 loops of a refinery, extracted over a period of 2 days. In this section, selected time series and fragments are analyzed in detail. The selected fragments are presented in Figure 11, where the time series on the left and on the right correspond to time windows of 1 and 48 h, respectively. The time series were selected to test the techniques on data with different features, such as low- and high-frequency oscillation, nonregular oscillation, noise and disturbance, intermittence, and saturation. The time series present amplitudes ranging from 0.01 to 200. Here, the time series were normalized to an amplitude equal to 1 and a mean equal to 0 (after the application of the techniques) due to confidentiality issues.
The results of the oscillation detection by the proposed technique and by the six other selected techniques are presented in Table 8. Again, the proposed technique correctly classifies all time series, while the efficiency of the other techniques ranges from 7 to 11 correct detections over the 12 time series. The quantification of the time series by the proposed technique is presented in Table 9. Compared to Figure 11, the results are close to the expected values, with the exception of the number of periods quantified for time series 01H03. The incorrect result was again caused by the presence of multiple oscillations.
As seen through the analyses of both data sets, the technique is very sensitive to weak oscillations. This could cause alarm floods in a real industrial application. However, since the oscillation amplitude is quantified, weak oscillations can be dismissed by the selection of a threshold that defines the minimum amplitude for detection. Oscillations, even weak ones, often hide a problem whose detection may be relevant. Therefore, it is preferred to report any obtained information about the oscillation independent of its magnitude.48
Also for both data sets, the Miao and Li methods reported worse but similar results compared to the proposed technique. It is known from Dambros et al.46 that these methods have high sensitivity and low specificity, which means they tend to report oscillation in nonoscillatory signals. Since most of the signals tested are oscillatory, the methods' accuracy is high, which does not mean good performance. The reader can check the work by Dambros et al.46 for deeper analyses.
Note that the original data window size is large (2 days). The direct application of the technique to only the raw data favors the detection of low-frequency oscillations, while high-frequency oscillations can mostly be misclassified as noise. The window size decision is a known problem in oscillation detection whose solution is still open. The following procedure is recommended to increase the frequency range covered by the proposed technique (a sketch is given below):
(1) Evaluate the full-length data set with the method;
(2) Downsample the data by a factor of 10;
(3) Evaluate the downsampled data;
(4) If the length of the data is greater than 2000 (10 times the lower bound in the training data set), go to step 2.
Even with an increase in coverage, the detection in time series with fewer than two periods of oscillation or fewer than three points per cycle may be incorrect.
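A sketch of the recommended procedure follows (the detect function is a placeholder for the trained detection model applied to one time series, and plain decimation is an assumed implementation of the factor-of-10 downsampling).

```python
import numpy as np

def detect_all_scales(series, detect, min_length=2000, factor=10):
    """Steps 1-4: evaluate the full-length series, then repeatedly downsample by a
    factor of 10 and re-evaluate while the series is longer than 2000 points."""
    results = []
    x = np.asarray(series, dtype=float)
    results.append(detect(x))        # step 1: evaluate the full-length data
    while len(x) > min_length:       # step 4: stop once 2000 points or fewer remain
        x = x[::factor]              # step 2: downsample by a factor of 10 (decimation)
        results.append(detect(x))    # step 3: evaluate the downsampled data
    return results
```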

Table 7. Quantification of the Selected Examples Borrowed from the Literature

            number of periods          amplitude
loop        OP          PV             OP       PV
CHEM01      9.927       9.927          0.962    1.663
CHEM17      66.612      66.612         1.140    1.678
CHEM20      4.404       12.164         4.223    2.572
CHEM21      42.551      42.551         4.640    0.859
CHEM31      1019.874    1019.194       1.293    1558.549

Figure 11. Examples of time series extracted from a refinery (normalized due to confidentiality issues).


Table 8. Oscillation Detection Applied to the Data Set Extracted from a Refinery, Where C Represents a Correct Result and W Represents an Incorrect Result

time      expected   proposed   Thornhill   Miao   Forsman   Zakharov   Srinivasan   Li
01H01     1          C          C           C      W         C          C            C
01H02     1          C          C           C      C         C          C            C
01H03     1          C          C           C      C         C          C            C
01H04     1          C          W           W      W         C          C            W
01H05     0          C          C           C      C         C          C            C
01H06     0          C          C           C      C         C          C            C
48H01     1          C          W           C      C         C          W            C
48H02     1          C          W           C      C         C          C            C
48H03     1          C          W           C      W         W          C            C
48H04     1          C          W           C      W         W          W            C
48H05     0          C          C           C      C         C          C            W
48H06     0          C          C           C      C         C          C            C

Table 9. Quantification of the Selected Examples (Normalized Due to Confidentiality Issues)

time series   number of periods   amplitude
01H01         14.235              0.506
01H02         32.561              0.320
01H03         48.709              0.673
01H04         9.909               0.506
48H01         40.513              0.478
48H02         41.291              0.409
48H03         26.049              0.437
48H04         102.453             0.301


5. CONCLUSION
This work presents a new oscillation detection and quantification technique based on machine learning. More specifically, three deep feedforward networks are trained with artificial data with distinct features, aimed at oscillation detection, quantification of the number of periods, and quantification of the oscillation amplitude.
The technique was tested on artificial and industrial data. Even though it is based on the DFT, the technique was able to learn the different features found in industrial time series, which include noise, disturbances (mean nonstationarity), intermittence, saturation, and nonregularity of the oscillations. In addition, the technique is robust to triangular and square waveforms, which are common shapes of nonlinearity-induced oscillations. These waveforms generate harmonics in the frequency domain, which were disregarded. As a disadvantage, the number of periods quantified in the case of multiple oscillations is unreliable. For future works, the training of models for the specific detection and quantification of multiple oscillations is recommended.
In this work, only the number of periods and the oscillation amplitude were quantified. Other properties could be measured by a similar approach, such as the noise variance, the disturbance amplitude, and the level of oscillation nonregularity. Additionally, similar procedures could be applied to different fields of control loop performance monitoring (CPM), such as controller performance assessment, root cause analysis, stiction detection, and stiction quantification.


The data generation is the most extensive procedure required for the application of the proposed technique. Even with the data made available by the authors (by request), it is strongly suggested that future works use industrial data combined with data augmentation techniques to fulfill the data requirements of the machine learning technique. Thus, there would be a greater guarantee that the data collected from the real industrial application are close to those used for training.

APPENDIX A: LINEAR-EXPONENTIAL DISTRIBUTION
The probability density function (pdf) of the linear-exponential distribution is

f(x; m, c, M) = \begin{cases} 0 & x < m \\ \lambda & m \le x < c \\ \lambda e^{-(x-c)/M} & x \ge c \end{cases}    (9)

where m is the beginning point of the linear part, c is the transition point, M is the length required for the exponential part to decrease by a factor of e, and

\lambda = \frac{1}{M + c - m}    (10)

The cumulative distribution function is given by

F(x; m, c, M) = \begin{cases} 0 & x < m \\ \lambda (x - m) & m \le x < c \\ \lambda (c - m) + \lambda \left( M - M e^{-(x-c)/M} \right) & x \ge c \end{cases}    (11)

The linear-exponential distributions for m, c, and M equal to 0, 1, and 1, respectively, are exemplified in Figure 12. The distribution is simple to implement and has intuitive parameters.

Figure 12. Proposed distributions for m, c, and M equal to 0, 1, and 1, respectively.
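Because the linear-exponential distribution is not available in standard libraries, the following sketch draws samples from it by inverting the cumulative distribution function of eq 11 (illustrative implementation only; interpreting the three parameters listed in Table 1 as m, c, and M is an assumption).

```python
import numpy as np

def sample_linear_exponential(m, c, M, size=1, rng=None):
    """Draw samples from the linear-exponential distribution (eqs 9-11)."""
    rng = np.random.default_rng() if rng is None else rng
    lam = 1.0 / (M + c - m)                  # eq 10
    u = rng.uniform(size=size)               # uniform probabilities in [0, 1)
    p_linear = lam * (c - m)                 # probability mass of the linear segment
    return np.where(
        u < p_linear,
        m + u / lam,                                        # invert F(x) = lam*(x - m)
        c - M * np.log(1.0 - (u - p_linear) / (lam * M)),   # invert the exponential tail
    )

# Example: draw five signal lengths with m = 200, c = 2000, M = 2000 (Table 1).
print(sample_linear_exponential(200, 2000, 2000, size=5))
```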

ASSOCIATED CONTENT
Supporting Information
The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acs.iecr.9b01456. Table of the complete results obtained by the proposed technique applied to the data set borrowed from Jelali and Huang, 2010 (PDF).

AUTHOR INFORMATION
Corresponding Author
*E-mail: [email protected].
ORCID
Jônathan W. V. Dambros: 0000-0002-4584-2952
Notes
The authors declare no competing financial interest.

ACKNOWLEDGMENTS
The development of this work was only possible thanks to the support of many contributors. The authors are very grateful for the scholarship provided by CAPES under Edital CAPES no. 15/2017, the Nvidia TITAN Xp Graphics Card provided by Nvidia under the GPU Grant Program, the space provided by the University of Kaiserslautern, the industrial data provided by Jelali and Huang, and all the support provided by Petrobras and Trisolutions. Additionally, this work was partly funded by the German Research Foundation (DFG) Awards KL 2698/2-1 and GRK1589/2 and by the Federal Ministry of Science and Education (BMBF) Awards 031L0023A and 01IS18051A.

REFERENCES
(1) Bialkowski, W. L. Dreams vs. Reality: A View from Both Sides of the Gap. Pulp and Paper, Canada 1994, 19−27.
(2) Ender, D. B. Process Control Performance: Not as Good as You Think. Control Eng. 1993, 40 (10), 180−190.
(3) Torres, B. S.; Carvalho, F. B.; Fonseca, M. O.; Filho, C. S. Performance Assessment of Control Loops − Case Studies. In Proc. IFAC ADCHEM; Gramado, Brasil, 2006.
(4) Desborough, L.; Miller, R. Increasing Customer Value of Industrial Control Performance Monitoring − Honeywell's Experience. AIChE Symp. Ser. 2002, 153−186.


Industrial & Engineering Chemistry Research (5) Paulonis, M. A.; Cox, J. W. A Practical Approach for Large-Scale Controller Performance Assessment, Diagnosis, and Improvement. J. Process Control 2003, 13 (2), 155−168. (6) Choudhury, M. A. A. S. Automatic Detection and Estimation of Amplitudes and Frequencies of Multiple Oscillations in Process Data. In ADCONIP 2014; Hiroshima, Japan, 2014; pp 514−519. (7) Hägglund, T. A Control-Loop Performance Monitor. Control Eng. Pract. 1995, 3 (11), 1543−1551. (8) Miao, T.; Seborg, D. E. Automatic Detection of Excessively Oscillatory Feedback Control Loops. In Proceedings of the 1999 IEEE International Conference on Control Applications (Cat. No.99CH36328); IEEE, 1999; Vol. 1, pp 359−364. (9) Thornhill, N. F.; Huang, B.; Zhang, H. Detection of Multiple Oscillations in Control Loops. J. Process Control 2003, 13 (1), 91−100. (10) Jordan, M. I.; Mitchell, T. M. Machine Learning: Trends, Perspectives, and Prospects. Science (Washington, DC, U. S.) 2015, 349 (6245), 255−260. (11) Zabiri, H.; Maulud, A.; Omar, N. NN-Based Algorithm for Control Valve Stiction Quantification. WSEAS Trans. Syst. Control 2009, 4 (2), 88−97. (12) Venceslau, A. R. S.; Guedes, L. A.; Silva, D. R. C. Artificial Neural Network Approach for Detection and Diagnosis of Valve Stiction. In Proceedings of 2012 IEEE 17th International Conference on Emerging Technologies & Factory Automation (ETFA 2012); IEEE, 2012; pp 1−4. (13) Farenzena, M.; Trierweiler, J. O. A Novel Technique to Estimate Valve Stiction Based on Pattern Recognition; Elsevier Inc., 2009; Vol. 27. (14) Ge, Z.; Song, Z.; Ding, S. X.; Huang, B. Data Mining and Analytics in the Process Industry: The Role of Machine Learning. IEEE Access 2017, 5, 20590−20616. (15) Wuest, T.; Weimer, D.; Irgens, C.; Thoben, K. D. Machine Learning in Manufacturing: Advantages, Challenges, and Applications. Prod. Manuf. Res. 2016, 4 (1), 23−45. (16) Dambros, J. W. V; Trierweiler, J. O.; Farenzena, M. Oscillation Detection in Process Industries–Part I: Review of the Detection Methods. J. Process Control 2019, 78, 108−123. (17) Thornhill, N. F.; Shah, S. L.; Huang, B. Detection and Diagnosis of Unit-Wide Oscillations. In Process Control and Instrumentation 2000 (PCI2000); 2000; pp 26−28. (18) Thornhill, N. F.; Shah, S. L.; Huang, B. Detection of Distributed Oscillations and Root-Cause Diagnosis. In Proceedings of CHEMFAS 4; Jejudo Island, Korea, 2001; pp 167−172. (19) Forsman, K.; Stattin, A. A New Criterion for Detecting Oscillations in Control Loops. In Control Conference (ECC), 1999 European; Karlruhe, Germany, 1999; pp 2313−2316. (20) Zakharov, A.; Zattoni, E.; Xie, L.; Garcia, O.; Jämsä-Jounela, S. An Autonomous Valve Stiction Detection System Based on Data Characterization. Control Eng. Pract. 2013, 21 (11), 1507−1518. (21) Zakharov, A.; Jämsä-Jounela, S. Robust Oscillation Detection Index and Characterization of Oscillating Signals for Valve Stiction Detection. Ind. Eng. Chem. Res. 2014, 53 (14), 5973−5981. (22) Zhang, K.; Huang, B.; Ji, G. Multiple Oscillations Detection in Control Loops by Using the DFT and Raleigh Distribution. IFACPapersOnLine 2015, 48 (21), 529−534. (23) Matsuo, T.; Sasaoka, H.; Yamashita, Y. Detection and Diagnosis of Oscillations in Process Plants. In Knowledge-Based Intelligent Information and Engineering Systems; 2003; pp 1258−1264. (24) Matsuo, T.; Tadakuma, I.; Thornhill, N. F. Diagnosis of a UnitWide Disturbance Caused by Saturation in a Manipulated Variable. 
In IEEE Advanced Process Control Applications for Industry Workshop; IEEE: Vancouver, Canada, 2004; pp 1−9. (25) Matsuo, T. Application of Wavelet Transform to Control System Diagnosis. In IEE Seminar on Control Loop Assessment and Diagnosis; IEE: London, 2005; Vol. 2005, pp 81−88. (26) Srinivasan, R.; Rengaswamy, R.; Miller, R. A Modified Empirical Mode Decomposition (EMD) Process for Oscillation Characterization in Control Loops. Control Eng. Pract. 2007, 15 (9), 1135−1148. (27) Li, X.; Wang, J.; Huang, B.; Lu, S. The DCT-Based Oscillation Detection Method for a Single Time Series. J. Process Control 2010, 20 (5), 609−617.

(28) Ullah, M. F.; Das, L.; Parmar, S.; Rengaswamy, R.; Srinivasan, B. On Developing a Framework for Detection of Oscillations in Data. ISA Trans. 2019, 89, 96. (29) Wang, J.; Huang, B.; Lu, S. Improved DCT-Based Method for Online Detection of Oscillations in Univariate Time Series. Control Eng. Pract. 2013, 21 (5), 622−630. (30) Xie, L.; Lang, X.; Horch, A.; Yang, Y. Online Oscillation Detection in the Presence of Signal Intermittency. Control Eng. Pract. 2016, 55, 91−100. (31) Aftab, M. F.; Hovd, M.; Sivalingam, S. Improved Oscillation Detection via Noise-Assisted Data Analysis. Control Eng. Pract. 2018, 81, 162−171. (32) Xie, L.; Lang, X.; Chen, J.; Horch, A.; Su, H. Time-Varying Oscillation Detector Based on Improved LMD and Robust Lempel− Ziv Complexity. Control Eng. Pract. 2016, 51, 48−57. (33) Karra, S.; Jelali, M.; Karim, M. N.; Horch, A. Detection of Oscillating Control Loops. In Detection and Diagnosis of Stiction in Control Loops; Springer-Verlag: London, 2010; pp 61−100. (34) Bauer, M.; Horch, A.; Xie, L.; Jelali, M.; Thornhill, N. The Current State of Control Loop Performance Monitoring − A Survey of Application in Industry. J. Process Control 2016, 38, 1−10. (35) Thornhill, N. F.; Horch, A. Advances and New Directions in Plant-Wide Disturbance Detection and Diagnosis. Control Eng. Pract. 2007, 15 (10), 1196−1206. (36) Duan, P.; Chen, T.; Shah, S. L.; Yang, F. Methods for Root Cause Diagnosis of Plant-Wide Oscillations. AIChE J. 2014, 60 (6), 2019− 2034. (37) Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press, 2016. (38) Bishop, C. M. Pattern Recognition and Machine Learning (Information Science and Statistics); Springer-Verlag: Berlin, 2006. (39) Géron, A. Hands-On Machine Learning with Scikit-Learn and TensorFlow; O’Reilly Media, Inc., 2017. (40) Chollet, F.; et al. Keras, 2015, https://keras.io. (41) Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. TensorFlow: LargeScale Machine Learning on Heterogeneous Systems, 2015. (42) Banko, M.; Brill, E. Scaling to Very Very Large Corpora for Natural Language Disambiguation. In Proceedings of the 39th Annual Meeting on Association for Computational Linguistics; 2001; pp 26−33. (43) Nanopoulos, A.; Alcock, R.; Manolopoulos, Y. Information Processing and Technology; Mastorakis, N., Nikolopoulos, S. D., Eds.; Nova Science Publishers, Inc.: Commack, NY, USA, 2001; pp 49−61. (44) Horch, A. Oscillation Diagnosis in Control Loops ∼ Stiction and Other Causes. In 2006 American Control Conference; IEEE: Minneapolis, MN, USA, 2006; pp 2086−2096. (45) Jelali, M.; Huang, B. Detection and Diagnosis of Stiction in Control Loops; Jelali, M., Huang, B., Eds.; Advances in Industrial Control; Springer: London, 2010. (46) Dambros, J. W. V; Trierweiler, J. O.; Farenzena, M.; Kempf, A.; Longhi, L. G. S.; Teixeira, H. C. G. Oscillation Detection in Process Industries–Part II: Industrial Application. J. Process Control 2019, 78, 139−154. (47) Jelali, M.; Scali, C. Comparative Study of Valve-StictionDetection Methods. In Detection and Diagnosis of Stiction in Control Loops: State of the Art and Advanced Methods; Jelali, M., Huang, B., Eds.; Springer: London, 2010; pp 295−358. (48) Horch, A. Benchmarking Control Loops with Oscillations and Stiction. In Process Control Performance Assessment; Ordys, A., Uduehi, D., Johnson, M., Eds.; Advances in Industrial Control; Springer: London, 2007; pp 227−257.
