
Intelligent Alarm Management Applied to Continuous Pharmaceutical Tablet Manufacturing: An Integrated Approach

Anshu Gupta,† Arun Giridhar,† Venkat Venkatasubramanian,‡ and Gintaras V. Reklaitis*,†

†School of Chemical Engineering, Purdue University, West Lafayette, Indiana, United States
‡Department of Chemical Engineering, Columbia University, New York City, New York, United States



ABSTRACT: One important aspect of effective real time process management is the implementation of intelligent systems that can assist human operators in making supervisory control decisions. Conventional practice is simply to sound alarms when process variables go out of range, leaving it to the operator to interpret the alarm patterns and choose mitigation strategies. Failure of the operator to exercise the appropriate mitigation actions often has an adverse effect on product quality, process safety, and the environment. The difficulties associated with implementing intelligent control, and the opportunities for improvement, are even greater in the pharmaceutical manufacturing of solid oral dosage products due to specific processing challenges associated with particulate and granular systems. The advent of the Process Analytical Technology (PAT) initiative developed by the FDA has given the pharmaceutical industry an opportunity to apply systems engineering tools that encourage innovation in drug manufacturing. In this work, an intelligent alarm system (IAS) framework has been developed to deal with the detection, diagnosis, and mitigation of conditions that result from process anomalies. The integrated framework, which uses wavelet analysis, principal component analysis, signed directed graphs, and qualitative trend analysis, along with an ontology-based knowledge framework, enables quick detection and diagnosis of process faults. This reduces the likelihood of abnormal event progression, production disruptions, and productivity losses. The key feature of this framework is that it provides a mitigation strategy to the operator along with rationalized alarm thresholds, which helps reduce the operator workload and facilitates corrective action.

Alarm systems are one of the important tools that process plant operators use to improve plant performance and monitor plant safety. According to the latest ISA guidelines,1 an alarm is defined as: "An audible and/or visible means of indicating to the operator an equipment malfunction, process deviation, or abnormal condition requiring a response". Hence, it is a declaration that a process variable has crossed a specified threshold, and it includes audible sounds, graphical indications, and messages that require the attention of the operator. The two most important functions of an alarm system are (1) to detect an abnormal event and warn the operator and (2) not to mislead, overload, or distract the operator. However, a recent survey of alarm systems in the chemical and power industries2 shows that (i) plant operators are often heavily burdened with alarms both during steady-state operation and following plant upsets and (ii) most alarms are of little value to operators and are considered nuisance alarms. An alarm is considered a nuisance when it does not provide any useful information to the operator, as a result of which operators typically do nothing except press a button to acknowledge and silence it. Nuisance alarms also tend to mislead the operator and obscure his/her view of critical information, which can lead to potentially severe consequences.

Global competitiveness has required increased production efficiencies, which include reducing the amount of waste produced. More efficient and highly integrated plants have also tended to be more complex operations, which has further increased the likelihood of abnormal events. According to the US Chemical Safety Board, around 65 serious plant accidents have occurred over the past decades.3 The Abnormal Situation Management (ASM) consortium has estimated that $20 billion is lost by the petrochemical sector due to abnormal events. Presently, while alarm systems are generally effective at detecting faults, they provide minimal actionable information to the operator regarding the cause of the event. It is thus left to the discretion of the operator to use his experience and knowledge to take appropriate corrective action. It has been found that 75% of major accidents are due to human error.4 The dependence on the operator to take the necessary actions during abnormal situations has likely contributed to the increased number of accidents. One approach, therefore, is to provide operators with the information necessary to carry out the required task. Hence, there is a need for an effective and practical framework that can quickly detect and diagnose an abnormal event and provide that required information.

In this work, an intelligent alarm system (IAS) framework has been developed for quick detection and diagnosis of an abnormal event. The essential feature of the framework is that, along with diagnosing the fault, it also provides mitigation strategies to the operator. This helps in reducing the workload on the operator, facilitates smoother operation, and thus should result in reduced levels of off-spec product and fewer accidents. Also, the operator is provided with selected primary variables, marked with rationalized thresholds, for better monitoring of the system. There are multiple needs and



advantages associated with a good alarm system framework. These include the following:

Improving Process Safety. The proposed framework strives to lessen the workload on the operator and to provide the operator with useful information to mitigate the fault, hence helping to reduce accidents and near misses.

Reducing Cost. Better alarm management translates to less off-spec product formed during an upset situation. Also, the process can run at the required specification for a longer period of time. All of this translates to lower operating cost.

Capturing Workforce Knowledge. According to Power Engineering, nearly 20% of the entire workforce will reach retirement age by 2015 and nearly 40% by 2020.4 Hence, there is a need to capture that workforce knowledge and to help the replacement workforce maintain process efficiency. The IAS provides a structured way not only to capture important operating knowledge, through the use of repositories such as TOPS (discussed later), but also to retrieve it when it becomes relevant.

Handling Shift Handover. One of the critical jobs that an operator has to perform is to brief the incoming operator at the shift change. There are no formal guidelines on the kind of information to be passed on. An effective alarm system can provide a checklist of the information that needs to be passed on. The IAS framework, with the help of its ontological database, will contain the knowledge of any alarms that may have been disabled or triggered during a shift and will provide this information to the incoming operator.

1. INTELLIGENT ALARM SYSTEM (IAS)
The main requirement of an IAS is to understand, design, and apply effective alerting capability for the benefit of plant operators. By definition, the exceptional events associated with a process system are those that arise due to unexpected or special cause disturbances, which the regulatory controls, designed to mitigate common cause disturbances, cannot address. Nevertheless, abnormal events present themselves with warnings, which often involve combinations of process variable trends with subtle differences. Particularly for exceptional events (EEs) characterized by trends involving multiple variables simultaneously, operators may not easily recognize these warning signs. However, a well-planned alarm system, one designed from the point of view of abnormal events that are not identifiable by tracking the trend of just a single variable, can capture a greater number of these warnings. This helps in rapid detection and diagnosis of exceptional events and possible prevention of emergency shutdowns.

The IAS has adopted the "process condition model",1 shown in Figure 1, for regular monitoring of process variables. The model uses multiple thresholds, from the normal and target conditions to the abnormal conditions. The idea is to reduce the number of alarms to which an operator has to respond by rationalizing each threshold and providing useful information. The green (target) zone represents the range of desirable operating conditions and is typically associated with the highest yield or lowest cost for the process. The yellow region is the normal operating condition (NOC) zone, within which the product produced is still within the required specification. It represents the acceptable operating region for the process, and consequently there are no alarms if the process moves from the target region to the NOC region. Nevertheless, the operator may choose to take action to keep the process in the NOC or target region.

Figure 1. Process condition model.

The orange (upset) region is where the product produced will be off-spec and may have to be discarded. Alarms are activated once the threshold for NOC is crossed. Once a process variable goes out of the acceptable NOC region, the diagnostic framework kicks in and provides a mitigation strategy to the operator. If the operator is not able to bring the process back to NOC or to contain it within the upset region, the process moves toward emergency shutdown (ESD). The ESD zone is designated in red. An ESD consists of a preplanned sequence of actions designed to shut down the plant when plant operations come too close to unsafe conditions. ESDs in general are context dependent, and thus there may be multiple ESDs prepared and in readiness. All plants strive to avoid ESD, primarily for safety but also for economic reasons.

Alarm thresholds are designed to mark the boundaries of the various zones of the process condition model. A set of allowable ranges of values for each process variable is needed to specify the actual alarm thresholds. This implies that an operating region (called the design space in the pharma context) needs to be created, either by using a model for the given system or empirically through experiments/simulation. The threshold values are critical to the functioning of the process, and an algorithm to determine them has been proposed by Gupta et al.5 (A minimal sketch of this zone-based alarm logic is given at the end of this section.)

The aim of the IAS is to reduce the workload of the operator without compromising the safety and productivity of the plant. It is thus necessary to identify and configure vital alarms. "Vital" is not synonymous with "emergency" but corresponds to anything that, if not managed properly, may lead to the production of off-spec product or could endanger plant safety. Following these criteria for the IAS, there are four distinct stages in designing an alarm system:

Action. An alarm should be configured only if it requires operator attention, but it need not be the other way around; i.e., not every operator action should be preceded by an alarm. Also, only important abnormalities should be alarmed. This helps in controlling the number of alarms, which has been one of the continuing problems in the industry.

Response. The alarm system should be configured in such a way that it is able to detect the process deviation and raise a flag, indicating to the operator that something has gone wrong in the system.

Information. The system should be able to provide adequate information to the operator regarding the kind of fault that has occurred in the system. It should also be able to provide a mitigation strategy to the operator, which could help the operator restore the process to its normal state.


Time. The above stages should be carried out in a timely fashion, so that the operator has enough time to successfully remedy the situation.

All types of manufacturing operations can benefit from an effective alarm system, since all manufacturing plants in industries such as chemical, petrochemical, pharmaceutical, and power generation experience abnormal situations. There is also a need for alarm management for economic, safety, and operational reasons. In this work, the application of the alarm system is demonstrated for pharmaceutical manufacturing. The dry granulation line offers interesting challenges and opportunities, since it involves various types of solids processing operations whose characteristics are not well understood. A brief introduction to the challenges associated with the pharmaceutical industry is given in the next section.
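As noted above, the zone boundaries of the process condition model translate directly into simple alarm logic. The following Python sketch is illustrative only: the band limits are hypothetical placeholders rather than the rationalized thresholds of ref 5, and none of the names are taken from the IAS implementation.

# Illustrative sketch: zone-based alarm evaluation for one process variable,
# following the process condition model (target / NOC / upset / ESD).
from dataclasses import dataclass

@dataclass
class ZoneThresholds:
    target: tuple    # (low, high) desirable operating band
    noc: tuple       # (low, high) acceptable band, no alarm inside
    upset: tuple     # (low, high) off-spec band, alarm plus mitigation advice
    # anything outside the upset band is treated as the ESD zone

def classify(value: float, z: ZoneThresholds) -> str:
    """Return the process condition zone for a single measurement."""
    lo, hi = z.target
    if lo <= value <= hi:
        return "target"
    lo, hi = z.noc
    if lo <= value <= hi:
        return "NOC"
    lo, hi = z.upset
    if lo <= value <= hi:
        return "upset"   # alarm raised, mitigation strategy retrieved
    return "ESD"         # hand over to the emergency shutdown sequence

# Hypothetical bands for the roll gap (mm); Table 1 gives only the operating
# range (1-3 mm) and the NOC value (2 mm), so these limits are illustrative.
roll_gap = ZoneThresholds(target=(1.9, 2.1), noc=(1.7, 2.3), upset=(1.0, 3.0))
print(classify(2.05, roll_gap))  # -> "target"
print(classify(2.6, roll_gap))   # -> "upset"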

2. PHARMACEUTICAL INDUSTRY CHALLENGES
The pharmaceutical industry has traditionally used the batch mode of manufacture. This has in part been reinforced by FDA regulations that require the pharmaceutical industry to track products by lot for purposes of quality assurance. Second, there has been a lack of continuous processing equipment at the scale required for pharmaceutical manufacturing. Third, there has been a lack of economical and robust online process measurement tools that would allow real time monitoring of process operations, especially of powder and granular material properties. The recent initiatives by the FDA relating to PAT and quality by design have encouraged adoption of real time process monitoring and control and, most recently, of continuous manufacturing, at least by the leading members of the pharmaceutical industry.6

Continuous manufacturing in general refers to the mode of process operation in which critical process variables remain essentially constant over an extended period of time. It is the common mode of operation in a large fraction of the process industries because it has substantive advantages over batch manufacturing. It is easier to scale up, leads to higher equipment and thus capital utilization, is more amenable to process control, and facilitates efficiencies in process integration. Potentially, it also reduces wasted production, both through the mitigation of common cause variations by process control and by avoiding the discarding of an entire off-spec batch; only the material that is actually off-spec needs to be diverted in real time. For these reasons, there has been growing interest in continuous manufacturing in the pharma industry. The feasibility and advantages of various continuous unit operations have been studied, and there is recognition that in many, but certainly not all, instances continuous processes can be implemented with significant economic benefit.7,8 However, the adaptation to manufacturing of solid oral dosage products poses interesting challenges due to the more complex physics of solids processing, including the potential of powder blends to segregate and of powder flows to clump, bridge, and stratify. Moreover, there are limited sensing technologies available to monitor process variables, such as powder flow and composition, online, which poses challenges for implementing process control. However, over the past few years, there has been considerable work on the adaptation of various sensor technologies to measure such process variables. In this work, near-infrared (NIR) spectroscopy has been used to measure ribbon density and moisture content.

3. IAS FRAMEWORK
The objective of the IAS framework is to notify operators of abnormal process operations. Figure 2 shows the data flow in a general IAS system.

Figure 2. IAS dataflow.

A general IAS system consists of two important parts: The Ontology for Particulate Systems (TOPS) and the Exceptional Event Management (EEM) framework. TOPS is the ontological knowledge management framework that serves as the repository for process models, fault signatures, and the mitigation strategies associated with each fault. EEM deals with fault detection, diagnosis, and mitigation of conditions that result from special cause process anomalies. Process monitoring is performed through DeltaV, which collects online measurements and supplies them to the EEM and TOPS frameworks. The IAS framework has oversight over the regulatory control and takes action when the process deviates from NOC despite regulatory control actions.

The integrated system that has been developed as part of the IAS is described in Figure 3. As shown, all of the process variables being monitored are pretreated for noise removal using wavelet analysis. The denoised data are then used in principal component analysis (PCA). If there is a fault in the system, it shows up on the Q-residual and T2 plots, as the values exceed specified upper limits. Also, once a fault is detected in the system, PCA provides a contribution plot, which helps to isolate the fault to the unit operations contributing the most. If several variables contribute significantly to the fault, all the unit operations concerned are declared faulty, which allows detection of faults across multiple unit operations. At this point, the diagnosis begins, and the qualitative response of the fault signatures is compared against an initial response table (IRT) of known exceptional events to give a set of possible candidate faults. The trends of these candidate faults are then screened against the qualitative trend analysis (QTA) patterns stored in TOPS to narrow the candidate list to the most probable faults. Discussion of the SDG/QTA methods can be found in refs 9 and 10. If the fault signatures do not match those stored in TOPS, the fault is declared to be novel, and an advisory is issued indicating the need for intervention and further study by the operators and plant engineers.

The initial version of the EEM framework, developed by Hamdan et al.,9 used deviation from set points as the metric for fault detection and employed SDG/QTA for fault diagnosis. The advances to that framework proposed in this study include the use of wavelet analysis as a pretreatment step for denoising the data and facilitating trend extraction, principal component analysis (PCA) for fault detection along with alarm thresholds, and further expansion of the fault library.
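As an illustration of the diagnosis step described above, the following Python sketch shows how an initial response table lookup can narrow a detected deviation to a set of candidate faults. The table entries, variable names, and qualitative signs are hypothetical examples, not the contents of TOPS.

# Minimal sketch (not the authors' TOPS/EEM code) of an IRT lookup.
# Qualitative deviations are encoded as '+', '-', or '0' per monitored variable;
# the fault names and signatures below are illustrative placeholders.
IRT = {
    "no powder entering roll region": {"roll_gap": "-", "feed_screw_speed": "+", "ribbon_density": "-"},
    "controller failure":             {"roll_gap": "-", "feed_screw_speed": "0", "ribbon_density": "+"},
    "feeder blockage":                {"feeder_B_rate": "-", "roll_gap": "-", "feed_screw_speed": "+"},
}

def candidate_faults(observed: dict) -> list:
    """Return faults whose stored signature is consistent with the observed deviations."""
    candidates = []
    for fault, signature in IRT.items():
        if all(observed.get(var, "0") == sign for var, sign in signature.items()):
            candidates.append(fault)
    return candidates

observed = {"roll_gap": "-", "feed_screw_speed": "+", "ribbon_density": "-"}
print(candidate_faults(observed))
# The candidates would then be screened against the QTA trend templates stored in TOPS.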


Figure 3. IAS framework.

A novel IAS framework has been developed by combining the above-mentioned methodologies; it is able to filter out noise, which helps in better detection and diagnosis of faults. The methodologies underlying these additions are discussed next, and the advantages they provide to the framework are discussed subsequently.

3.1. Wavelet Analysis. Noise appears in a signal as high-frequency components or spikes due to stochastic variation in variable values. It is known to affect the robustness of process analysis and decision methods, and therefore it is desirable to extract the true signal from the noise-corrupted data prior to carrying out any detailed analysis. In general, most of the true signal is contained in the low-frequency components. Thus, the traditional approach to filtering is to remove the high-frequency components above a certain level, assuming that these are associated with noise. Exponential and polynomial filters are the most commonly used filtering algorithms in the process industries.11 In general, however, these cannot handle signal spikes effectively and lead to heavy filtering of data with a poor signal-to-noise ratio (SNR). In these respects, they have limitations in online applications.

The wavelet transform (WT) addresses some of these limitations and effectively removes high-frequency noise as well as sharp spikes in the data, without filtering out important features of the process data. The WT acts as a form of mathematical microscope through which different parts of the signal are analyzed by adjusting the focus. The WT can be seen as the correlation between the signal and a set of functions that are small waves, called wavelets. Each wavelet (also called a daughter wavelet) is generated by scaling and translating one original wavelet, called the mother or basic wavelet. Scaling implies that the mother wavelet is either dilated or compressed, and translation implies shifting of the mother wavelet in the time domain. The wavelet family can be defined as

\psi_{a,b}(t) = \frac{1}{\sqrt{|a|}} \, \psi\!\left(\frac{t-b}{a}\right)    (1)

where a is the dilation parameter and b is the translation parameter. A detailed mathematical treatment of wavelets can be found in the monograph by Daubechies.12 Wavelet analysis is a powerful tool for decomposing a time series into time-frequency space and finds application in many fields. Torrence and Compo13 provide a general overview of wavelet analysis, and Angrisani et al.14 demonstrated the use of wavelets for the automatic detection and measurement of transients.


There have also been applications in process monitoring,11 medical imaging,15 and data denoising,16 to name a few. In this work, wavelets are used as a data pretreatment step to remove noise, improve process monitoring, and support the extraction of trends for diagnostic purposes.

Algorithm for Wavelet Denoising.
• Decomposition: Apply the appropriate wavelet transform to the given signal to obtain a set of coefficients.
• Detail coefficient thresholding: Apply the required thresholds to the obtained detail coefficients to get a new set of thresholded coefficients.
• Reconstruction: Compute the signal from the new set of coefficients by applying the inverse wavelet transform to obtain the denoised signal.

In this work, a level-3 biorthogonal wavelet, using a 3-point decomposition and a 7-point reconstruction of the signal (bior 3.7, level 3), is chosen for denoising the data, and soft thresholding is used. Donoho17 provides a description of the methodology for choosing appropriate threshold values. An example of a denoised signal is given in Figure 4, where the top graph is the original signal and the middle graph is the denoised signal. The bottom graph is the projection of the denoised signal onto the original signal.
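The three-step denoising procedure can be sketched as follows using the PyWavelets package (the paper does not state which software was used, so this is an assumed implementation). The wavelet and level follow the text (bior 3.7, level 3, soft thresholding); the threshold value here is the universal threshold of Donoho,17 which is one of the standard choices.

# Sketch of decomposition / detail-coefficient thresholding / reconstruction.
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="bior3.7", level=3):
    # 1. Decomposition: approximation plus detail coefficients
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise scale estimated from the finest detail coefficients (median absolute deviation)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    threshold = sigma * np.sqrt(2.0 * np.log(len(signal)))
    # 2. Soft thresholding of the detail coefficients; the approximation is left untouched
    coeffs[1:] = [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
    # 3. Reconstruction by the inverse transform
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Example: a noisy step signal, such as a roll-gap measurement with sensor noise
t = np.linspace(0, 10, 512)
clean = np.where(t < 5, 2.0, 1.6)
noisy = clean + 0.05 * np.random.randn(t.size)
denoised = wavelet_denoise(noisy)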

Figure 4. Example of a denoised signal.

3.2. Principal Component Analysis (PCA). PCA is based on an orthogonal decomposition of the covariance matrix of the process variables along the directions that explain the maximum variation of the data. It has been widely used for process control and for fault detection and diagnosis (FDD); MacGregor et al.18,19 give an overview of its uses in various process control and FDD schemes. It is a statistical quantitative feature extraction method that belongs to the class of process history-based methods.20 The main purpose of using PCA is to find factors that have a much lower dimension than the original data set and that can properly describe its major trends. Since it does not require an explicit system model and is capable of handling high-dimensional and correlated process variables, PCA has proven to be a powerful tool for addressing exceptional events. Here, PCA is used for fault detection and identification through the application of Hotelling's T2 and Q-residual statistics.

4. EXPERIMENTAL SETUP
This study was carried out on a continuous dry granulation line consisting of two Schenck AccuRate PureFeed AP-300 loss-in-weight feeders, which feed the raw materials (API and excipients) into a Gericke GCM-500 continuous blender. The powder blend is then fed into an Alexanderwerk WP120 × 40 roller compactor that compacts the blend into ribbons. Microcrystalline cellulose (MCC, Avicel PH-200), with a nominal particle size of 180 μm and a loose bulk density ranging from 0.29 to 0.36 g/cm3, and acetaminophen (APAP) are used in these experiments. The APAP is dried for 24 h at 50 °C before the experiments to improve its flowability by reducing its moisture content. The operating ranges of all the process parameters are given in Table 1. The process parameters and materials used are the same throughout the experiments, unless mentioned otherwise.

Table 1. Operating Range and Normal Operating Condition for the Process Variables

process parameter                      operating range    NOC
feeder A powder wt (kg)                                   variable
feeder A feed rate (kg/h)              0−14               4.8
feeder B powder wt (kg)                                   variable
feeder B feed rate (kg/h)              0−14               11.2
blender speed (rpm)                    0−340              200
roller compactor (RC) roll gap (mm)    1−3                2
RC hydraulic pressure (bar)            0−230              30
RC roll speed (rpm)                    3−13               6
RC feed screw speed (rpm)              19−102             26

The roll gap for the roller compactor can be controlled using an embedded single-input/single-output (SISO) feedback controller. If the roller compactor is operated in open loop, the feed screw speed, hydraulic pressure, and roll speed have to be specified, and the resulting roll gap is measured. Closed-loop operation requires that the hydraulic pressure, roll speed, and desired roll gap be specified; the feed screw speed is then adjusted to maintain the desired roll gap. The roller compactor is connected to a DeltaV distributed control system (DCS) over Ethernet/IP, and the DeltaV process historian retains all process data.

The ribbon density and moisture content were measured using a Turbido OFS-12S-120H NIR sensor obtained from Solvias AG, Switzerland. A partial least-squares (PLS) calibration model was developed for the spectra to predict ribbon density. The spectrum from each ribbon was exported to Unscrambler X and analyzed using the PCA and PLS methods. The baseline shift of the original spectra is used for the modeling and prediction of density. SNV pretreatment was used to remove other physical property information from the spectra for the moisture content measurement. A detailed description of the chemometric model development and its comparison with a microwave sensor is provided by Austin et al.21 Also, the alarm thresholds for proper monitoring of the system were determined using the methods reported by Gupta et al.5
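The chemometric calibration step can be sketched as follows. The original work used Unscrambler X; the snippet below substitutes scikit-learn for illustration, and the placeholder data, the SNV helper, and the choice of five latent variables are assumptions rather than details taken from the paper.

# Hedged sketch of a PLS calibration of NIR spectra against reference ribbon densities.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def snv(spectra):
    # Standard normal variate pretreatment: center and scale each spectrum
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

# Placeholder calibration data: rows are NIR spectra, density holds reference values (g/cm3)
rng = np.random.default_rng(0)
spectra = rng.normal(size=(60, 200))
density = 1.0 + 0.2 * rng.random(60)

X = snv(spectra)
pls = PLSRegression(n_components=5)                # latent variables; chosen by cross-validation in practice
cv_r2 = cross_val_score(pls, X, density, cv=5, scoring="r2")
pls.fit(X, density)
predicted_density = pls.predict(snv(spectra[:5]))  # prediction for new spectra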

4.1. Fault Detection. A PCA model is built using the normal operating conditions (NOC) specified in Table 1. Process variables that have inherently minimal noise levels are excluded from the model development, as they would lead to a singular covariance matrix. Consequently, the blender speed (constant at 199.8 rpm) and the roll speed (constant at 6 rpm) are excluded when developing the model. It is assumed that any change in these process variables will be immediately detectable.

It can be seen from Figure 5 that the first five principal components explain over 90% of the variance; hence, those five are retained. This also corresponds to the physical situation of the process system, in which there are five independent process variables.

Figure 5. Variance explained by each principal component.


The system is monitored using PCA with the help of Hotelling's T2 and Q-residual statistics. An upper limit for Hotelling's T2 is calculated using the F-distribution, as given by eq 2, where k is the number of PCs retained, n is the number of process samples, and α is the confidence limit (CL), taken as 95%. A process is considered to be operating in extreme regions if the T2 value is above this upper limit.

T^2_{UL} = \frac{k(n-1)}{n-k} F_{k,\,n-k,\,\alpha}    (2)

The Q-residual plot is used to monitor the residuals of the system. Its upper limit is given by eq 3, where λ_j is the eigenvalue associated with the jth principal component, m is the total number of principal components, and c_α is the normal deviate corresponding to the confidence limit α. The process is considered to be operating in NOC if the Q-residual value is below this upper limit.

SPE_\alpha = \theta_1 \left[ \frac{c_\alpha h_0 \sqrt{2\theta_2}}{\theta_1} + 1 + \frac{\theta_2 h_0 (h_0 - 1)}{\theta_1^2} \right]^{1/h_0}, \qquad
\theta_i = \sum_{j=k+1}^{m} \lambda_j^{\,i}, \qquad
h_0 = 1 - \frac{2\theta_1 \theta_3}{3\theta_2^2}    (3)
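A compact sketch of the monitoring calculations of eqs 2 and 3 is given below. This is an illustration written for this discussion, not the authors' code; X_noc is assumed to be an n x m matrix of NOC training data, and the limits follow eq 2 and the Jackson-Mudholkar form of eq 3.

import numpy as np
from scipy import stats

def pca_monitor(X_noc, variance_target=0.90, alpha=0.95):
    # Scale the NOC data and decompose its covariance matrix
    n, m = X_noc.shape
    mu, sd = X_noc.mean(axis=0), X_noc.std(axis=0, ddof=1)
    Z = (X_noc - mu) / sd
    lam, P = np.linalg.eigh(np.cov(Z, rowvar=False))
    order = np.argsort(lam)[::-1]
    lam, P = lam[order], P[:, order]
    # Retain enough PCs to explain the target fraction of variance (assumes k < m)
    k = int(np.searchsorted(np.cumsum(lam) / lam.sum(), variance_target) + 1)

    # Eq 2: Hotelling T2 upper limit
    T2_lim = k * (n - 1) / (n - k) * stats.f.ppf(alpha, k, n - k)

    # Eq 3: Q (SPE) limit from the discarded eigenvalues
    th1, th2, th3 = (np.sum(lam[k:] ** i) for i in (1, 2, 3))
    h0 = 1.0 - 2.0 * th1 * th3 / (3.0 * th2 ** 2)
    c_a = stats.norm.ppf(alpha)
    Q_lim = th1 * (c_a * h0 * np.sqrt(2 * th2) / th1
                   + 1 + th2 * h0 * (h0 - 1) / th1 ** 2) ** (1.0 / h0)
    return mu, sd, P[:, :k], lam[:k], T2_lim, Q_lim

def monitoring_statistics(x, mu, sd, Pk, lam_k):
    # T2 and Q (squared prediction error) for a single new sample
    z = (x - mu) / sd
    t = z @ Pk
    T2 = np.sum(t ** 2 / lam_k)
    resid = z - Pk @ t
    Q = resid @ resid
    return T2, Q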

4.2. Exceptional Events. Various exceptional events arising in the continuous manufacturing line were considered; they span the range of faults from those affecting only a single unit operation to those involving multiple unit operations. A representative fault associated with material properties is also included. For this study, TOPS contained the characteristics of nine different faults. Some of these faults are briefly discussed in the following.

4.2.1. No Powder Entering Roll Region. This exceptional event pertains to the roller compactor. The operating conditions are kept at NOC, with the roll gap maintained at 2 mm. At some point, the powder begins to form an agglomerate near the nip region, which leads to the formation of a clog in that area, a situation not uncommon when processing powders with a tendency to agglomerate, especially in the presence of moisture. The roll gap decreases due to the absence of powder, and to compensate the SISO controller increases the feed screw speed. The fault constitutes an exceptional event because the regulatory controller is not able to return the process to NOC. This scenario was simulated in the lab by letting the powder run out of the hopper, thus emulating the situation of no powder entering the roll region due to clogging.

4.2.2. Controller Failure for Roller Compactor. In this scenario, the controller is switched off, thus generating a controller malfunction. The case was studied to observe the behavior of the process in the absence of controller action and to validate the framework for this situation.

4.2.3. Consecutive Blockage in Feeder and Roller Compactor. In this case there is a blockage in the feeder hopper, which subsequently leads to no powder entering the roller compactor as the fault progresses to the downstream equipment. In all likelihood, the fault will be detected before the progression, but it was studied for comparison of the signal with the next case. A blockage was created in the feeder by inserting a piece of cardboard in the hopper and keeping the system running until there was no powder left, thus creating a cascading fault.

4.2.4. Simultaneous Blockage in Feeder and Roller Compactor. This scenario is similar to that of section 4.2.3, but now a blockage is introduced in the feeder and the roller compactor at the same time. Although this case is less likely to happen, it is considered so as to illustrate the difference in fault signature from the previous case and also to validate the framework for the detection of faults in multiple unit operations.

4.2.5. Moisture Content Deviation. The moisture content of the feed powder was changed abruptly, and the effect on ribbon density was studied. This is in contrast to random variations in moisture content due to localized variations: such common cause variations would be handled by the feedback control system. The case is relevant because, in general, the moisture content of the feed may change depending on changes in the plant environmental conditions. The scenario also serves to illustrate a fault that is based on material property changes.

5. RESULTS AND DISCUSSION
5.1. No Powder Entering Roll Region. The Q-residual and T2 plots are continuously monitored, and at t = 31 s both the Q-residual and the T2 value cross their upper limits, as shown in Figure 6. The time axis of the plot is extended to 220 s so as to include the 180 training data points, whose values are shown for reference. The actual test data contain only 40 data points and are plotted on the same figure for easy visualization. Since the training data points are based on the NOC condition and are obtained a priori, a discontinuity appears in the figure when the real-time test data are plotted on the same graph with the extended time axis. A similar pattern can be observed in the T2 and Q-residual plots for all the other cases.

Figure 6. Q-residual and T2 plots for fault 1: no powder entering roll region.


Once a fault is detected, the contribution plot is presented to the operator (Figure 7), and the fault can be localized to the roller compactor, based on the contributions of the feed screw speed and the ribbon density toward the fault. It can also be noted that variables from other unit operations have minimal contributions.
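A simple way to generate this kind of contribution information is to decompose the Q statistic into per-variable residual contributions and aggregate them by unit operation. The sketch below is illustrative; the variable-to-unit mapping and the model data are assumed examples rather than the exact configuration used in the study.

import numpy as np

def q_contributions(x, mu, sd, Pk):
    """Per-variable contribution to the Q (SPE) statistic for one sample."""
    z = (x - mu) / sd
    resid = z - Pk @ (z @ Pk)   # residual after projection onto the PCA model
    return resid ** 2           # contribution of each variable to Q

# Assumed mapping of monitored variables to unit operations
unit_ops = {
    "feeder A":         [0, 1],          # powder weight, feed rate
    "feeder B":         [2, 3],
    "roller compactor": [4, 5, 6, 7],    # roll gap, pressure, screw speed, ribbon density
}

def isolate(contrib, unit_ops, share=0.5):
    totals = {u: contrib[idx].sum() for u, idx in unit_ops.items()}
    worst = max(totals, key=totals.get)
    # Declare every unit whose contribution is a large fraction of the worst one
    return [u for u, v in totals.items() if v >= share * totals[worst]]

# Example with a random PCA subspace and a disturbed sample (illustration only)
rng = np.random.default_rng(1)
mu, sd = np.zeros(8), np.ones(8)
Pk, _ = np.linalg.qr(rng.normal(size=(8, 5)))    # orthonormal loadings for a 5-PC model
x = rng.normal(size=8)
x[6] += 4.0                                      # inject a deviation in a roller compactor variable
print(isolate(q_contributions(x, mu, sd, Pk), unit_ops))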

The fault signatures are depicted in Figure 8; they are shown only for the test data set and hence contain only 40 data points. The fault was detected and diagnosed within three seconds (t = 33 s) of its inception, hence preventing a loss of material and avoiding the initiation of an ESD. The black dotted line shows the progression of the fault if it is not mitigated, while the blue solid line shows the signal progression as the event is detected and diagnosed and the mitigation strategy is presented to the operator.

It can be seen from Figure 8 that, in the absence of suitable supervisory control, the feed screw speed increases to its maximum in response to the decrease in roll gap, as the controller tries to maintain the set point value. In reality, this exacerbates the situation, as it pushes more powder toward the nip region, creating a bigger jam. Hence, regulatory control fails to take corrective action in this case. It can also be noted that only the primary variables associated with the fault are presented to the operator.

Figure 7. Contribution plot for fault 1: no powder entering roll region.

5.2. Controller Failure. The system is continuously monitored, and the Q-residual and T2 values for this case are shown in Figure 9. Again, the first 180 data points represent the training data set and are presented only for reference. The next 45 data points represent the test data set and are plotted on the same figure, resulting in a discontinuity. The fault is introduced into the system at t = 19 s, and the framework was able to detect it within 2 s, at t = 21 s, as can be seen from Figure 9.

Figure 8. Process variables for fault 1: no powder entering roll region.


Figure 9. Q-residual and T2 plot for fault 2: controller malfunction.

The contribution plot (Figure 10), presented to the operator, indicates that the roll gap and the feed screw speed are the main contributors to the fault. Hence, the fault can be localized to the roller compactor. The fault was diagnosed in 6 s, at t = 25 s, as shown in Figure 11. The solid blue line shows an interruption of the signal at the point of detection and diagnosis; at this point, a mitigation strategy is provided to the operator. In this case, the feed screw speed does not change, as the feedback controller is switched off. The roll gap decreases, because it is no longer controlled, due to the roll pressure, and the density of the ribbons increases as the powders are compacted more tightly.

Figure 10. Contribution plot for fault 2: controller malfunction.

5.3. Consecutive Blockage in Feeder and Roller Compactor. The fault occurs in a feeder and is allowed to progress to the downstream unit operations. The Q-residual and T2 values for this case are shown in Figure 12. The plot includes the initial 250 training data points; a larger training data set was used in this case to account for all the NOC of the feeders. Again, the training and test data sets are plotted on the same figure for easy visualization of the data, which creates a discontinuity in the figure. The fault appeared in the system at t = 29 s, and the framework was able to detect and diagnose the fault within 5 s.

Figure 11. Process variables for fault 2: controller malfunction.


Figure 12. Q-residual and T2 plot for fault 3: consecutive blockage in feeder and roller compactor.

Figure 13. Contribution plot for fault 3: consecutive blockage in feeder and roller compactor.

Figure 14. Process variables for fault 3: consecutive blockage in feeder and roller compactor.

The contribution plot for this scenario is shown in Figure 13, which contains two plots. From the first plot, showing the initial contributions toward the fault, the fault can easily be isolated to feeder B, which is being used to feed the API. The second contribution plot corresponds to the case in which the fault is allowed to progress, resulting in faults arising in multiple unit operations (feeder B and the roller compactor). Again, the contribution plot can be used to localize the fault, even when multiple unit operations are faulty. The primary variables shown in Figure 14 are presented to the operator. The feed rate of feeder B decreases once there is bridging in the feeder, and the fault progresses to the roller compactor in about 15 s, which is the residence time of the blender. The fault signatures for the roller compactor process parameters are similar to those of case 1, but there are additional signatures for the feeders that help to distinguish this fault from the other case.

5.4. Simultaneous Blockage in Feeder and Roller Compactor. This case, in contrast to the previous one, involves a simultaneous fault occurring in multiple unit operations. The Q-residual and T2 plots are shown in Figure 15; again, the training and test data sets are plotted on the same figure, resulting in a discontinuity. The fault is introduced into the system at t = 70 s, and the framework is able to detect it within 2 s of its inception. The contribution plot (Figure 16) shows the fault to be present in feeder B and the roller compactor, as both unit operations have significant contributions toward the fault. From the fault signatures for this case, shown in Figure 17, it is evident that there is no time delay between the faults occurring in the feeder and the roller compactor. Since the framework is based on incipient fault diagnosis, it can easily differentiate this scenario from the previous case.


Figure 15. Q-residual and T2 plot for fault 4: simultaneous blockage in feeder and roller compactor.

5.5. Moisture Content Deviation. The variation of density with moisture content is shown in Figure 18. The "bin" sample is the powder taken directly from the drum and has a moisture content (MC) of 4.92%. "D1Hr" refers to powder dried for 1 h before running the experiments, containing 2.15% MC, whereas "D15M" refers to powder dried for 15 min, with 4.45% MC. "W13G" corresponds to powder that was humidified until it absorbed 13 g of water per kg of powder and has 5.47% MC. It can be noted that the density of the ribbon increases as the moisture content of the feed powder increases. The framework, with its capability of monitoring ribbon density and moisture content online, is able to detect the change, and since there was no change in the process variables, the fault signatures are easily distinguished.

Figure 18. Variation of ribbon density with moisture.

Figure 16. Contribution plot for fault 4: simultaneous blockage in feeder and roller compactor.

PCA was able to detect the faults within 3 s in all the cases. Quick detection in turn facilitates quick diagnosis, which prevents the process from producing off-spec product or moving into ESD. Hamdan et al.9 used three consecutive deviations from the set point as the detection mechanism; hence, detection took a minimum of 3 s, and each variable was examined separately instead of through a multivariate analysis. The diagnosis for all the cases, using the proposed framework, was completed within 7 s, in contrast to the 10 s reported in ref 10. This helps in reducing the waste of valuable API feed material in off-spec production and, in extreme cases, in preventing progression to ESD. Hence, the improvement is significant from the point of view of process safety and quality control.

Nevertheless, there are certain limitations associated with the framework. PCA needs a set of representative normal operating condition data for building a training model. There are multiple tuning parameters associated with wavelet analysis that need to be selected for a given system; however, general algorithms are available to set the thresholds for a given system.17 Also, the performance of the framework is highly dependent on the number of process variables that can be measured and used for monitoring the system. Generally, a larger set of measured variables will lead to improvements in detection and diagnosis.

Figure 17. Process variables for fault 4: simultaneous blockage in feeder and roller compactor.


6. CONCLUSIONS
In this work, an IAS framework was developed based on the integration of multiple methods and was applied to a continuous dry granulation line. The addition of wavelet analysis and PCA helped to make the framework more robust to noise and facilitated faster detection. The contribution plots helped in localizing the faults to particular unit operations, expediting the diagnosis process. The ontological database, which is used to store fault signatures and mitigation strategies, is essential to supporting rapid diagnosis. It also helps in capturing knowledge associated with faults and facilitates the transfer of that knowledge to operators. The representative fault scenarios considered illustrate faults isolated to a single piece of equipment as well as faults affecting multiple pieces of equipment.

From a quality management perspective, the most important faults are those associated with deviations in the material properties of intermediate and final materials. While only moisture-driven density deviations were reported here, composition and particle size distribution deviations are also important. The principal requirement for addressing these is sensing methods that, directly or through virtual sensor models, provide measurements of these properties. While only five fault cases were presented of the nine contained in TOPS, this set is being expanded over time as operating experience is accumulated. In principle, the size of the library can extend the diagnosis time; however, that effect can be mitigated through the presence of more independent data streams. In ongoing work, additional sensor technologies, such as microwave and X-ray based methods, are being investigated. Future work is also directed toward the use of automated mitigation strategies that should, even if implemented so as to require operator approval, lead to faster correction of process abnormalities.



AUTHOR INFORMATION

Corresponding Author

*E-mail: [email protected].

Notes

The authors declare no competing financial interest.



ACKNOWLEDGMENTS
This work was entirely funded by the National Science Foundation (NSF) as part of the Engineering Research Center for Structured Organic Particulate Systems (ERC-SOPS).



REFERENCES

(1) Management of Alarm Systems for the Process Industries; International Society of Automation (ISA): Research Triangle Park, NC, June 23, 2009.
(2) Liu, J.; Lim, K. W.; Ho, W. K.; Tan, K. C.; Srinivasan, R.; Tay, A. The Intelligent Alarm Management System. IEEE Intell. Syst. 2003, 18 (2), 66−71.
(3) U.S. Chemical Safety Board. www.csb.gov (accessed Nov. 2012).
(4) Rothenberg, D. H. Alarm Management for Process Control; Momentum Press: New York, 2009.
(5) Gupta, A.; Giridhar, A.; Reklaitis, G. V.; Venkatasubramanian, V. Intelligent Alarm System Applied to Continuous Pharmaceutical Manufacturing. Proceedings of ESCAPE 23, Lappeenranta, Finland, June 9−12, 2013; accepted.
(6) Guidance for Industry: PAT - A Framework for Innovative Pharmaceutical Development, Manufacturing, and Quality Assurance; U.S. Food and Drug Administration: Silver Spring, MD, Sept. 2004.
(7) Mollan, M. J., Jr.; Lodaya, M. Continuous Processing in Pharmaceutical Manufacturing. www.pharmamanufacturing.com/whitepapers/204/11.html (accessed Nov. 2012).
(8) Gernaey, K. V.; Cervera-Padrell, A. E.; Woodley, J. M. A perspective on PSE in pharmaceutical process development and innovation. Comput. Chem. Eng. 2012, 42, 15−29.
(9) Hamdan, I. M.; Reklaitis, G. V.; Venkatasubramanian, V. Real-time exceptional event management for a partial continuous dry granulation line. J. Pharm. Innov. 2012, 7 (3), 95−118.
(10) Hamdan, I. M.; Reklaitis, G. V.; Venkatasubramanian, V. Exceptional events management applied to roller compaction of pharmaceutical powders. J. Pharm. Innov. 2010, 5 (4), 147−160.
(11) Shao, R.; Jia, F.; Martin, E. B.; Morris, A. J. Wavelets and nonlinear principal components analysis for process monitoring. Control Eng. Pract. 1999, 7, 865−879.
(12) Daubechies, I. Ten Lectures on Wavelets; SIAM: Philadelphia, 1992.
(13) Torrence, C.; Compo, G. P. A practical guide to wavelet analysis; American Meteorological Society: New York, 1997; pp 61−78.
(14) Angrisani, L.; Daponte, P.; D'Apuzzo, M. A method for the automatic detection and measurement of transients. Part I: The measurement method. Measurement 1999, 25, 19−30.
(15) Zaroubi, S.; Goelman, G. Complex denoising of MRI data via wavelet analysis: Application for functional MRI. Magn. Reson. Imaging 2000, 18, 59−68.
(16) To, A. C.; Moore, J. R.; Glaser, S. D. Wavelet denoising techniques with application to experimental geophysical data. Signal Process. 2009, 89, 144−159.
(17) Donoho, D. L. De-noising by soft-thresholding. IEEE Trans. Inf. Theory 1995, 41 (3), 613−627.
(18) MacGregor, J. F.; Cinar, A. Monitoring, fault diagnosis, fault-tolerant control and optimization: Data driven methods. Comput. Chem. Eng. 2012, 47, 111−120.
(19) MacGregor, J. F.; Marlin, T. E.; Kresta, J.; Skagerberg, B. Multivariate statistical methods in process analysis and control. Chem. Process Control 1991, 79−100.



(20) Venkatasubramanian, V.; Rengaswamy, R.; Yin, K.; Kavuri, S. N. A review of process fault detection and diagnosis. Part I: Quantitative model-based methods. Comput. Chem. Eng. 2003, 27 (3), 293−311.
(21) Austin, J.; Gupta, A.; McDonnell, R.; Reklaitis, G. V.; Harris, M. T. The Use of Near Infrared and Microwave Resonance Sensing to Monitor a Continuous Roller Compaction Process. J. Pharm. Sci. 2013, submitted for publication.
