
Article

Prediction of Liquid-Liquid Flow Patterns in a Y Junction Circular Microchannel Using Advanced Neural Network Techniques
Giri Nandagopal Mukunthan Sulochana and Selvaraju Narayanasamy
Ind. Eng. Chem. Res., Just Accepted Manuscript. DOI: 10.1021/acs.iecr.6b02438. Publication Date (Web): October 7, 2016. Downloaded from http://pubs.acs.org on October 11, 2016.




Prediction of Liquid-Liquid Flow Patterns in a Y Junction Circular Microchannel Using Advanced Neural Network Techniques

M. S. Giri Nandagopal, N. Selvaraju*

Department of Chemical Engineering, National Institute of Technology Calicut, Kozhikode, Kerala, India. 19th August, 2016

*Author to whom correspondence should be addressed. Email: [email protected]; Phone: +91-495-2285409; Fax: +91-495-2287250.

ACS Paragon Plus Environment


Graphical Abstract: [Graphic summarizing the techniques applied to the liquid–liquid flow pattern map in a microchannel: Cascade Forward Network, Probabilistic Neural Network, Generalized Regression Neural Network, Adaptive Neuro-Fuzzy Inference System, ANN-Function Fitting, ANN-Pattern Recognition, and System Identification.]


Abstract: The flow pattern map for a liquid–liquid system in a 600 µm circular microchannel was experimentally investigated for varying Y junction confluence angles (10° to 180°). The experimental results, showing the distinguishing nature of the transition boundaries, were established using graphical interpretation. This paper seeks a better objective flow pattern indicator for the vast experimental data. Studies were carried out using significant feed-forward back-propagation networks and radial basis networks, namely ANN-PR, ANN-FF, CFN, PNN, GRNN and ANFIS. From the study we found that the GRNN (Generalized Regression Neural Network) gave better predictions than the other techniques. Discrete- and continuous-time state-space models for the system were also developed using the system identification technique.

Keywords: Microchannel, Y junction, flow pattern, neural networks, slug flow, prediction.


Nomenclature:
xi – input vector
wi – weight vector
w0 – bias term
fk(x) – probability density function
σ – smoothness (spread) parameter
Di – distance between the training sample and the point of prediction
x(t) – state vector
y(t) – output vector
u(t) – input vector
A – system matrix
B – input matrix
C – output matrix
D – feed-forward matrix
K – disturbance matrix
T – sampling interval
u(kT) – input at time instant kT
y(kT) – output at time instant kT
x(0) – initial states
Ij – relative importance of the jth input variable on the output variable
Ni – number of inputs
Nh – number of hidden neurons
W – connection weights
Superscripts i, h, o – input, hidden and output layers
Subscripts k, m, n – input, hidden and output neurons


1. Introduction
Green technology and safe industrial operation are two important concerns of present-day chemical engineering. Microfluidic devices and miniaturized microstructured reactor systems are found to satisfactorily fulfil these requirements. Microdevices are capable of providing high rates of heat and mass transfer owing to their high surface-to-volume ratio; hence micro heat exchangers1, micro heat pipes2, microreactors3,4 and many microanalysis chips are used in chemical5 and biological applications6. Microscale devices integrated with microsensors7 are found to be efficient in handling hazardous and highly explosive chemicals. Even for highly exothermic reactions8, high productivity can be achieved through better control.

During the last few decades, microstructured devices have been increasingly adopted to explore the behaviour of two-phase fluids inside microstructures for various research areas9. Studies have encompassed multifaceted chemical10, biological, environmental11 and physical applications. In the simultaneous flow of two phases through any conduit, the two fluids can distribute themselves in a wide variety of configurations. Based on the inherent fluid features and hydrodynamic parameters, the distribution of flow can be classified broadly into different flow regimes or patterns. Frequently observed flow patterns in microchannels are slug flow, bubble flow, elongated slug flow and deformed flow. In the slug flow regime, one liquid flows as a continuous phase, while the dispersed phase flows in the form of slugs longer than the diameter of the microchannel. Slug formation happens when the surface tension between one of the liquids and the wall material is higher than the interfacial tension between the two liquids. The phase with high surface tension flows in the form of enclosed slugs, while the other phase flows as a continuous phase forming a thin wall film. This flow pattern occurs at relatively low and approximately equal flow rates of both liquids, where interfacial forces dominate12,13. When the flow rate of the continuous phase is further increased and the flow rate of the dispersed fluid is kept low, the dispersed fluid breaks up quickly at shorter intervals, forming bubbles14. On the other hand, if the flow rate of the continuous fluid is kept low and the flow rate of the dispersed fluid is increased, the dispersed fluid detaches at longer intervals, forming elongated slugs15. On further increasing the flow rate of the dispersed fluid, deformed flow occurs, in which there is a pronounced deformation of the hemispherical caps of the slugs. This tends to develop bridges between adjacent slugs, leading to the formation of larger slugs by coalescence. This regime is less stable15.


All of this classification of flow patterns is done by visually analyzing photographic images of the flow captured with a high-speed camera. The major complication with the flow visualization technique is that identification of the flow pattern is sometimes ambiguous. This in turn reflects a lack of consistency in flow pattern terminology and makes the study of two-phase flow patterns more complex. Critical exploration of two-phase flow phenomena in a microchannel is not possible without a better understanding of the flow pattern, because many hydrodynamic parameters, including holdup16, pressure drop and mixing characteristics, are influenced by the nature of the flow pattern17. Even with the results obtained from objective methods such as hot-wire anemometry18, the conductivity probe technique and radiation attenuation, the transition boundaries are distinguished by subjective identification, because the definitions of the patterns are predominantly based on graphical interpretation and linguistic elucidation. Several researchers have attempted to establish a generalized flow pattern map including a few critical parameters such as pipe diameter, the superficial velocities of the two inlet fluids and viscosity. Initially, Cai et al. classified the flow regimes during air–water two-phase horizontal flow, obtained from a set of pattern-sensitive stochastic features derived from absolute pressure signals, using the Kohonen self-organizing feature map (KSOFM)19. Tsoukalas et al. classified patterns observed during air–water upflow from the fluctuations of area-averaged void fractions and the probability density functions (PDFs) and power spectral densities (PSDs) of the impedance signal using a neurofuzzy system20. Mi et al. used a neural network to identify flow patterns from the signals of electrical capacitance probes in a vertical channel, with excellent results21. Gupta et al. successfully predicted the attachment rate constant in flotation columns by developing a hybrid method based on four neural networks along with a simple first-principles model22. Yang et al. determined phase transport properties in high-pressure two-phase turbulent bubbly flows by training three back-propagation neural networks with the simulation results of a comprehensive theoretical model23. Malayeri et al. predicted the cross-sectional and time-averaged void fractions at varying temperatures in air–water two-phase upflow through vertical columns using a radial basis function network, with a modified volumetric flow ratio, density difference ratio and Weber number as inputs to the ANN24. Xie et al. classified flow patterns in three-phase gas–liquid–pulp fiber systems using a transportable ANN-based technique by designing a three-layer feed-forward


ANN that used seven inputs representing the characteristics of the spectral power density distribution of normalized pressure fluctuations25. Sharma et al. attempted to train artificial neural networks (ANNs) to develop an objective flow pattern indicator for air–water flows on the basis of the vast amount of literature data, investigating three different types of ANN. The feed-forward back-propagation (FFBP) technique accurately yielded the flow patterns but failed with the incorporation of transition regions26. The probabilistic neural network (PNN), based on Bayes–Parzen classification theory, gave accurate predictions of flow patterns for different channel diameters27 and inclinations; the results were validated with both experimental and theoretical models available in the literature. Timung et al. developed a flow pattern indicator for gas–liquid flow in microchannels with the help of a probabilistic neural network (PNN), using literature data for air–water and nitrogen–water flow through different circular microchannels. During training, the superficial velocities of the gas and liquid phases, channel diameter, angle of inclination and fluid properties such as density, viscosity and surface tension were considered as the governing parameters of the flow pattern28.

From the published literature, no work has been reported on the effect of confluence angle on the flow pattern in a liquid–liquid system for a circular microchannel. Considering this research gap and the significance of pattern characterization, the present work aims at finding a better objective flow pattern indicator for a liquid–liquid system with varying confluence angle. In the present work, we have studied various feed-forward back-propagation (FFBP) networks, radial basis networks (RBN) and ANFIS (Adaptive Neuro-Fuzzy Inference System) for the better prediction of the flow pattern.

2. Materials and Methods
2.1. Experimentation
The flow pattern of the water–dodecane system was studied in a Y junction circular microchannel (600 µm) for different confluence angles (10°, 20°, 30°, …, 180°). The confluence angle is defined as the angle at which the two inlet fluids collide. The schematic representation of the experimental setup is depicted in Figure 1. The channels were machined in an epoxy-based resin. The epoxy mould was prepared by mixing epoxy resin (Epofine 1564) with curing agent (Finehard 3486) at a ratio of 100:34, followed by curing at 80 °C for 8 h. The bisphenol A diglycidyl ether (DGEBA) based epoxy resin (Epofine 1564) and its amine-based curing system (Finehard 3486) were obtained from Fine Finish Organics, India. The


epoxy material used in our study is 99% chemically inert, which makes it a superior option for microchannel construction. Circular microchannels 600 µm in diameter with varying angles of confluence (10°, 20°, 30°, …, 180°) were micromachined on the epoxy moulds using a Computerized Numerical Control (CNC) machine (BFW, India). The inlets and outlet of the microchannel were connected to Teflon tubes 20 cm in length and 600 µm in diameter. The inlet fluids, water (dispersed phase) and dodecane (continuous phase), were pumped into the microchannel using syringe pumps at varying flow rates. A pinch of methylene blue (C16H18ClN3S) was added to the dispersed phase for better visibility. The syringe pump used in this study is a high-pressure liquid pump, model Zion/PTPL/41A/0389, from PlenumTech, India. The pump is of continuous type with a variable flow rate range of 0.5–50 ml/h (power: 230 V AC, 50 Hz). The syringe pump accommodates syringe sizes of 10, 20, 30, 40 and 50 ml with a mechanical accuracy of ±1% and an overall accuracy of ±2%. The flow rates of water and dodecane were varied in the order 1, 5, 10, 15, 20, …, 150 ml/h. The flow patterns generated at varying flow rate ratios of the water–dodecane system were visualized on a laboratory computer using a microscopic camera (Axioskop, USA and IDT, UK) with a maximum of 3000 frames per second and a resolution of 1024 × 1024 pixels. The setup was illuminated with an LED for better visualization. The patterns were precisely noted and checked for reproducibility.


Figure 1: Experimental setup for studying the flow pattern in a 600 µm microchannel of varying confluence angle.

2.2. Artificial Neural Network
An artificial neural network (ANN) is a network inspired by the central nervous systems of animals, in particular the brain. ANNs are used to estimate or approximate functions that can depend on a large number of inputs and that are generally unknown. There are two main types of neural networks, namely recurrent networks and feed-forward networks. In addition to these, there are a few special cases known as regression networks, which are implemented using radial basis networks. A recurrent neural network (RNN) is a class of ANN in which the connections between units form a directed cycle. This creates an internal state of the network that allows it to exhibit dynamic temporal behaviour, and it is used to represent dynamic systems. In addition, recurrent networks are relatively prone to overfitting due to their interconnection complexity. As our system is not dynamic, RNNs were not considered for this study; instead, feed-forward networks and radial basis networks, which can represent static systems, were considered.


2.2.1. Feed-forward back-propagation network
The feed-forward network is the first and simplest type of artificial neural network conceived. In this network, information moves in only one direction: forward, from the input nodes, through the hidden nodes, to the output nodes. There are no cycles or loops in the network. Unlike RNNs, feed-forward networks are less prone to overfitting. As our problem is pattern identification in nature, the ANN-Pattern Recognition (ANN-PR) network was preferred for the study. In addition, advanced feed-forward back-propagation networks such as ANN-Function Fitting (ANN-FF)29 and the Cascade Forward Network (CFN)30 were employed to obtain a better understanding and prediction of the system. From the literature we understand that ANN-FF and CFN are highly efficient in predicting static systems.

2.2.1.1. ANN-Pattern Recognition (ANN-PR)
The ANN-PR network is a two-layered feed-forward network with 10 sigmoid hidden neurons and 4 output neurons. The network diagram of the ANN-PR for the system is illustrated in Figure 2. It can classify vectors arbitrarily well, given enough neurons in its hidden layer. For pattern recognition, the neural network has to classify inputs into a set of target categories; hence, each of the four flow patterns was assigned a number: 1, 2, 3, 4. If the assigned numbers are chosen smaller, such as 0.1, 0.2, 0.3, 0.4 or 0.001, 0.002, 0.003, 0.004, the magnitude of the mean square error (MSE) reduces significantly but the frequency of error is greater, because even a small error value can change the pattern. On the other hand, if the assigned numbers are chosen large, such as 10, 20, 30, 40 or 100, 200, 300, 400, the MSE becomes too large to work with. In this study, a neural network for pattern recognition was coded in Matlab 2013. The input data were preprocessed in two steps: first, the redundant and repetitive data points were eliminated; second, the output patterns were converted to their binary codes. The neural network was then trained using 15579 (90% of 17310) experimental data points obtained from our study of the water–dodecane (liquid–liquid) system in a circular microchannel with different confluence angles. The learning algorithm and the number of neurons in the hidden layer were then optimized. From the experimental data, 10% (1731) of the points were taken as testing inputs for pattern prediction, and the predicted patterns were validated against the experimental results.
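The two-step preprocessing and the 90/10 train/test split described above can be sketched as follows. This is an illustrative Python sketch (the study used Matlab 2013); the function names and the toy data are assumptions, not the authors' code.

```python
import random

def preprocess(records):
    """Step 1: eliminate redundant and repetitive data points (exact duplicates)."""
    seen, unique = set(), []
    for rec in records:
        key = tuple(rec)
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

def encode_pattern(label):
    """Step 2: convert a flow-pattern class (1..4) to its 4-bit binary code,
    e.g. 1 -> [0,0,0,1] and 4 -> [0,1,0,0], as used for the network targets."""
    return [int(b) for b in format(label, "04b")]

def split_90_10(data, seed=0):
    """Hold out 10% of the points for testing; train on the remaining 90%."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(0.9 * len(shuffled))
    return shuffled[:cut], shuffled[cut:]
```

On the paper's 17310 points, `split_90_10` would yield the 15579/1731 partition quoted in the text.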


2.2.1.2. ANN-Function Fitting (ANN-FF)
The ANN-FF network is a two-layered feed-forward network with 10 sigmoid hidden neurons and 1 linear output neuron. In ANN-FF, the relation between input and output is assumed to be a function, which is approximated using the experimental data. The network diagram of the ANN-FF for the system can be found in Figure 2. It can fit multi-dimensional mapping problems arbitrarily well when consistent data and enough neurons are provided in the hidden layer. For function fitting, the neural network needs to map between a data set of numeric inputs and a set of numeric targets; hence each pattern was assigned a number (1, 2, 3, 4) for the same reason mentioned for ANN-PR. In this study, a neural network for function fitting was coded in Matlab 2013. Unlike ANN-PR, the input data were preprocessed in a single step, in which the redundant and repetitive data points were eliminated. The neural network was then trained using the 15579 (90% of 17310) experimental data points, and the learning algorithm and the number of neurons in the hidden layer were optimized. The optimized neural network was then tested using 10% (1731) of the data and validated against the experimental findings. The relative importance of each input on the pattern was also calculated using the partitioning of connection weights method proposed by Garson31.

2.2.1.3. Cascade Forward Network (CFN)
The Cascade Forward Network (CFN) is similar to a feed-forward neural network but includes a connection from the input, and from every previous layer, to the following layers. The concept underlying the CFN is as follows. The first step is to construct the cascade architecture by adding new neurons together with their connections to all the inputs as well as to the previous hidden neurons; this configuration is kept unchanged at the following layers. The second step is to minimize the residual error of the network by training only the newly created neuron, fitting its weights. New neurons are added to the network as long as its performance improves. The cascade-correlation technique thus assumes that all m variables x1, …, xm characterizing the training data are relevant to the classification problem. At the beginning, a cascade network with m inputs and one output neuron starts to learn without hidden neurons. The output neuron is connected to every input by weights w1, …, wm, which can be adjusted during learning. The


output y of the neurons in the network is represented by the standard sigmoid function f, as in equation 1:

y = f(x; w) = 1 / (1 + exp(−w0 − Σi=1..m wi xi))    (1)

where x = (x1, …, xm) is the input vector, w = (w1, …, wm) is an m-dimensional weight vector and w0 is the bias term. The basic architecture of the CFN is shown in Figure 2. It consists of an input layer, from which the inputs are fed to the hidden layer; a hidden layer whose neurons use a sigmoidal transfer function and whose size can be adjusted according to the complexity of the system (more neurons for more complex systems); and an output layer with sigmoidal neurons, whose number equals the number of outputs. In our study, 20 sigmoid neurons in the hidden layer and 4 sigmoid neurons in the output layer were used, since additional neurons gave no observable improvement in the results. The exceptional characteristic of the CFN is that each neuron receives a feed from every input and from each neuron in the previous level. In this study, a cascade neural network for pattern recognition was coded in Matlab 2013. As for ANN-PR, the input data were preprocessed in two steps. The neural network was then trained using the 15579 (90% of 17310) experimental data points, and the learning algorithm and the number of neurons in the hidden layer were optimized. From the experimental data, 10% (1731) of the points were taken as testing inputs for pattern prediction, and the predicted patterns were validated against the experimental results.
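The relative-importance analysis cited for ANN-FF in section 2.2.1.2 (Garson's partitioning of connection weights) can be sketched as follows. This is an illustrative Python implementation of Garson's method (the study used Matlab 2013); the weight matrices passed in are toy values, not the trained network's weights.

```python
def garson_importance(w_ih, w_ho):
    """Garson's partitioning of connection weights for a one-output network.

    w_ih[j][m] is the weight from input j to hidden neuron m; w_ho[m] is the
    weight from hidden neuron m to the output. Returns the relative importance
    I_j of each input variable, normalized so the importances sum to 1.
    """
    n_in, n_hid = len(w_ih), len(w_ho)
    contrib = [0.0] * n_in
    for m in range(n_hid):
        # total absolute input-weight mass entering hidden neuron m
        col_sum = sum(abs(w_ih[k][m]) for k in range(n_in))
        if col_sum == 0:
            continue
        for j in range(n_in):
            # share of hidden neuron m attributable to input j, scaled by
            # the magnitude of m's connection to the output neuron
            contrib[j] += (abs(w_ih[j][m]) / col_sum) * abs(w_ho[m])
    total = sum(contrib)
    return [c / total for c in contrib]
```

For a multi-output network such as ANN-FF with several flow-pattern classes, the same partitioning would be repeated per output neuron and averaged.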


Figure 2: Network architecture for feed-forward back-propagation networks: (a) ANN-PR, (b) ANN-FF, (c) CFN.

2.2.2. Radial Basis Network
The Radial Basis Network (RBN) is a special class of ANN, widely used in function approximation, time-series prediction, classification and system control. In an RBN, the output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Two established RBN techniques are the Probabilistic Neural Network (PNN) and the Generalized Regression Neural Network (GRNN); we have adopted both for comparing their prediction efficiency.

2.2.2.1. Probabilistic Neural Network (PNN)
A probabilistic neural network (PNN) is a feed-forward neural network based on Bayes–Parzen classification theory. The PNN was introduced by Donald F. Specht32, who decomposed the Bayes–Parzen classifier into numerous unit processes and made them perform in a multilayer neural network. Such a multilayer neural network can run each individual process independently in


parallel. Here we give a brief note on Bayes' theorem for conditional probability and on Parzen's method for estimating the probability density function (PDF). For a better understanding of Bayes' classification theory, consider a set of samples X = [x1, x2, …, xp] acquired from any source, where the sources belong to a number of different classes (1, 2, …, k, …, K). Let the probability of a sample belonging to the kth class be hk, and let the cost of misclassifying that sample be ck. If it is presumed that the true probability density functions of all classes, f1(x), f2(x), …, fk(x), …, fK(x), are known, then an unknown sample is classified by Bayes' theorem into the ith class if (equation 2)

hi ci fi(x) > hj cj fj(x)  for all j ≠ i    (2)

The probability density function fk(x) describes the population distribution density of class-k samples around an unknown sample. The vital difficulty with Bayes' classification theory is that the probability density function fk(x) will in most cases be unknown, so prior understanding of the sample distribution is necessary. A Gaussian distribution is assumed in most cases, but if there is a large deviation between the assumed Gaussian distribution and the true distribution, large-scale misclassification of samples occurs. The basic architecture of the PNN is shown in Figure 3: a PNN is composed of four layers, namely the input layer, radial basis layer, competitive layer and output layer. The input nodes do not perform any computational operation; they simply pass the input variables x to each neuron in the radial basis layer. The radial basis layer consists of as many neurons as there are training samples; in our study, 15579. On receiving the input, the radial basis layer calculates the Euclidean distance between each input variable and the training set of data, and the result is passed to the competitive layer. In a PNN there is one competitive neuron for each category of the target variable. The actual target category of each training case is stored with each hidden neuron; the weighted value coming out of a hidden neuron is fed only to the pattern neuron that corresponds to the hidden neuron's category. The pattern neurons add the values for the class they represent; hence, this is a weighted vote for that category. The decision layer then compares the weighted votes for each target category accumulated in the pattern layer and uses the largest vote to predict the target category.
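The Bayes–Parzen decision rule of equation 2, with a Gaussian Parzen-window estimate of fk(x), can be sketched in a few lines. This is an illustrative one-dimensional Python sketch (the study's PNN was built in Matlab 2013); the class names, sample values and σ below are assumptions for demonstration only.

```python
import math

def parzen_pdf(x, samples, sigma):
    """Parzen-window estimate of the class density f_k(x) from that class's
    training samples, using a Gaussian kernel of width sigma."""
    n = len(samples)
    return sum(
        math.exp(-((x - s) ** 2) / (2 * sigma ** 2)) for s in samples
    ) / (n * sigma * math.sqrt(2 * math.pi))

def pnn_classify(x, classes, sigma=0.5, priors=None, costs=None):
    """classes: {label: [training samples]}. Following eq 2, pick the class i
    that maximizes h_i * c_i * f_i(x); uniform priors/costs by default."""
    priors = priors or {k: 1.0 for k in classes}
    costs = costs or {k: 1.0 for k in classes}
    return max(
        classes,
        key=lambda k: priors[k] * costs[k] * parzen_pdf(x, classes[k], sigma),
    )
```

With equal priors and costs this reduces to choosing the class whose kernel-density estimate is largest at x, which is exactly the weighted vote taken by the PNN's competitive layer.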


The network used here to recognize the flow pattern as a pattern recognition problem consists of as many radial basis neurons as there are training samples (in this case 15579) and 4 competitive-layer neurons. As in other pattern recognition problems, the neural network is required to classify inputs into a set of target categories. Here too the targets are set to the numbers 1, 2, 3, 4, which are converted to their binary codes (0001, 0010, 0011, 0100) before they are fed to the network. In this study, the PNN was developed in Matlab 2013 and trained using the experimental data. Selection of an optimum value of the smoothing parameter, or spread constant (σ), is very important, as the shape of the Gaussian function depends entirely on this constant. The spread constant that gives the minimum prediction error during training and testing of the network is considered the ideal spread constant. Various algorithms and techniques, such as genetic algorithms and the trial-and-error method, have been reported for estimating the optimum spread constant. In the present study, σ was selected by trial and error, as this involves simple operations: a number of tests were conducted on a set of flow pattern data with different values of the spread constant, and the value of σ that produced the minimum misclassification along with good training (0.9 ≤ R ≤ 1) was chosen as the optimum. The trained PNN was then tested with 10% of the data (1731 points) and the flow pattern was predicted. The overall accuracy for each confluence angle was also studied.
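The trial-and-error selection of the spread constant described above amounts to a simple grid search: try several σ values and keep the one with the fewest misclassifications on the held-out data. A minimal Python sketch, assuming a generic `classify(x, train, sigma)` callable supplied by the caller (the actual study did this in Matlab with its PNN):

```python
def select_spread(train, held_out, classify, sigmas):
    """Trial-and-error spread-constant selection.

    train:    training data passed through to the classifier
    held_out: list of (x, true_label) pairs used to count misclassifications
    classify: callable (x, train, sigma) -> predicted label
    sigmas:   candidate spread constants to try
    Returns (best_sigma, misclassification_count).
    """
    best = None
    for sigma in sigmas:
        errors = sum(
            1 for x, label in held_out if classify(x, train, sigma) != label
        )
        # keep the first sigma achieving the lowest error count
        if best is None or errors < best[1]:
            best = (sigma, errors)
    return best
```

In the paper the additional training-quality condition 0.9 ≤ R ≤ 1 would be checked alongside the error count before accepting a candidate σ.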

2.2.2.2. Generalized Regression Neural Network (GRNN) Generalized Regression Neural Network (GRNN) was proposed by Donald F. Specht33. Like the PNN, the GRNN requires only a fraction of the training samples that a backpropagation neural network would need. The probability density function used in the GRNN is the normal distribution, and each training sample Xi is used as the mean of a normal distribution, as in equations 3 and 4:

Y(X) = [ Σ_{i=1}^{n} Y_i exp(-D_i^2 / 2σ^2) ] / [ Σ_{i=1}^{n} exp(-D_i^2 / 2σ^2) ]   (3)

Industrial & Engineering Chemistry Research


D_i^2 = (X - X_i)^T (X - X_i)   (4)

Here Di is the distance between the training sample and the point of prediction, and it measures how well the training sample represents the prediction point X. When Di is small, exp(-Di^2/2σ^2) becomes large; when Di is zero, exp(-Di^2/2σ^2) equals one and the evaluation point is represented entirely by that training sample. As the distance Di grows, exp(-Di^2/2σ^2) shrinks, so the contribution of distant training samples to the prediction is relatively small, while the term Yi exp(-Di^2/2σ^2) for the nearest (ith) training sample is the largest and dominates the prediction. For a larger standard deviation, or smoothness parameter σ, the point of evaluation is represented over a wider range of X; for a small value of σ the representation is limited to a narrow range of X. The network architecture of the GRNN is shown in Figure 3. A GRNN is composed of four layers, namely an input layer, a radial basis layer, a special linear layer and an output layer. The input nodes perform no computation and simply pass the input variables x to each neuron in the radial basis layer. The radial basis layer contains one neuron per training sample, in our case 15579; on receiving the inputs, it calculates the distance between each training sample and the point of prediction. The next layer, the special linear layer, is where the GRNN differs from the PNN: it contains only two neurons, a denominator summation unit and a numerator summation unit. The denominator summation unit adds up the weight values coming from each hidden neuron, while the numerator summation unit adds up the weight values multiplied by the actual target value of each hidden neuron. The output layer of the GRNN also differs from that of the PNN: it divides the value accumulated in the numerator summation unit by the value in the denominator summation unit and uses the result as the predicted target value. In this study the GRNN was developed in Matlab 2013 and trained using the 15579 experimental data points. As for the PNN, the optimum value of the smoothing parameter, or spread constant (σ), was selected by trial and error. The trained GRNN was then tested with 10% of the data (1731 points) and the flow pattern was predicted. The overall accuracy at each confluence angle was also studied.
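Equations 3 and 4, together with the numerator and denominator summation units described above, translate directly into code. A minimal numpy sketch (illustrative only, not the authors' Matlab network; `grnn_predict` and its default spread are hypothetical):

```python
import numpy as np

def grnn_predict(X_train, Y_train, X_query, sigma=0.1):
    """Minimal GRNN estimator implementing Y(X) from equations 3 and 4:
    numerator unit  = sum_i Y_i * exp(-D_i^2 / 2 sigma^2)
    denominator unit = sum_i       exp(-D_i^2 / 2 sigma^2)
    """
    X_train = np.asarray(X_train, dtype=float)
    Y_train = np.asarray(Y_train, dtype=float)
    out = []
    for x in np.asarray(X_query, dtype=float):
        d2 = np.sum((X_train - x) ** 2, axis=1)   # equation 4: D_i^2
        w = np.exp(-d2 / (2.0 * sigma ** 2))      # radial basis activations
        out.append(np.dot(w, Y_train) / w.sum())  # equation 3
    return np.array(out)
```

Unlike the PNN, which picks a class by argmax, the GRNN returns a smooth weighted average of the targets, which is then rounded or thresholded when the target is a discrete flow-pattern label.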


Figure 3:- Network Architecture for Radial Basis Networks (a) PNN, (b) GRNN.

2.3. Adaptive Neurofuzzy Inference System (ANFIS) Fuzzy logic is used to study the input parameters of real-time systems and has been applied successfully in control engineering. Combining fuzzy logic with neural networks yields very significant results. Neural networks can learn from data, but it is usually difficult to interpret the meaning associated with each neuron and each weight. In contrast, fuzzy rule-based models are easily understood because they use linguistic terms and the structure of IF-THEN rules. Unlike neural networks, however, fuzzy logic cannot learn by itself; the learning and identification of fuzzy logic systems must adopt techniques from other areas, such as statistics and system identification. Since neural networks have the ability to learn, it is natural to merge the two techniques, and this combination of fuzzy logic with the learning power of neural networks is called a neuro-fuzzy network. The structure of ANFIS contains the same components as a Fuzzy Inference System, except for the neural network block. The network architecture of ANFIS is portrayed in Figure 4. It is composed of a set of units arranged into five connected network layers.


Figure 4: Network Architecture for ANFIS

Layer 1: This layer consists of the input variables (membership functions). Here the superficial velocity of dodecane, the superficial velocity of water and the confluence angle of the microchannel are the input variables. A bell-shaped membership function is used in this study. This layer supplies the input values xi (i = 1 to n) to the next layer.

Layer 2: This is the membership layer. It checks the weights of each membership function, receives the input values xi from the first layer and acts as the membership functions representing the fuzzy sets of the respective input variables. It computes the membership values, which specify the degree to which the input value xi belongs to each fuzzy set, and these act as the inputs to the next layer.

Layer 3: This is the rule layer. Each node (neuron) in this layer performs pre-condition matching of the fuzzy rules and computes the activation level of each rule, the number of nodes being equal to the number of fuzzy rules. Each neuron of this layer calculates a rule weight, and these weights are then normalized.

Layer 4: This is the defuzzification layer. It provides the resulting output values y from the inference of the rules. Connections between layer 3 and layer 4 are weighted by the fuzzy singletons that represent another set of parameters for the neuro-fuzzy network.

Layer 5: This is the output layer. It sums all the inputs coming from layer 4 and converts the fuzzy classification result into binary form. The structure of ANFIS is tuned automatically by least-squares estimation and the backpropagation algorithm. A fuzzy set A of a universe of discourse X is represented by a collection
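The bell-shaped membership function of Layer 1 and the firing-strength normalization of Layer 3 can be made concrete with a short sketch. This is illustrative only; the parameter names a, b, c follow the usual generalized-bell convention and are assumptions, not values from the paper.

```python
import numpy as np

def gbell(x, a, b, c):
    """Generalized bell membership function: 1 / (1 + |(x - c)/a|^(2b)).
    c is the center, a the half-width, b controls the shoulder steepness."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

def normalized_firing(strengths):
    """Layer-3 step: normalize each rule's firing strength by the total."""
    s = np.asarray(strengths, dtype=float)
    return s / s.sum()
```

At the center x = c the membership is exactly 1, and at x = c ± a it is exactly 0.5, which is why a is interpreted as the half-width of the fuzzy set.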


of ordered pairs of a generic element x and its membership function µA(x): X → [0, 1], which associates a number µA(x) with each element x of X. A fuzzy logic controller works from a set of control rules, called fuzzy rules, among the linguistic variables; these rules are represented as conditional statements. The basic structure of the pattern-predictor model developed using ANFIS consists of four important parts, viz. the fuzzification, knowledge base, neural network and defuzzification blocks, shown in Figure 5. The inputs to the ANFIS system are the confluence angle, the superficial velocity of water and the superficial velocity of dodecane. These are fed to the fuzzification unit, which converts the numeric data into linguistic variables that are given as inputs to the rule base block. The ANFIS tool in Matlab developed 216 rules while training the network. The rule base block is connected to the neural network block; the backpropagation algorithm is used to train the neural network and to select the proper set of rules. For developing the pattern-prediction model, training is the key step in selecting the proper rule base; once the proper rule base is selected, the ANFIS model is ready to carry out prediction. The trained ANFIS was validated using 10% of the data. The output of the neural network unit is given as input to the defuzzification unit, where the linguistic variables are converted back into numeric data in crisp form.


Figure 5:- Block Diagram of ANFIS system for pattern prediction

2.5. System Identification:- System identification is a technique used to model unknown systems. A system identification tool fits input-output data to existing model structures from the literature and reports the fit between the generated model and the actual data. Depending on the tool used, a system may be modelled as a linear model, transfer function, nonlinear differential equation, state-space model, etc. Here, we used Matlab 2013 to model the present system as a state-space model. State-space models use state variables to describe a system by a set of first-order differential or difference equations, rather than by a single nth-order differential or difference equation. The state variables x(t) can be reconstructed from the measured input-output data, but may not themselves be measured during an experiment. A state-space model can be represented in many forms, viz. canonical form, state observable form, state controllable form, innovations form, etc. Such models are used extensively to represent electrical or electronic systems such as an induction motor. The state-space model removes the need to repeat experiments and makes the model easy to modify on a digital device such as a computer. We used the innovations form to represent our system, as it gave the best agreement with the experimental data. A typical state-space model in innovations form can be written as equations 5 and 6:

dx(t)/dt = Ax(t) + Bu(t) + Ke(t)   (5)

y(t) = Cx(t) + Du(t) + e(t)   (6)
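To make equations 5 and 6 concrete, the model can be stepped forward in time once its matrices are known. The following is an illustrative forward-Euler sketch with the noise term e(t) set to zero; `simulate_ss` is a hypothetical helper, not Matlab's estimator or simulator.

```python
import numpy as np

def simulate_ss(A, B, C, D, u, x0, dt):
    """Forward-Euler simulation of the innovations-form model (equations 5-6)
    with e(t) = 0:  dx/dt = A x + B u,  y = C x + D u."""
    x = np.asarray(x0, dtype=float)
    ys = []
    for uk in u:
        uk = np.atleast_1d(uk)
        ys.append(C @ x + D @ uk)          # output equation (6)
        x = x + dt * (A @ x + B @ uk)      # Euler step of state equation (5)
    return np.array(ys)
```

With A = 0, B = C = 1, D = 0 and a unit input, the state (and output) simply ramps up by dt per step, a quick sanity check that the two equations are wired correctly.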

where x(t) is the state vector, y(t) the output vector, u(t) the input vector, A the system matrix, B the input matrix, C the output matrix, D the feed-forward matrix and K the disturbance matrix. The model order, an integer equal to the dimension of x(t), is an important parameter to be determined. The state-space model was estimated for our system using Matlab 2013. It is often easier to define a state-space model in continuous time, because physical laws are most often described in terms of differential equations. In continuous time, the state-space description has the form of equations 7, 8 and 9:

dx(t)/dt = Fx(t) + Gu(t) + κw(t)   (7)

y(t) = Hx(t) + Du(t) + w(t)   (8)

x(0) = 0   (9)

The matrices F, G, H and D contain elements with physical significance (for example, material constants), and x(0) specifies the initial states. Here we estimated both a continuous-time and a discrete-time state-space model using time-domain data. The discrete-time state-space model structure is often written in the innovations form of equations 10, 11 and 12:

x(kT + T) = Ax(kT) + Bu(kT) + Ke(kT)   (10)

y(kT) = Cx(kT) + Du(kT) + e(kT)   (11)

x(0) = x0   (12)

where T is the sampling interval, u(kT) is the input at time instant kT, and y(kT) is the output at time instant kT. The relationships between the discrete state-space matrices A, B, C, D and K and the continuous-time state-space matrices F, G, H, D and κ are given, for piecewise-constant input, by equations 13, 14 and 15:

A = e^(FT)   (13)

B = ∫_0^T e^(Fτ) G dτ   (14)

C = H   (15)

These relationships assume that the input is piecewise constant over each sampling interval, as in equation 16:

kT ≤ t ≤ (k + 1)T   (16)

The exact relationship between K and κ is complicated. However, for short sampling intervals T, the following approximation (equation 17) works well:

K = ∫_0^T e^(Fτ) κ dτ   (17)
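Equations 13 and 14 can be evaluated jointly via the standard augmented-matrix identity expm([[F, G], [0, 0]]·T) = [[A, B], [0, I]]. The following numpy-only sketch is illustrative, not the N4SID estimator; the truncated-series exponential is an assumption adequate only for small, well-scaled matrices.

```python
import numpy as np

def expm_series(M, terms=30):
    """Matrix exponential by truncated Taylor series (fine for small ||M||)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def discretize(F, G, T):
    """Zero-order-hold discretization (equations 13-14):
    A = e^(F T),  B = integral_0^T e^(F tau) G dtau,
    computed jointly from the augmented matrix [[F, G], [0, 0]]."""
    n, m = G.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = F
    M[:n, n:] = G
    E = expm_series(M * T)
    return E[:n, :n], E[:n, n:]
```

For a pure integrator (F = 0, G = 1) and T = 2 this returns A = 1 and B = 2, matching the integral in equation 14 evaluated by hand; the same routine applied with κ in place of G gives the approximation of equation 17.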

In this study, the state-space model in the Matlab System Identification tool was selected to fit the input and output data. The order of the state-space model was set to 4, and the data were fitted using the N4SID algorithm. The initial states were taken as zero, and both discrete and continuous models were generated. As the innovations form demands a noise matrix, a noise matrix K was selected for estimation, and the model thus defined was estimated.

3. Results and Discussion

3.1. Experimental Results:- The flow pattern of dodecane (carrier fluid)-water (dispersed fluid) in a 600 µm channel has been plotted for varying angle of confluence in a Y-shaped microchannel. The flow patterns for different superficial velocities of dodecane and water are represented graphically in Figure 6. Four types of flow pattern were observed, namely slug flow, bubble flow, deformed flow and elongated slug flow. Each pattern has been assigned a numerical value and symbol as in Table 1.

Flow pattern            Numerical value
Slug Flow               1
Bubble Flow             2
Deformed Flow           3
Elongated Slug Flow     4

Table 1:- Numerical value assigned for the different flow patterns (symbols as used in Figure 6).


In the slug flow regime, one liquid (dodecane) flows as a continuous phase while the dispersed phase (water) flows in the form of slugs longer than the diameter of the microchannel. Slug formation occurs when the surface tension between one of the liquids and the wall material is higher than the interfacial tension between the two liquids: the phase with high surface tension flows as enclosed slugs while the other phase flows as a continuous phase forming a thin wall film. This flow pattern occurs at relatively low and approximately equal flow rates of both liquids, where interfacial forces dominate, and is a commonly observed stable flow regime in microchannels12. The bubble flow regime is obtained when the flow rate of the continuous phase is increased further while the flow rate of the dispersed fluid is kept low; the continuous phase then flows with the dispersed phase travelling inside it as bubbles13. On the other hand, if the flow rate of the continuous fluid is kept low and the flow rate of the dispersed fluid is increased, elongated slugs of dispersed fluid are formed. On further increase of the dispersed-fluid flow rate, deformed flow occurs: there is a pronounced deformation of the hemispherical caps of the slugs, which tends to develop bridges between adjacent slugs, leading to the formation of larger unstable slugs by coalescence15. The experimental results for varying confluence angle of the Y junction show that the confluence angle does affect flow pattern formation in the microchannel. The change in flow pattern with confluence angle is portrayed in Figure 6 (i) to Figure 6 (xviii). The results show that bubble formation increased, while deformed flow decreased, as the confluence angle was increased from 10 to 180 degrees. The literature shows that, among the three stages of slug formation (dripping, jetting and squeezing), squeezing plays the most significant role in deciding the flow regime in microchannels. Hence, one can understand that changing the confluence angle could influence the squeezing mechanism of flow pattern formation.

Figure 6:- Flow pattern map for the Dodecane-Water system in a 600 µm channel for varying confluence angle: (i) 10 Degree (ii) 20 Degree (iii) 30 Degree (iv) 40 Degree (v) 50 Degree (vi) 60 Degree (vii) 70 Degree (viii) 80 Degree (ix) 90 Degree (x) 100 Degree (xi) 110 Degree (xii) 120 Degree (xiii) 130 Degree (xiv) 140 Degree (xv) 150 Degree (xvi) 160 Degree (xvii) 170 Degree (xviii) 180 Degree. Each panel plots superficial velocity of water (m/s) against superficial velocity of dodecane (m/s) (flow rates in ml/hr for panel v), with points marked Slug, Bubble, Deformed Flow and Elongated Slug.

3.2. Feed-forward back-propagation network:-

3.2.1. Selection of backpropagation training algorithm:- To determine the best backpropagation (BP) training algorithm, twelve backpropagation algorithms were studied. Table 2 compares the different backpropagation training algorithms for ANN-PR, ANN-FF and CFN. From the tabulated results one can see that the resilient backpropagation algorithm gave the best results, with R (residual) values of 0.957, 0.9002 and 0.95288 for ANN-PR, ANN-FF and CFN respectively. The corresponding MSE (mean square error) values for ANN-PR, ANN-FF and CFN, 0.0167, 0.0110 and 0.0175, are lower than those of the other algorithms. Hence, the resilient backpropagation algorithm was preferred in this study.


| Back Propagation Algorithm | Function | MSE (ANN-PR / ANN-FF / CFN) | Epoch (ANN-PR / ANN-FF / CFN) | Residual Value R (ANN-PR / ANN-FF / CFN) |
|---|---|---|---|---|
| Resilient backpropagation | trainrp | 0.0167 / 0.011012 / 0.0175 | 332 / 744 / 799 | 0.957 / 0.9002 / 0.95288 |
| Levenberg-Marquardt | trainlm | 0.0160 / 0.0978 / 0.0179 | 573 / 135 / 89 | 0.955 / 0.87 / 0.951 |
| Bayesian regularization backpropagation | trainbr | 0.018 / 0.079 / 0.0183 | 249 / 1000 / 42 | 0.943 / 0.89 / 0.943 |
| BFGS quasi-Newton backpropagation | trainbfg | 0.107 / 0.083 / 0.037 | 22 / 86 / 260 | 0.695 / 0.85 / 0.91 |
| Scaled conjugate gradient backpropagation | trainscg | 0.044 / 0.122 / 0.086 | 168 / 136 / 245 | 0.876 / 0.827 / 0.761 |
| Conjugate gradient backpropagation with Powell/Beale restarts | traincgb | 0.104 / 0.1008 / 0.1028 | 13 / 114 / 189 | 0.697 / 0.8713 / 0.735 |
| Conjugate gradient backpropagation with Fletcher-Reeves updates | traincgf | 0.09 / 0.25122 / 0.089 | 61 / 63 / 354 | 0.6964 / 0.66988 / 0.775 |
| Conjugate gradient backpropagation with Polak-Ribiere updates | traincgp | 0.1 / 0.10048 / 0.4498 | 23 / 114 / 12 | 0.6955 / 0.88 / 0.318 |
| One-step secant backpropagation | trainoss | 0.09 / 0.43488 / 0.104 | 25 / 32 / 361 | 0.6963 / 0.07 / 0.701 |
| Gradient descent with momentum and adaptive learning rate backpropagation | traingdx | 0.0827 / 0.18 / 0.7488 | 141 / 75 / 7 | 0.7397 / 0.77 / 0.266 |
| Gradient descent with momentum backpropagation | traingdm | 0.357 / 3.33 / 0.4847 | 1000 / 6 / 6 | 0.5507 / 9.8×10-5 / 0.189 |
| Gradient descent backpropagation | traingd | 0.1167 / 0.1683 / 0.6050 | 34 / 1000 / 6 | 0.728 / 0.77 / 0.612 |

Table 2:- Comparison of 12 backpropagation algorithms for ANN-PR, ANN-FF and CFN.


3.2.2. Optimization of the number of neurons:- The number of neurons in the hidden layer has to be optimized for good neural network performance. The optimum number of neurons was determined from the minimum MSE and maximum R (residual value) of the training and prediction sets, using resilient backpropagation as the training algorithm and varying the number of neurons from 1 to 26. Figures 7 and 8 show the effect of the number of hidden-layer neurons on network performance in terms of MSE and R for ANN-PR, ANN-FF and CFN. For CFN, the number of neurons was increased from 2 to 26. A high MSE of 0.072 was observed with two neurons, which decreased to 0.01757 when the number of neurons was increased to 20; increasing the number of neurons beyond 20 did not decrease the MSE significantly and in fact gave higher MSE values due to overfitting. This choice was supported by an increased R value of 0.95288; hence, 20 neurons were used for CFN. For ANN-FF, the MSE decreased from 0.03931 with one neuron to 0.0110 with 10 neurons; further increases in the number of neurons showed no significant decrease in MSE and in fact increased it due to overfitting, so 10 neurons were selected. This selection was also supported by the R value, which increased from 0.7848 for one neuron to 0.9002 for 10 neurons and dropped for further increases in the number of neurons. ANN-PR likewise showed its lowest MSE of 0.0167 and a high R value of 0.957 with 10 neurons; hence, as for ANN-FF, 10 neurons were used for ANN-PR.


Figure 7:- Effect of the number of neurons in the hidden layer on the performance of the neural network based on MSE value for CFN, ANN-FF and ANN-PR.


Figure 8:- Effect of the number of neurons in the hidden layer on the performance of the neural network based on R value for CFN, ANN-FF and ANN-PR.


3.2.3. Test and validation of the model:- After optimization of the neural network, 1731 data points (10% of 17310) were fed to the optimized network to test and validate the model. Figure 9 shows a comparison between the experimentally observed patterns and the values predicted by the neural network model; the best fit is indicated by a solid line. The validation results show that ANN-PR, ANN-FF and CFN achieved R2 values of 0.8383, 0.8864 and 0.9534 respectively, from which one can see that CFN has better prediction performance than the other feed-forward backpropagation neural network techniques.

3.2.4. Relative importance of input variables:- The neural network weight matrix can be used to assess the relative importance of the various input variables on the output variable. In our study ANN-FF was used to determine the relative importance of the input variables, as ANN-FF is the only system with a single output neuron. Garson et al.31 proposed an equation based on the partitioning of connection weights:

I_j = { Σ_{m=1}^{Nh} [ ( W_jm^ih / Σ_{k=1}^{Ni} W_km^ih ) × W_mn^ho ] } / { Σ_{k=1}^{Ni} Σ_{m=1}^{Nh} [ ( W_km^ih / Σ_{k=1}^{Ni} W_km^ih ) × W_mn^ho ] }   (18)

where Ij is the relative importance of the jth input variable on the output variable, Ni and Nh are the numbers of input and hidden neurons respectively, and W denotes the connection weights; the superscripts 'i', 'h' and 'o' refer to the input, hidden and output layers, and the subscripts 'k', 'm' and 'n' refer to input, hidden and output neurons, respectively. The relative importance of the input variables calculated with equation 18 is shown in Table 3. From the table one can see that the confluence angle has a 19.16% impact on flow pattern formation, while the superficial velocities of dodecane and water have relative impacts of 43.22% and 37.62% respectively.

Input variable                       Importance (%)
Confluence Angle                     19.1567
Superficial Velocity of Dodecane     43.2235
Superficial Velocity of Water        37.6198
Total                                100


Table 3:- Relative importance of input variables on the flow pattern formation
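Garson's weight-partitioning scheme of equation 18 reduces to a few array operations. A numpy sketch (illustrative; `garson_importance` is a hypothetical helper, and absolute weights are used as in Garson's original formulation, an assumption since signs cancel ambiguously otherwise):

```python
import numpy as np

def garson_importance(W_ih, W_ho):
    """Relative importance of each input variable (equation 18).

    W_ih: input-to-hidden weights, shape (Ni, Nh)
    W_ho: hidden-to-output weights for a single output neuron, shape (Nh,)
    Returns percentages summing to 100."""
    W_ih = np.abs(np.asarray(W_ih, dtype=float))
    W_ho = np.abs(np.asarray(W_ho, dtype=float)).ravel()
    # Each hidden neuron's input weight is shared among inputs, then scaled
    # by that neuron's hidden-to-output weight.
    col_sums = W_ih.sum(axis=0)            # sum over inputs, per hidden neuron
    contrib = (W_ih / col_sums) * W_ho     # shape (Ni, Nh)
    scores = contrib.sum(axis=1)           # numerator of equation 18, per input
    return 100.0 * scores / scores.sum()   # denominator normalizes to 100%
```

Note that the method requires a single output neuron, which is why the text restricts the analysis to ANN-FF.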

3.3.Radial Basis Network:The experimental observations of dodecane–water flow in a microchannel with 600µm in diameter for different confluence angles (100,200,300…1800) and has been used for training and testing of the network. Out of 17310 experimental data 15579 data (90%) was used for training and remaining 1731 data (10%) were used for testing. The plot for spread constant and mean square error (MSE) can be found in Figure 10. From the results one can find that, for PNN, the least MSE value of 0.0246 was obtained at a spread constant of 0.001. For higher and lower values of spread constant, ‘σ’ the MSE value was found to be higher. Similarly, for GRNN, the least MSE value of 0.0``246 was got with spread constant of 0.001. Hence, spread constant of 0.001, has been selected for both PNN and GRNN study considering their lowest MSE values. The error distribution of each confluence angle was analyzed separately. Since error has to be found out in confluence angle individually, flow patterns were separated under each confluence angles (around 961 each). Among them, a few (96 values) were selected apart randomly to check the predicted values. A new network was trained for each angle separately with its corresponding data and validated from the reserved readings. The MSE value for the individual confluence angle was plotted (Figure 11). From the results, we observe that, the MSE values of PNN was greater than the GRNN in most of the confluence angles The accuracy of the radial basis predictors for each pattern was studied and represented graphically in Figure 12. The results showed that PNN had a percentage accuracy of 94.29%, 96.36%, 92.44% and 80% for the flow patterns 1,2,3 and 4 respectively. While, GRNN had a percentage accuracy of 97.41%, 99.2%, 94.56% and 82% for flow pattern 1,2,3 and 4 respectively. From this results, one can understand that GRNN had a better accuracy when compared to PNN for the prediction of flow patterns. 
Testing and validation were carried out by feeding the 1731 reserved data points (10% of 17310) to the network. Figure 9 compares the experimental patterns with the values predicted by the neural network models; the solid lines indicate the best fits. PNN produced an R2 value of 0.9766, while GRNN reached 0.988, showing GRNN to be the better flow pattern predictor of the two radial basis networks.
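The two radial basis predictors differ only in how the Gaussian kernel activations are combined: GRNN takes a kernel-weighted average of the training targets, while PNN sums the kernels per class and picks the largest. A minimal sketch of both, together with a spread-constant sweep like the one behind Figure 10, might look as follows (the data here are a synthetic stand-in, not the dodecane–water dataset; all names are illustrative):

```python
import numpy as np

def rbf_kernel(X_train, x, sigma):
    # Gaussian radial basis activation of every training pattern for query x
    d2 = np.sum((X_train - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def grnn_predict(X_train, y_train, x, sigma):
    # GRNN: kernel-weighted average of the training targets (regression)
    w = rbf_kernel(X_train, x, sigma)
    return float(np.dot(w, y_train) / (np.sum(w) + 1e-12))

def pnn_predict(X_train, y_train, x, sigma):
    # PNN: the class with the largest summed kernel density wins (classification)
    w = rbf_kernel(X_train, x, sigma)
    classes = np.unique(y_train)
    return int(classes[np.argmax([w[y_train == c].sum() for c in classes])])

# Toy stand-in for the (confluence angle, U_water, U_dodecane) -> pattern data
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = 1 + (X[:, 1] + X[:, 2] > 1.0).astype(int)   # two "patterns": 1 and 2
X_tr, y_tr, X_te, y_te = X[:180], y[:180], X[180:], y[180:]

# Sweep the spread constant and keep the value with the lowest test MSE,
# mirroring how sigma = 0.001 was selected from Figure 10
mse = {s: np.mean([(grnn_predict(X_tr, y_tr, x, s) - t) ** 2
                   for x, t in zip(X_te, y_te)])
       for s in (0.001, 0.01, 0.1, 1.0)}
best_sigma = min(mse, key=mse.get)
print("MSE per sigma:", mse, "-> best:", best_sigma)
```

On real data the sweep would be run over a finer grid of σ values, as in Figure 10, and the per-class accuracies of Figure 12 follow from comparing `pnn_predict` outputs against the held-out labels.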

ACS Paragon Plus Environment

Figure 9:- Comparison of the experimental results with those calculated using CFN, ANN-FF, ANN-PR, PNN, GRNN and ANFIS (experimental vs. predicted flow pattern; legend R2 values: ANN-PR 0.8383, ANN-FF 0.8864, CFN 0.9534, PNN 0.9766, GRNN 0.988, ANFIS 0.7764).

Industrial & Engineering Chemistry Research

Figure 10:- Effect of spread constant on the neural network performance (MSE vs. spread constant) for PNN and GRNN.

Figure 11:- Mean square error of each confluence angle for PNN and GRNN.

Figure 12:- Overall percentage accuracy of each pattern for PNN and GRNN.

3.4. Adaptive Neuro-Fuzzy Inference System The prediction results of ANFIS on the 10% test data are represented graphically in Figures 13, 14 and 15, which portray the relationships of input 1, input 2 and input 3 with the output. Throughout the study, the confluence angle, the superficial velocity of water and the superficial velocity of dodecane were taken as input 1, input 2 and input 3 respectively. Figure 13 shows the relationship between input 1, input 2 and the output; the predicted pattern tends to follow the experimental flow pattern. In the case of input 1, input 3 and the output (Figure 14), however, the model was unable to follow the desired output, as pattern 4 was never predicted. Likewise, Figure 15, which represents the relationship between input 2, input 3 and the output, could not strictly follow the experimental flow pattern. This is due to the nonlinearity of the system, which leads to rule sharing in ANFIS, in conflict with the rule-sharing constraint discussed earlier. The R2 value between the experimental data and the ANFIS predictions was correspondingly low, at 0.7764.
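For context, the forward pass that ANFIS trains is first-order Takagi–Sugeno inference: each rule fires with the product of its antecedent memberships, and the output is the firing-strength-weighted average of linear consequents. A minimal sketch of that inference step is below; the two rules, their membership parameters and consequent coefficients are made-up placeholders for illustration, not the trained model of this study:

```python
import numpy as np

def gauss_mf(x, c, s):
    # Gaussian membership function with center c and width s
    return np.exp(-((x - c) ** 2) / (2.0 * s ** 2))

def sugeno_infer(inputs, rules):
    # First-order Takagi-Sugeno inference (the core of an ANFIS forward pass):
    # firing strength = product of antecedent memberships,
    # output = firing-strength-weighted average of the linear consequents
    ws, ys = [], []
    for mfs, coeffs in rules:
        w = np.prod([gauss_mf(x, c, s) for x, (c, s) in zip(inputs, mfs)])
        y = np.dot(coeffs[:-1], inputs) + coeffs[-1]   # linear consequent
        ws.append(w)
        ys.append(y)
    ws = np.asarray(ws)
    return float(np.dot(ws, ys) / (ws.sum() + 1e-12))

# Two illustrative rules over (confluence angle, U_water, U_dodecane);
# all centers, widths and consequent coefficients are hypothetical
rules = [
    (((30.0, 40.0), (0.1, 0.2), (0.1, 0.2)), np.array([0.0, 2.0, 2.0, 1.0])),
    (((150.0, 40.0), (0.5, 0.2), (0.5, 0.2)), np.array([0.0, 1.0, 1.0, 2.0])),
]
print(sugeno_infer([30.0, 0.1, 0.1], rules))
```

Because the output is a smooth blend of rule consequents, rules that share regions of the input space pull predictions toward intermediate values, which is one way the rule-sharing problem noted above degrades ANFIS on discrete pattern labels.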

Figure 13:- Validation by ANFIS showing the relationship between input 1 (confluence angle), input 2 (superficial velocity of water) and output (flow pattern).


Figure 14:- Validation by ANFIS showing the relationship between input 1 (confluence angle), input 3 (superficial velocity of dodecane) and output (flow pattern).

Figure 15:- Validation by ANFIS showing the relationship between input 2 (superficial velocity of water), input 3 (superficial velocity of dodecane) and output (flow pattern).

3.5. System Identification
3.5.1. Discrete Time State Space Model
The current system has three inputs and one output. To accommodate minor variations in the data used to build the model, the system identification tool introduces a noise term; there are thus three known inputs to the model and one unknown noise input, which can be taken as a white noise signal. Hence it is a multiple-input single-output system, which is convenient to represent in state-space form. A typical state-space model in innovations form can be written as in equations 19 and 20,

x(t + Ts) = A x(t) + B u(t) + K e(t)    (19)

y(t) = C x(t) + D u(t) + e(t)    (20)

where x(t) is the state vector, y(t) the output vector, u(t) the input vector, A the system matrix, B the input matrix, C the output matrix, D the feed-forward matrix and K the disturbance matrix. The system identification results give the order of the system as 4, and the model parameters are obtained as in equations 21–25.

A = [  1.005       -0.02755    -0.2388     -0.04431
       0.0004396    0.9196     -0.5671     -0.4376
      -0.004508    -0.1692     -0.9476      0.3421
      -0.001237     0.1218      0.07554     0.4902  ]    (21)

B = [ -0.0001934   -1.513e-6   -6.729e-7
      -0.003008    -1.703e-5   -5.485e-6
      -0.01286     -0.0001413  -2.413e-5
       0.001828     4.279e-5    2.753e-6  ]    (22)

(rows ordered by states x1–x4; columns of A by states x1–x4 and columns of B by inputs u1–u3)

Since the B matrix coefficients are small, the direct effect of the inputs on the states of the system is small.

C = [ 1015    3.821    -1.896    -1.312 ]    (23)


As per the generated model, the impact of the first state (x1) on the output y is larger than that of the other states (x2, x3, x4), even though all states contribute significantly.

D = [ 0    0    0 ]    (24)

Since the coefficients of the D matrix are zero, there is no direct (feed-through) relation between input and output.

K = [  0.0009065
       0.0003401
      -0.002299
      -0.0002272 ]    (25)

A sampling time of 1 s was used for the study. Upon validation, the discrete-time state-space model produced an MSE value of 1.21 with an 83.4% fit. The results show that the system is highly nonlinear, consistent with the fact that the outputs are patterns and not numerical functions. This also explains the poorer performance of ANFIS relative to the other neural network techniques in flow pattern prediction.
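The identified discrete-time model of equations 21–25 can be stepped directly. A minimal simulation sketch follows; the constant input sequence and the zero innovations are hypothetical illustration values, not experimental conditions:

```python
import numpy as np

# Identified discrete-time model (equations 21-25), innovations form:
#   x(t+Ts) = A x(t) + B u(t) + K e(t),   y(t) = C x(t) + D u(t) + e(t)
A = np.array([[ 1.005,     -0.02755,  -0.2388,   -0.04431],
              [ 0.0004396,  0.9196,   -0.5671,   -0.4376 ],
              [-0.004508,  -0.1692,   -0.9476,    0.3421 ],
              [-0.001237,   0.1218,    0.07554,   0.4902 ]])
B = np.array([[-0.0001934, -1.513e-6, -6.729e-7],
              [-0.003008,  -1.703e-5, -5.485e-6],
              [-0.01286,   -0.0001413,-2.413e-5],
              [ 0.001828,   4.279e-5,  2.753e-6]])
C = np.array([[1015.0, 3.821, -1.896, -1.312]])
D = np.zeros((1, 3))
K = np.array([[0.0009065], [0.0003401], [-0.002299], [-0.0002272]])

def simulate(u_seq, e_seq, x0=np.zeros((4, 1))):
    # Step the innovations-form model through sequences of inputs u(t)
    # and innovations e(t); returns the simulated outputs y(t)
    x, ys = x0, []
    for u, e in zip(u_seq, e_seq):
        y = C @ x + D @ u + e
        ys.append(float(y))
        x = A @ x + B @ u + K * float(e)
    return ys

# Hypothetical constant input (angle = 90 deg, U_w = 0.1, U_d = 0.1), no noise
u_seq = [np.array([[90.0], [0.1], [0.1]])] * 5
ys = simulate(u_seq, [0.0] * 5)
print(ys)
```

With zero innovations the noise path drops out and the response is driven purely through B, whose small coefficients keep the state excitation weak, as noted above.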

3.5.2. Continuous Time State Space Model
Similarly, a continuous-time state-space model was approximated by the system identification technique, as in equations 26 and 27,

dx/dt = a x(t) + b u(t) + k e(t)    (26)

y(t) = c x(t) + d u(t) + e(t)    (27)

where x(t) is the state vector, y(t) the output vector, u(t) the input vector, a the system matrix, b the input matrix, c the output matrix, d the feed-forward matrix and k the disturbance matrix. The system identification results give the order of the system as 4, and the model parameters are obtained as in equations 28–32.

a = [  0.2613     2.833     -0.6328    -0.03267
       5.595     55.93     -11.78      -1.32
      20.24     202.7      -40.12      14.74
       0.01066    0.1177     0.4221     2.312  ]    (28)

b = [  0.019      0.0002144   3.509e-5
       0.3759     0.004234    0.0007008
       1.358      0.01521     0.002532
      -0.09709   -0.001059   -0.0001819 ]    (29)

(rows ordered by states x1–x4; columns of a by states x1–x4 and columns of b by inputs u1–u3)

Since the b matrix coefficients are comparatively small, the direct effect of the inputs on the states of the system is small.

c = [ 1010    -121.2    -1365    306 ]    (30)

The generated model shows that the impact of the first state (x1) on the output y is larger than that of the other states (x2, x3, x4), even though all states contribute significantly.

d = [ 0    0    0 ]    (31)

Since the d matrix coefficients are zero, there is no direct (feed-through) relation between input and output.

k = [ -0.004162
      -0.06414
      -0.2291
       0.01717  ]    (32)

The continuous-time state-space model produced an MSE of 1.32 with a fit of 81.2%.

Since there are multiple inputs and a single output, the effect of each input can be analysed while neglecting the other inputs. Each such simplified subsystem yields a transfer-function model, whose stability can readily be studied using pole-zero plots. A system is stable if all closed-loop poles fall in the left half of the s-plane. The system identification technique gives pole-zero plots for all input-output combinations, shown in Figures 16-19. In every case a pole (marked 'x') lies in the right half of the s-plane, which shows that the system is unstable. The system identification results thus confirm that the system is highly nonlinear and complex. This supports the observation that ANFIS does not give accurate pattern predictions despite an R2 value of 0.7764; the pattern predictor requires a more advanced neural network to learn the characteristics of the flow pattern. It is evident from the prediction results that the neural networks fit better than ANFIS, whereas in electrical systems ANFIS gives very good predictions with low MSE values at 70-75% fit.
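The eigenvalues of the continuous-time system matrix a are the poles of the unforced system, so the instability conclusion can be checked numerically. A sketch using the identified a matrix of equation 28 (the entry placement follows one plausible reading of the extracted matrix, so treat it as illustrative):

```python
import numpy as np

# Continuous-time system matrix a (equation 28); a linear system
# dx/dt = a x + b u is stable only if every eigenvalue of a has a
# negative real part (all poles in the left half of the s-plane)
a = np.array([[ 0.2613,   2.833,   -0.6328, -0.03267],
              [ 5.595,   55.93,   -11.78,   -1.32   ],
              [20.24,   202.7,    -40.12,   14.74   ],
              [ 0.01066,  0.1177,   0.4221,   2.312 ]])

poles = np.linalg.eigvals(a)
print("poles:", poles)
print("unstable:", bool(np.max(poles.real) > 0))
```

Since the trace of a (the sum of the real parts of the poles) is positive, at least one pole must lie in the right half-plane, which agrees with the pole-zero plots of Figures 16-19.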

Figure 16:-Pole Zero Plot u1-y1


Figure 17:-Pole Zero Plot u2-y1

Figure 18:-Pole Zero Plot u3-y1

Figure 19:-Pole Zero Plot Error-y1


4. Conclusion
The flow pattern map for a liquid-liquid system in a 600 µm circular microchannel was experimentally investigated for varying Y-junction confluence angles. The confluence angle was found to influence the flow pattern occurring in the microchannel: the bubble flow regime expanded as the confluence angle was increased from 10° to 180°, while the deformed flow regime shrank. The experimental results, showing the distinguishing nature of the transition boundaries, were established through graphical interpretation. Prediction studies were carried out using feed-forward back-propagation and radial basis networks, namely ANN-PR, ANN-FF, CFN, PNN, GRNN and ANFIS. Among all predictors, GRNN (Generalized Regression Neural Network) gave the best R2 value of 0.988 (Table 4), proving itself the best objective flow pattern indicator. The relative importance of the input variables confluence angle, superficial velocity of dodecane and superficial velocity of water on flow pattern formation was found to be 19.16%, 43.22% and 37.62% respectively. Using system identification, discrete and continuous time state-space models of the system were also established.

Prediction Technique                                        Correlation Coefficient (R2)
Artificial Neural Network - Pattern Recognition (ANN-PR)    0.8383
Artificial Neural Network - Function Fitting (ANN-FF)       0.8864
Cascade Forward Network (CFN)                               0.9534
Probabilistic Neural Network (PNN)                          0.9766
Generalized Regression Neural Network (GRNN)                0.988
Adaptive Neuro-Fuzzy Inference System (ANFIS)               0.7764

Table 4:- Various prediction techniques and their corresponding R2 values

Acknowledgments The authors are grateful to the Editor and Reviewers for their valuable comments and suggestions, which improved the quality of the manuscript. The financial support of the National Institute of Technology Calicut, India (Faculty Research Grant Scheme, Grant No: Dean (C&SR)/FRG10-11/0102) is also gratefully acknowledged.


References
(1) Han, Y.; Liu, Y.; Li, M.; Huang, J. A review of development of micro-channel heat exchanger applied in air-conditioning system. Energy Procedia 2012, 14, 148–153. DOI: 10.1016/j.egypro.2011.12.910.
(2) Launay, S.; Sartre, V.; Lallemand, M. Experimental study on silicon micro-heat pipe arrays. Appl. Therm. Eng. 2004, 24 (2-3), 233–243. DOI: 10.1016/j.applthermaleng.2003.08.003.
(3) Antony, R.; Nandagopal, M. S. G.; Manikrishna, C.; Selvaraju, N. Experimental comparison on efficiency of alkaline hydrolysis reaction in circular microreactors over conventional batch reactor. 2015, 74 (July), 390–394.
(4) Picardo, J. R.; Pushpavanam, S. Low-Dimensional Modeling of Transport and Reactions in Two-Phase Stratified Flow. Ind. Eng. Chem. Res. 2015, 54 (42), 10481–10496. DOI: 10.1021/acs.iecr.5b01432.
(5) Nandagopal, M. S. G.; Antony, R.; Selvaraju, N. Comparative study of liquid–liquid extraction in miniaturized channels over other conventional extraction methods. Microsyst. Technol. 2016, 22 (2), 349–356. DOI: 10.1007/s00542-014-2391-5.
(6) Giri Nandagopal, M. S.; Antony, R.; Rangabhashiyam, S.; Sreekumar, N.; Selvaraju, N. Overview of microneedle system: A third generation transdermal drug delivery approach. Microsyst. Technol. 2014, 20 (7), 1249–1272. DOI: 10.1007/s00542-014-2233-5.
(7) Antony, R.; Giri Nandagopal, M. S.; Sreekumar, N.; Selvaraju, N. Detection principles and development of microfluidic sensors in the last decade. Microsyst. Technol. 2014, 20 (6), 1051–1061. DOI: 10.1007/s00542-014-2165-0.
(8) Sreenath, K.; Pushpavanam, S. Issues in the scaling of exothermic reactions: From micro-scale to macro-scale. Chem. Eng. J. 2009, 155 (1-2), 312–319. DOI: 10.1016/j.cej.2009.06.035.
(9) Antony, R.; Giri Nandagopal, M. S.; Sreekumar, N.; Rangabhashiyam, S.; Selvaraju, N. Liquid-liquid slug flow in a microchannel reactor and its mass transfer properties - A review. Bull. Chem. React. Eng. Catal. 2014, 9 (3), 207–223. DOI: 10.9767/bcrec.9.3.6977.207-223.
(10) Vir, A. B.; Fabiyan, A. S.; Picardo, J. R.; Pushpavanam, S. Performance Comparison of Liquid-Liquid Extraction in Parallel Microflows. Ind. Eng. Chem. Res. 2014, 53, 8171–8181. DOI: 10.1021/ie4041803.
(11) Giri Nandagopal, M. S.; Antony, R.; Rangabhashiyam, S.; Selvaraju, N. Advance approach on environmental assessment and monitoring. Res. J. Chem. Environ. 2014, 18 (7), 7.
(12) Burns, J. R.; Ramshaw, C. The intensification of rapid reactions in multiphase systems using slug flow in capillaries. Lab Chip 2001, 1 (1), 10–15. DOI: 10.1039/b102818a.
(13) Kashid, M. N.; Harshe, Y. M.; Agar, D. W. Liquid-liquid slug flow in a capillary: An alternative to suspended drop or film contactors. Ind. Eng. Chem. Res. 2007, 46 (25), 8420–8430. DOI: 10.1021/ie070077x.
(14) Cubaud, T.; Ho, C. M. Transport of bubbles in square microchannels. Phys. Fluids 2004, 16 (12), 4575–4585. DOI: 10.1063/1.1813871.
(15) Kawahara, A.; Chung, P. Y.; Kawaji, M. Investigation of two-phase flow pattern, void fraction and pressure drop in a microchannel. Int. J. Multiph. Flow 2002, 28 (9), 1411–1435. DOI: 10.1016/S0301-9322(02)00037-X.
(16) Vir, A. B.; Kulkarni, S. R.; Picardo, J. R.; Sahu, A.; Pushpavanam, S. Holdup characteristics of two-phase parallel microflows. Microfluid. Nanofluidics 2014, 16 (6), 1057–1067. DOI: 10.1007/s10404-013-1269-7.
(17) Stone, H. A.; Stroock, A. D.; Ajdari, A. Engineering Flows in Small Devices: Microfluidics Toward a Lab-on-a-Chip. Annu. Rev. Fluid Mech. 2004, 36 (1), 381–411. DOI: 10.1146/annurev.fluid.36.050802.122124.
(18) Wang, G. R. Laser induced fluorescence photobleaching anemometer for microfluidic devices. Lab Chip 2005, 5 (4), 450–456. DOI: 10.1039/b416209a.
(19) Cai, S.; Toral, H.; Qiu, J.; Archer, J. S. Neural network based objective flow regime identification in air-water two phase flow. Can. J. Chem. Eng. 1994, 72 (3), 440–445. DOI: 10.1002/cjce.5450720308.
(20) Tsoukalas, L. H.; Ishii, M.; Mi, Y. A neurofuzzy methodology for impedance-based multiphase flow identification. Eng. Appl. Artif. Intell. 1997, 10 (6), 545–555. DOI: 10.1016/S0952-1976(97)00037-7.
(21) Mi, Y.; Ishii, M.; Tsoukalas, L. H. Vertical two-phase flow identification using advanced instrumentation and neural networks. Nucl. Eng. Des. 1998, 184 (2-3), 409–420. DOI: 10.1016/S0029-5493(98)00212-X.
(22) Gupta, S.; Liu, P. H.; Svoronos, S. A.; Sharma, R.; Abdel-Khalek, N. A.; Cheng, Y. H.; El-Shall, H. Hybrid first-principles neural networks model for column flotation. AIChE J. 1999, 45 (3), 557–566.
(23) Yang, A. S.; Kuo, T. C.; Ling, P. H. Application of neural networks to prediction of phase transport characteristics in high-pressure two-phase turbulent bubbly flows. Nucl. Eng. Des. 2003, 223 (3), 295–313. DOI: 10.1016/S0029-5493(03)00060-8.
(24) Malayeri, M.; Müller-Steinhagen, H.; Smith, J. Neural network analysis of void fraction in air/water two-phase flows at elevated temperatures. Chem. Eng. Process. 2003, 42 (8-9), 587–597. DOI: 10.1016/S0255-2701(02)00208-8.
(25) Xie, T.; Ghiaasiaan, S. M.; Karrila, S. Artificial neural network approach for flow regime classification in gas-liquid-fiber flows based on frequency domain analysis of pressure signals. Chem. Eng. Sci. 2004, 59 (11), 2241–2251. DOI: 10.1016/j.ces.2004.02.017.
(26) Sharma, H.; Das, G.; A. N. S. ANN-based prediction of two-phase gas-liquid flow patterns in a circular conduit. AIChE J. 2006, 52 (9), 3018–3028.
(27) Antony, R.; Nandagopal, M. S. G.; Rangabhashiyam, S.; Selvaraju, N. Probabilistic Neural Network prediction of liquid-liquid two phase flows in a circular microchannel. J. Sci. Ind. Res. 2014, 73 (August), 525–529.
(28) Timung, S.; Mandal, T. K. Prediction of flow pattern of gas–liquid flow through circular microchannel using probabilistic neural network. Appl. Soft Comput. 2013, 13 (4), 1674–1685. DOI: 10.1016/j.asoc.2013.01.011.
(29) Agatonovic-Kustrin, S.; Beresford, R. Basic concepts of artificial neural network (ANN) modeling and its application in pharmaceutical research. J. Pharm. Biomed. Anal. 2000, 22 (5), 717–727. DOI: 10.1016/S0731-7085(99)00272-1.
(30) Bianucci, A. M.; Micheli, A.; Sperduti, A.; Starita, A. Application of Cascade Correlation Networks for Structures to Chemistry. Appl. Intell. 2000, 12 (1-2), 117–146.
(31) Garson, G. D. A Comparison of Neural Network and Expert Systems Algorithms with Common Multivariate Procedures for Analysis of Social Science Data. Soc. Sci. Comput. Rev. 1991, 9 (3), 399–434.
(32) Specht, D. F. Probabilistic neural networks. Neural Networks 1990, 3 (1), 109–118. DOI: 10.1016/0893-6080(90)90049-Q.
(33) Specht, D. F. A general regression neural network. IEEE Trans. Neural Netw. 1991, 2 (6), 568–576. DOI: 10.1109/72.97934.