
Article

Stochastic Reconstruction of Complex Heavy Oil Molecules Using an Artificial Neural Network
Celal Utku Deniz, Muzaffer Yasar, and Michael T. Klein
Energy Fuels, Just Accepted Manuscript. DOI: 10.1021/acs.energyfuels.7b02311. Publication Date (Web): October 23, 2017.




Stochastic Reconstruction of Complex Heavy Oil Molecules Using an Artificial Neural Network

Celal Utku Deniz and Muzaffer Yasar*
Chemical Engineering Department, Istanbul University, Avcilar, Istanbul 34820

Michael T. Klein
Chemical and Biomolecular Engineering Department, University of Delaware, Newark, Delaware 19716, and Center for Refining and Petrochemicals, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia

Abstract

An approach for the stochastic reconstruction of petroleum fractions based on the joint use of artificial neural networks and genetic algorithms was developed. This hybrid approach reduced the time required to optimize the composition of the petroleum fraction without sacrificing accuracy. A reasonable initial structural parameter set in the optimization space was determined using an artificial neural network; this initial parameter set was then optimized using a genetic algorithm. The simulations show time savings between 62 and 74 percent for the samples used. This development is significant, considering that the characteristic time required for the optimization procedure in stochastic reconstruction is hours or even days. In addition, the stand-alone use of the artificial neural network step, which produces instantaneous results, may help where quick decisions are necessary.

1. Introduction

The modeling of petroleum fractions is a subject that has been explored in different ways by various researchers.1–4 The first studies in the literature were based on lumps defined according to general properties of the fractions, such as boiling point.5–8 In this approach, the structural properties of the compounds contained within a lump are not identified; in fact, lumped models carry no information other than the single attribute that identifies the lump. Because they contain no information about the structures of the molecules in the lumps, these models are only applicable to the cases they were designed for and are not generalizable.1

*Corresponding author: [email protected]


Further work on the kinetic modeling of complex mixtures revealed the need to consider chemical structure. With the increasing power of computers, it became possible to model complex mixtures in much more detail at the molecular level. In this context, Jaffe9 introduced the concept of "bond kinetics" to describe the reactions of specific chemical bonds. The concept of "group kinetics", based on the use of specific structural analog groups, was then suggested by Gray10 for the kinetic modeling of complex mixtures. Quann and Jaffe3 developed the powerful structure-oriented lumping (SOL) model and its extension4, in which mixtures are represented using analytically determined structural groups and each molecule is expressed as a vector. The Klein Research Group (KRG) has developed an atomic-level model built on the idea of the probability density function (PDF)1,11,12, which assembles atomically explicit molecular building blocks in the stochastic reconstruction (SR) of complex hydrocarbon mixtures. The gamma distribution function is especially useful for representing structural properties because, under special conditions, it reduces to the exponential, chi-square and normal distribution functions. The SR method has been successfully applied not only to light fractions13 but also to the modeling of heavy fractions such as residues14 and asphaltenes15. The SR model comprises the following key steps. First, PDFs are created to express the quantitative probability of the existence of a molecular building block, e.g., the number of aromatic and naphthenic rings and the numbers and lengths of the chains linked to these rings. Sampling attribute values from these PDFs allows the construction of the stochastic molecules. Once a reasonable number of stochastic molecules has been generated, the calculated properties of this set of molecules are compared with the analytical data.
The integrated optimization process adjusts the target parameters until the difference between the analytical data and the properties of the stochastic molecule set is acceptable. Although the classical SR method is very successful at the molecular level, the computational time required to optimize the set of structural parameters can be very high, which in turn limits the practical use of this approach. To reduce the computational burden, Oliveira et al.14 developed a method called Reconstruction by Entropy Maximization (REM). They represented vacuum residue fractions with a predefined set of molecules and modified the molar fractions of these molecules to minimize the objective function. In this way, the actual mixture can be mimicked in seconds instead of days. The REM approach effectively eliminates the structural parameter optimization step by altering the relative abundance of the predefined molecules in the mixture instead of the structural parameter set. However, it is unclear whether the set of molecules formed by adjusting the relative abundance of the predefined molecules obeys the Boduszynski continuum model.16–18 In this study, two different algorithms were developed to determine the structural parameter values by using the learning ability of artificial neural networks. This work aimed to reduce the time needed to optimize the structural parameters and to facilitate the practical application of the SR method. The algorithms were evaluated using different asphaltene samples, and the results were compared with those acquired by the genetic algorithm driven SR method. A hybrid approach, which combines artificial neural network (ANN) and genetic

algorithm (GA) techniques, was also employed in order to decrease the optimization time without sacrificing accuracy.

2. Stochastic Reconstruction Optimization Time

Many studies in the literature focus on the reconstruction of oil fractions.11,12,14,19,20 The models proposed in these studies are based on defined structural parameters, whose values are optimized using experimental data for the target mixture. The main goal is to find the structural parameter values that best reflect the experimental properties of the oil fraction. To achieve this goal, a genetic algorithm is often used. The genetic algorithm optimizes the structural parameters by starting from a randomly selected initial population. The individuals of the initial generation that best fit the experimental data are passed on to the next generation through crossover, mutation and elitism operators. This loop continues until the structural parameter values that best represent the complex mixture are determined. The process described above requires very high processing power and, therefore, long processing times. The time needed to optimize the structural parameters depends on the number of molecules used to express the complex mixture. As an example, optimization of the structural parameters may take one day to one week when 20,000 stochastic molecules are used to represent the mixture14,19. The CPU time also varies depending on the model used and the fraction to be represented. This high computation time can limit the practical use of molecular-level models. To address this problem, artificial neural networks have been used to determine the structural parameters in a much shorter time.

Artificial Neural Networks. Artificial neural network theory was developed as a mathematical technique by McCulloch and Pitts.21 ANNs are data processing tools with machine learning capabilities that are inspired by biological neural networks. These networks can be trained to perform specific tasks by adjusting network parameters.
The main advantages of neural networks are nonlinear modeling and generalization. In recent years, ANNs have been widely used for applications such as process modeling22–24 and control25. A typical neural network consists of interconnected nodes that function as neurons. The output of a neuron in one layer is connected to all the neurons in the next layer with certain weights. Layers are connected through transfer functions, of which the hyperbolic tangent sigmoid and linear functions are the most commonly used.26 Here, the input layer of each network is connected to the hidden layer by the hyperbolic tangent transfer function, and the symmetric saturating linear transfer function is used to link the hidden layer to the output layer. The latter transfer function was chosen because some of the output parameters are defined within certain limits. The degrees of substitution of an aromatic and a naphthenic ring, the benzylic methyl ratio and the ring compactness parameter each represent a ratio and are defined as real numbers in the interval between 0 and 1. Similarly, the methyl branches on chains and


naphthenic ring neighborhood parameters are structural parameters whose values vary in the same range.15,19 The learning process is related to the determination of the weights and biases. A multilayer learning algorithm can be explained as follows: (a) The weights of the first hidden layer (IW) and its initial biases (b1) are initialized for the first transfer function, as are the weights of the output layer (LW) and its initial biases (b2) for the second transfer function. (b) The inputs (given in Table 1) are fed into the hidden layer. The network output of interest, labeled y, is calculated using the equation

y = f2(LW · f1(IW · p + b1) + b2)    (1)

where f1 is the hyperbolic tangent sigmoid transfer function and f2 is the symmetric saturating linear transfer function.

(c) The weights and biases in the network are updated according to the Levenberg–Marquardt27,28 algorithm in each epoch. The network calculates the outputs from the inputs and weights, depending on the network structure. These weights are adjusted to match the known targets to the desired level of accuracy, based on a comparison between the network outputs and the known targets. The process of updating the weights and biases continues until the desired targets are achieved. The algorithm uses input and output values from the training data set to update the connection weights so as to reduce the difference between the predicted and expected values of the structural parameters.
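Equation 1 amounts to a single forward pass through the two-layer network. A minimal sketch in plain Python follows; the weight values in any call are placeholders, and training (the Levenberg–Marquardt update) is not shown.

```python
import math

def tansig(n):
    """Hyperbolic tangent sigmoid transfer function (f1 in eq 1)."""
    return 2.0 / (1.0 + math.exp(-2.0 * n)) - 1.0

def satlins(n):
    """Symmetric saturating linear transfer function (f2): clips to [-1, 1]."""
    return max(-1.0, min(1.0, n))

def forward(p, IW, b1, LW, b2):
    """Eq 1: y = f2(LW . f1(IW . p + b1) + b2) for an input vector p."""
    a1 = [tansig(sum(w * x for w, x in zip(row, p)) + b)
          for row, b in zip(IW, b1)]          # hidden-layer output
    return [satlins(sum(w * h for w, h in zip(row, a1)) + b)
            for row, b in zip(LW, b2)]        # network output y
```

The saturating output function guarantees that predicted ratio-type parameters stay within their bounded range regardless of the weight values.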

Figure 1. Architecture of the artificial neural network

The architecture of the artificial neural network used in this study is given in Figure 1. NIM and NOP represent the number of input measurements and the number of output parameters, respectively. Here p is an input vector of length NIM, and the number of neurons is denoted NN. The hidden layer and the output layer contain the weight matrices IW and LW, respectively. Each layer has a bias vector b and an output vector a. The output of the second layer is the network output, labeled y. The input measurements and output parameters (IM and OP) used in this study are given in Table 1.


Table 1. Input measurements and output parameters

Input Measurements                       Unit
Average MW                               g/mol
Carbon                                   wt %
Hydrogen                                 wt %
Sulfur                                   wt %
Nitrogen                                 wt %
Oxygen                                   wt %
Hydrogen Type 1 (Aromatic)               %
Hydrogen Type 2 (α-CH, α-CH2)            %
Hydrogen Type 3 (α-CH3)                  %
Hydrogen Type 4 (β-CH2, CH/CH2 Naph.)    %
Hydrogen Type 5 (β-CH3, β+-CH2)          %
Hydrogen Type 6 (γ-CH3)                  %

Output Parameters
Number of Unit Sheets
Number of Aromatic Rings
Number of Naphthenic Rings
Degree of Substitution of Aromatic
Benzylic Methyl Ratio
Degree of Substitution of Naphthenic
Side Chain Length
Methyl Branches on Chains
Ring Compactness
Naphthenic Ring Neighborhood
Sulfur
Nitrogen
Oxygen

3. Synthetic Data Generation Approach

The artificial neural network must be trained with independent input and dependent output values. In this work, the input variables are the experimental data of the mixture and the output variables are the structural parameters representing the mixture. However, it is not feasible, both practically and economically, to provide enough experimental data and corresponding structural parameter sets to train the artificial neural network. For this reason, inputs and outputs were obtained using two different paths. Both paths essentially comprise two steps: the first synthetically produces structural parameter sets, and the second calculates the corresponding mixture properties. The two paths differ only in the first step: Path A uses randomly generated structural parameter sets, while Path B uses structural parameter sets generated during the genetic algorithm optimization. Further details on the generation of synthetic structural parameter sets are given below. The second step is common to both paths. For each parameter set, the corresponding mixture was generated using the SR algorithm given in Figure 2. Stochastic reconstruction includes two preliminary steps: (i) the expression of the structural parameters by distribution functions and (ii) stochastic sampling from those distributions. After the preliminary steps, the SR algorithm calls the molecule generation subroutine given in Figure 3. This subroutine builds the aromatic core, attaches the naphthenic and 5-membered heteroaromatic rings to the aromatic core, then attaches the alkyl side chains and places the remaining heteroatoms in the structure. Further details of the molecule generation subroutine are given elsewhere.19 In this study, 10^5 stochastic molecules were used to represent each synthetic mixture. Once the stochastic molecule set is generated, the SR algorithm computes its properties.
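The two preliminary SR steps, representing each structural parameter by a distribution and sampling attribute values from it, can be sketched as below. The parameter names, bounds, and the fixed gamma shape are illustrative assumptions, not the authors' actual values; rejection sampling enforces each parameter's limits.

```python
import random

# Illustrative bounds for a few structural parameters (hypothetical values,
# not the limits used in the paper).
PARAM_BOUNDS = {
    "aromatic_rings": (1.0, 10.0),
    "side_chain_length": (1.0, 20.0),
    "degree_substitution_aromatic": (0.05, 1.0),
}

def sample_attribute(mean, lo, hi, rng, shape=2.0):
    """Draw one attribute value from a gamma PDF with the given mean,
    rejecting draws that fall outside the parameter's [lo, hi] limits."""
    scale = mean / shape
    while True:
        v = rng.gammavariate(shape, scale)
        if lo <= v <= hi:
            return v

def build_stochastic_molecule(param_means, rng):
    """Step (ii): stochastically sample every structural attribute to
    assemble one stochastic molecule (here, an attribute dictionary)."""
    return {name: sample_attribute(mean, *PARAM_BOUNDS[name], rng)
            for name, mean in param_means.items()}
```

Repeating `build_stochastic_molecule` many times yields the molecule set whose averaged properties are compared with the analytical data.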


Figure 2. Generalized SR algorithm


Figure 3. Simplified representation of molecule generation steps.

Path A. The first path used to obtain the input and output data is based on the direct use of the SR algorithm. The algorithm was run repeatedly using randomly generated sets of structural parameters, and the resulting data (average molecular weight, elemental percentages, and hydrogen-type percentages) were recorded.
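The Path A loop can be sketched as follows. The function names are illustrative, and `mixture_properties` is a stand-in for the full SR molecule-generation and property-calculation step described above.

```python
import random

def random_parameter_set(bounds, rng):
    """One structural parameter set, each value sampled uniformly within
    its lower/upper limits (a simplification of the sampling in the text)."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}

def generate_path_a_data(bounds, mixture_properties, m, seed=0):
    """Path A: M random parameter sets -> M property sets. Each returned
    (properties, parameters) pair is one ANN training example, with the
    properties as inputs and the parameters as targets."""
    rng = random.Random(seed)
    data = []
    for _ in range(m):
        params = random_parameter_set(bounds, rng)
        data.append((mixture_properties(params), params))
    return data
```

In the actual workflow, each call to `mixture_properties` would generate a full stochastic molecule set and compute its average molecular weight, elemental percentages, and hydrogen-type percentages.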


Figure 4. Path A for synthetic data generation

Path A begins with the generation of individual structural parameters. The value of each structural parameter is randomly sampled from its distribution,15 whose lower and upper limits depend on the structural parameter. The structural parameters are then combined to form structural parameter sets, which are used in the SR algorithm. A sufficient number of stochastic molecules is generated for each set of structural parameters; thus, a molecule set (mixture) corresponding to each set of structural parameters is formed, and the properties of the mixture generated by each parameter set are calculated. At the end of this process, M property sets corresponding to M structural parameter sets have been collected. Path A, based on random structural parameter sets, is illustrated in Figure 4.

Path B. The second path for artificial neural network training is based on the genetic algorithm (GA). The details of Path B are given in Figure 5. Path B differs from Path A in terms of


determining the input and output values used in artificial neural network training. Whereas in Path A the variable sets used in the training (the average molecular weight; the weight percentages of carbon, hydrogen, sulfur, nitrogen and oxygen; and the distributions of hydrogen types) were determined from randomly generated structural parameter sets, in Path B

Figure 5. Path B for synthetic data generation

the structural parameter set of a sample is optimized with the GA,15 using the experimental data of one of the asphaltene samples. The GA assigns randomly sampled initial values to the structural parameters. Gamma distributions are then generated for each structural parameter. The generated distributions (structural parameter sets) are fed into the SR algorithm to calculate the


bulk properties (average molecular weight; carbon, hydrogen, sulfur, nitrogen and oxygen contents; and hydrogen-type distributions) corresponding to each structural parameter set. The objective function values are then calculated for all initial structural parameter sets by comparing the calculated properties with the experimental data. Whether the optimization condition is met is then tested; if it is not, new structural parameter sets are generated with crossover, mutation and elitism operators. The loop continues until the optimization conditions are met. Throughout the optimization process, each set of structural parameters produced by the crossover, mutation and elitism operators is collected together with its corresponding bulk properties.
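The Path B loop (random initial population, property calculation, objective test, crossover/mutation/elitism, data harvesting) can be sketched as below. This is an illustrative minimal GA, not the authors' implementation: the toy scalar `objective` stands in for the SR property comparison, all parameters are assumed scaled to [0, 1], and (parameter set, OFV) pairs stand in for the (parameter set, bulk properties) records collected for ANN training.

```python
import random

def path_b_ga(objective, n_params, pop_size=30, generations=60,
              mutation_rate=0.2, elite_frac=0.1, seed=0):
    """Minimize `objective` with a GA while harvesting every evaluated
    parameter set as synthetic training data (needs n_params >= 2)."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    harvested = []                                    # training pairs
    n_elite = max(1, int(elite_frac * pop_size))
    for _ in range(generations):
        pop.sort(key=objective)                       # rank by fit to data
        harvested.extend((ind, objective(ind)) for ind in pop)
        next_pop = pop[:n_elite]                      # elitism
        while len(next_pop) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)
            cut = rng.randrange(1, n_params)          # one-point crossover
            next_pop.append([min(1.0, max(0.0, g + rng.gauss(0.0, 0.05)))
                             if rng.random() < mutation_rate else g
                             for g in p1[:cut] + p2[cut:]])  # bounded mutation
        pop = next_pop
    return min(pop, key=objective), harvested
```

Harvesting every generation, rather than only the final population, is what makes the GA run double as a training-data generator.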

4. Results and Discussion

Structural parameter sets and the properties of the corresponding mixtures were collected using the two different paths. In Path A, structural parameter sets are randomly sampled from normal distributions with lower and upper limits for each parameter, and stochastic molecules are then generated from these structural parameters. In Path B, structural parameters and the corresponding mixture properties are collected from the GA optimization process; for this purpose, each of the asphaltene samples was individually subjected to the optimization process. The learning performances of neural networks trained on the data gathered by the two synthetic data generation paths were evaluated. The prediction performances were also evaluated and compared against analytical data (including 1H-NMR, elemental analysis and average MW by GPC) from our previous studies29–32.

The effect of the number of hidden neurons on network learning and prediction performances. The performance of the artificial neural networks was evaluated by two different criteria as a function of the number of neurons. The first criterion is the learning performance, which is based on the mean squared error (MSE) of the validation data during the training process. The second criterion is the objective function value (OFV), which is based on the comparison of the analytical properties with the reconstructed mixture properties: the structural parameters are predicted by the trained network, the corresponding mixture is reconstructed using the SR algorithm, and the properties of the reconstructed mixture are determined. The OFV is calculated using the following formula

OFV = Σ_{i=1}^{NIM} [(PV_i,exptl. − PV_i,calcd.) / PV_i,exptl.]²    (2)

where PV represents the parameter value (the input measurements in Table 1).
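Equation 2 translates directly into code. In this sketch each property set is a dictionary keyed by input measurement; the key names are illustrative.

```python
def objective_function_value(exptl, calcd):
    """Eq 2: sum, over the input measurements of Table 1, of the squared
    relative deviation between experimental and calculated values."""
    return sum(((exptl[k] - calcd[k]) / exptl[k]) ** 2 for k in exptl)
```

Normalizing each deviation by the experimental value keeps measurements with very different magnitudes (e.g. molecular weight in g/mol versus heteroatom wt %) on a comparable footing in the sum.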


Figure 6. Comparison of learning performances based on paths and number of neurons.

To evaluate the learning performance of the networks, independent input and dependent output values were generated using the two approaches. Each network was trained 100 times from different initial points, and the MSE values were averaged. Learning performances as a function of the synthetic data generation approach and the number of neurons are given in Figure 6. The learning error of Path A starts at a high value (around 1.2) and levels off at about 0.6 after 25 neurons. The learning performance of Path B is better than that of Path A and depends mainly on the asphaltene sample used in the process of obtaining the synthetic data (Figure 6). The learning error of Path B starts around 0.40–0.25 and rapidly drops to approximately 0.20 after 20 neurons.


Figure 7. Comparison of prediction performances based on paths and number of neurons.

The prediction performances of both paths were also evaluated. As in the evaluation of the learning performance, each network was trained 100 times from different initial points and the OFVs were averaged. As seen in Figure 7a, the OFV declines rapidly up to 10 neurons for Path A; beyond this point, increasing the number of neurons does not change the OFV significantly. Similarly, for Path B there is a rapid decrease in the OFV up to approximately 8 neurons; the decrease continues up to 20 neurons and is not significant thereafter, as seen in Figure 7b. The objective function value obtained by Path A is greater than 1 for all samples, while for Path B it falls to 0.25, depending on the sample.

Figure 8. Measured and reconstructed elemental composition, based on the path used, for Garzan asphaltene.

The measured and reconstructed properties were also evaluated in detail based on the path used in network training. Path-based elemental compositions in comparison with the measured data for


Garzan asphaltene are given in Figure 8. Similar results were observed in terms of carbon and hydrogen percentages. However, the percentages of heteroatoms differed significantly: the sulfur, nitrogen and oxygen fractions were determined with errors of 8%, 11% and 35% by Path A, versus errors of 2%, 11% and 5% using Path B, respectively. Path B is thus more successful than Path A in reproducing the heteroatom percentages; the sets of structural parameters generated by the genetic algorithm were more useful in the training process.

Figure 9. Measured and reconstructed hydrogen type composition, based on the path used, for Garzan asphaltene.

Similarly, a comparison between the experimental and calculated hydrogen type compositions is given in Figure 9. Both paths show similar results for the Aromatic, α-CH + α-CH2, β-CH2 + Naphthenic and β-CH3 + β+-CH2 hydrogen types: with Path A these hydrogen species were determined with errors of 2%, 4%, 6% and 3%, while Path B showed errors of 4%, 3%, 2% and 7%. On the other hand, the two methods diverge in the determination of the percentages of the α-CH3 and γ-CH3 hydrogen types: Path A calculated these hydrogen types with errors of 37% and 24%, while Path B calculated them with errors of 1% and 4%, respectively. Moreover, the experimental MW of Garzan asphaltene is 812 g/mol32; the ANNs trained by Path A and Path B calculated this value as 830 and 1022 g/mol, respectively. The simulation results show that the artificial neural network trained with Path B produces more reasonable results than the one trained with Path A.


A hybrid approach to speed up the structural parameter optimization procedure. Although the artificial neural network provides an instantaneous method for determining the structural parameters, it should be noted that the error margin is high for both paths. As previously mentioned, the performance of the trained neural networks was evaluated based on the average over different starting points instead of a single starting point. The objective function values obtained using the network with the lowest error are given in Table 2, together with the GA driven SR results from the literature19 and the hybrid approach results for comparison. Although the OFVs achieved with Path A are quite high, the results of Path B are promising.

Table 2. The lowest OFVs obtained with different paths.

Sample        Path A   Path B   GA driven SR19   Hybrid Approach
Adiyaman      56.4     *        8.5              *
Bati Raman    52.0     48.0     16.1             18.4
Besikli       63.1     19.8     12.2             12.6
Celikli       63.6     12.9     8.1              7.4
Garzan        64.2     16.4     7.4              7.7
Yenikoy       84.4     21.0     6.6              6.5
*Adiyaman sample used for gathering training data.

The genetic algorithm was combined with the artificial neural network to reduce the time required for the optimization of the structural parameters without sacrificing accuracy.15 The algorithm developed for this purpose is given in Figure 10. The initial set of structural parameters was determined by the ANN (trained with Path B), and the parameter set was then


optimized by the genetic algorithm. When only the genetic algorithm is used for stochastic reconstruction, different sources14,19 report that about 100 iterations (generations) are required for the parameter optimization procedure.
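The key change the hybrid approach makes to the GA can be sketched as seeding the initial population around the ANN prediction instead of sampling it at random. The function name and jitter width below are illustrative, and all parameters are assumed scaled to [0, 1].

```python
import random

def seed_population(ann_prediction, pop_size, spread=0.05, seed=0):
    """Hybrid approach: build the GA's initial population from jittered
    copies of the ANN-predicted structural parameter set, so the search
    starts near a reasonable point in the optimization space."""
    rng = random.Random(seed)
    pop = [list(ann_prediction)]          # keep the ANN point itself
    while len(pop) < pop_size:
        pop.append([min(1.0, max(0.0, g + rng.gauss(0.0, spread)))
                    for g in ann_prediction])  # bounded neighbor
    return pop
```

Starting near a good point is what lets the GA converge in tens of generations rather than the ~100 reported for a random start.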

Table 3. Comparison of percentage errors of each PV, OFV and CPU time according to different approaches (Celikli asphaltene).

                                        GA driven SR19    ANN Path A       ANN Path B       Hybrid Approach
Property                  unit  exptl.  sim.   % error    sim.   % error   sim.   % error   sim.   % error
average molecular weight  g/mol 942     952    1.01       1106   17.40     948    0.64      950    0.85
elemental analysis
  carbon                  wt %  84.43   84.09  0.41       82.57  2.20      84.02  0.49      84.19  0.29
  hydrogen                wt %  8.41    8.80   4.67       8.40   0.16      8.61   2.34      8.64   2.74
  sulfur                  wt %  2.41    2.39   0.75       2.89   20.08     2.68   11.26     2.39   0.88
  nitrogen                wt %  2.75    2.71   1.36       3.26   18.47     2.73   0.72      2.79   1.31
  oxygen                  wt %  2.00    1.99   0.74       2.86   42.77     1.94   3.02      1.97   1.29
1H NMR
  Aromatic                %     8.64    8.73   1.01       9.66   11.76     8.78   1.67      8.36   3.29
  α-CH, α-CH2             %     6.90    6.93   0.37       7.40   7.25      6.83   1.02      6.86   0.51
  α-CH3                   %     6.09    6.21   1.90       4.37   28.17     6.18   1.46      5.81   4.60
  β-CH2, CH/CH2 Naph.     %     27.81   28.75  3.36       28.73  3.30      27.89  0.30      27.50  1.13
  β-CH3, β+-CH2           %     32.05   31.79  0.82       29.40  8.27      32.54  1.54      32.96  2.83
  γ-CH3                   %     18.51   17.61  4.86       20.44  10.44     17.77  3.98      18.32  1.05
Objective Function Value                8.14              63.67            12.92            7.42
CPU Time (min)                          ~660              ~2               ~2               ~170

To compare the experimental and simulated properties, objective function values and CPU times of the different approaches, a comprehensive table is given for Celikli asphaltene (Table 3). As seen in Table 3, there is good agreement between the experimental and simulated properties for all approaches except ANN Path A, based on the data simulated at this phase of the study. ANN Path B is more successful at imitating the experimental data than the network trained using Path A. When evaluated in terms of the PVs, it can be said that


completing Path B with the GA (the hybrid approach) provides a significant improvement. With the classic approach (GA driven SR), the optimization process took about 660 minutes, whereas the hybrid approach reduced it to about 170 minutes. It is also possible to simulate mixtures in 2 minutes using ANN Path B directly, but in that case the error rates increase. Using the hybrid approach, mixtures can be represented with acceptable error rates in a shorter time.

Figure 10. Genetic optimization algorithm combined with an artificial neural network.

As shown in Table 2, the OFVs obtained using the combined algorithm (ANN followed by the genetic algorithm) were 18.4, 12.6, 7.4, 7.7, and 6.5 after 38, 36, 26, 33, and 27 iterations for the Batiraman, Besikli, Celikli, Garzan, and Yenikoy samples, respectively. The hybrid approach, which combines the ANN and GA approaches, shows satisfactory performance in terms of both OFV and the time required for optimization.

5. Conclusion

A key limitation of the stochastic reconstruction method for representing the molecular composition of petroleum fractions, namely the CPU time required to optimize the structural parameters, was addressed using the learning ability of artificial neural networks. Two different approaches were used to train the artificial neural network. The results obtained with the first approach, based on randomly generated parameter sets, had unacceptably high error. The second approach, based on the parameter sets generated during the genetic algorithm process, had lower error than the first. A hybrid method based on the joint use of the genetic algorithm and the artificial neural network both produced satisfactory results and significantly reduced the time required for optimization. In this approach, the artificial neural network determines a reasonable starting point in the large optimization space; the genetic algorithm then iterates from this starting point to reach an acceptable OFV. The computationally costly iteration step is therefore repeated less often, and acceptable results are obtained in a shorter time. For the samples studied, the time saving was in the range of 62-74%. Considering that the time required for stochastic reconstruction is measured in hours or even days, this improvement is critical for molecular-level modeling of petroleum fractions.
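The reported 62-74% saving follows directly from the optimization wall times. Taking the Celikli figures quoted earlier (about 660 minutes for the GA-driven run versus about 170 minutes for the hybrid run) as a quick check:

```python
t_classic = 660.0  # min, GA-driven stochastic reconstruction (Celikli)
t_hybrid = 170.0   # min, ANN starting point + GA refinement

saving = 1.0 - t_hybrid / t_classic
print(f"{saving:.0%}")  # → 74%
```

This matches the upper end of the 62-74% range; the other samples, with their own run times, account for the lower end.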


ABBREVIATIONS

ANN = Artificial Neural Network
CPU = Central Processing Unit
GA = Genetic Algorithm
GPC = Gel Permeation Chromatography
IM = Input Measurement
KRG = Klein Research Group
MSE = Mean Squared Error
MW = Molecular Weight
NIM = Number of Input Measurements
NMR = Nuclear Magnetic Resonance
NOP = Number of Output Parameters
OFV = Objective Function Value
OP = Output Parameter
PDF = Probability Density Function
PV = Parameter Value
REM = Reconstruction by Entropy Maximization
SOL = Structure Oriented Lumping
SR = Stochastic Reconstruction

ACKNOWLEDGMENTS

This work was supported in part by the Research Fund of Istanbul University, Project Number 41216. Celal Utku Deniz would like to thank The Scientific and Technological Research Council of Turkey (TÜBİTAK) for research grant 2214A/2014. Michael T. Klein acknowledges collaborations with and support of colleagues via the Saudi Aramco Chair Program at KFUPM and Saudi Aramco.
