Prediction of Condensate-to-Gas Ratio for Retrograde Gas Condensate Reservoirs Using Artificial Neural Network with Particle Swarm Optimization

Sohrab Zendehboudi,*,† Mohammad Ali Ahmadi,‡ Lesley James,§ and Ioannis Chatzis†

† Chemical Engineering Department, University of Waterloo, Waterloo, ON, Canada, N2L 3G1
‡ Faculty of Petroleum Engineering, Petroleum University of Technology, Ahwaz, Khuzestan, Iran
§ Faculty of Engineering and Applied Science, Memorial University of Newfoundland, St. John's, NL, Canada, A1B 3X5

ABSTRACT: The condensate-to-gas ratio (CGR) plays an important role in assessing the sales potential of both gas and liquid, designing the required surface processing facilities, characterizing the reservoir, and modeling gas condensate reservoirs. Field work and laboratory determination of CGR are both time consuming and resource intensive. Developing a rapid and inexpensive technique to accurately estimate CGR is therefore of great interest. An intelligent model is proposed in this paper based on a feed-forward artificial neural network (ANN) optimized by the particle swarm optimization (PSO) technique. The PSO-ANN model was evaluated using experimental data and PVT data available in the literature. The model predictions were compared with field data, experimental data, and the CGR obtained from an empirical correlation. Good agreement was observed between the predicted CGR values and the experimental and field data. The results of this study indicate that, among the input parameters selected for the PSO-ANN, the mixture molecular weight has the greatest impact on the CGR value, and that the PSO-ANN is superior to conventional neural networks and empirical correlations. The developed model has the ability to predict the CGR with high precision over a wide range of thermodynamic conditions. The proposed model can serve as a reliable tool for quick and inexpensive but effective assessment of CGR in the absence of adequate experimental or field data.

Received: January 4, 2012; Revised: April 18, 2012; Published: April 22, 2012
Energy Fuels 2012, 26, 3432−3447; dx.doi.org/10.1021/ef300443j

1. INTRODUCTION

Gas condensates are highly valued hydrocarbon resources. Gas condensate reservoirs are characterized by complicated flow regimes and intricate thermodynamic behaviors.1−3 Multiphase flow of both gas and liquid condensate, with dynamic phase behavior, is expected during the production of a gas condensate reservoir, from the porous rock to the surface facilities. The Arun field in Indonesia, the South Pars field in Iran, and the Greentown field in the USA, with condensate-to-gas ratios (CGR) ranging from 10 to 300 STB/MMSCF, are among the largest gas condensate reservoirs in the world.1−4 Condensate production due to pressure drop in the reservoir improves project economics through condensate sales, and in the severe case of a poor gas market the produced liquid condensate is the only potential source of revenue.3−5 Gas condensate reservoirs differ from conventional gas reservoirs in their thermodynamic and flow behaviors. Prediction and optimization of hydrocarbon recovery in this type of reservoir require careful reservoir analysis, development planning, and project management.1−4 Isothermal condensation of liquids lowers the gas deliverability in the reservoir due to condensate blockage.4,6,7 Low gas relative permeability in the gas−liquid region near the production well is the main reason behind the reduction of the gas flow rate.4,6−8 A proper production strategy for gas condensate reservoirs can minimize or even avoid condensate blockage; however, this requires the prediction of key parameters such as CGR. High accuracy in prediction methods is needed in order to provide an acceptable overall estimate of the performance of gas condensate reserves before the implementation of a proper gas recovery plan. Numerical and analytical modeling, PVT studies, and characterization of retrograde gas condensate reservoirs
remain challenging to reservoir engineers, as a huge amount of time and effort is required to carry out such studies efficiently.4−6,9,10 The phase behavior of a gas condensate reservoir is strongly dependent on the P-T envelope and the thermodynamic conditions of the hydrocarbon mixture. Proper thermodynamic knowledge of the phase behavior in this type of reservoir is essential in order to predict the recovery factor and CGR. Several approaches are available in the literature for the performance analysis of retrograde gas condensate reservoirs.7−15 Thomas et al. (2009) proposed a systematic strategy for performance prediction of gas condensate reservoirs based on relative permeability and two-phase dynamic measurements.11 Although measuring nonlinear parameters such as CGR and dewpoint pressure is viable through tests such as constant composition expansion (CCE) and constant volume depletion (CVD), the laboratory and field tests are quite costly and labor and time intensive.16,17 As quick, inexpensive, and efficient alternatives, techniques that combine theoretical and/or laboratory tests with artificial intelligence systems have been suggested by a number of researchers.18−21 Several models have been reported in the literature to predict the flow and thermodynamic behaviors of gas condensate reservoirs.12,13,15,22−26 Clearly, prediction of the production rate is central to the design of wellhead equipment and surface facilities. Alavi et al. (2010) used a modified Peng-Robinson equation of state in
their model, combined with some experiments, to determine the physical properties of gas condensate mixtures. Their main goal was to develop an appropriate design to optimize the amount of condensate produced in a gas reservoir when the pressure decreases below the dewpoint.15 Jokhio et al. (2002) employed Whitson and Fevang's model to estimate the amount of produced liquid condensate.24,25 Al-Farhan et al. (2006) examined the changes in CGR at various process conditions by considering a three-stage separation process optimized using a four-layer back propagation (BP) artificial neural network (ANN).26 A number of evolutionary algorithms such as the genetic algorithm (GA), pruning algorithm, and back propagation have also been used to determine the network structure and its associated parameters (e.g., connection weights).26−32 England (2002) developed a global polynomial correlation for CGR in terms of P and T based on a PVT study under specific thermodynamic and geological conditions.33 Accurate determination of PVT properties such as CGR is necessary to predict, and to more accurately history match, production rates and recovery factors. Many correlations exist in the literature based on PVT data for various types of hydrocarbon mixtures. Most of these correlations were obtained with linear or nonlinear multivariable regression or graphical techniques, and some are not very accurate.33−38 Artificial neural network (ANN) systems are potentially reliable tools for the prediction of PVT parameters such as CGR in gas condensate reservoirs if the networks are effectively trained. The enormous number of interconnections in an ANN offers a high degree of freedom, that is, a large number of fitting variables. Therefore, an ANN is better able to describe the nonlinearity of a process than classical regression methods. Another advantage of artificial neural networks is that they are dynamically adaptive: they are able to learn and adjust to new conditions for which a previously trained model is no longer adequate. In addition, an ANN model is capable of handling systems with multiple inputs and outputs.39,40 This paper highlights the economic significance, production characteristics, production methods, and P-T behavior of gas condensate reservoirs, with a brief description of common techniques for determining the CGR. An intelligent model is developed based on a feed-forward artificial neural network (ANN) optimized using particle swarm optimization (PSO) and examined to predict CGR in terms of the dewpoint pressure, temperature, and mixture molecular weight for retrograde gas condensate reservoirs in a rapid and accurate manner. PSO is employed to determine the initial weights of the parameters used in the ANN model. The PSO-ANN model is tested using extensive field and laboratory data. Results from this study reveal that the developed model can predict the CGR with high accuracy. The developed model can serve as a reliable tool for quick and cheap but efficient estimation of CGR in the absence of sufficient laboratory and/or field data, especially during the initial stages of development of gas condensate reservoirs (e.g., determination of the number of wells, well placement, and estimation of the optimum production rate).

2. RETROGRADE GAS CONDENSATE RESERVOIRS

A typical P-T phase diagram for gas condensate reservoirs (Figure 1) has a critical temperature lower than the reservoir temperature, whereas its cricondentherm is higher than the reservoir temperature. Gas condensate systems commonly appear as a single gas phase in the reservoir at the time of discovery. As the reservoir is produced, the pressure declines from the reservoir to the production well and on to the surface facilities, resulting in gas condensation that produces a liquid phase. This isothermal condensation, caused by the reduction of pressure below the dewpoint pressure of the original hydrocarbon mixture, is called retrograde condensation.1−4,11

Figure 1. A typical gas condensate phase envelope.

Gas condensate reservoirs generally produce in the range of 30−300 STB/MMSCF (standard barrels of liquid per million standard cubic feet of gas).5,12,15,23,33−38,41−47 The depth of the majority of the reported retrograde gas condensate fields ranges from 4000 to 10,000 ft (∼1200−3000 m).5,12,15,23,33−38,41−47 Also, the ranges of pressure (P) and temperature (T) for this type of gas reservoir are usually 3000−8500 psi (20−55 MPa) and 150−400 °F (65−200 °C), respectively.5,12,15,23,33−38,41−47 A few gas condensate reservoirs may be found outside the ranges reported above for T, P, and depth, as gas condensate reservoirs may have pressures lower than 2000 psi and temperatures under 100 °F.47 These pressure and temperature values, along with the wide range of compositions, result in gas condensate mixtures that exhibit different and complex thermodynamic behaviors.1−4,10,11 Since the formation has a lower permeability to liquid and the ratio of liquid viscosity to gas viscosity is fairly high in gas-condensate reservoirs, the main portion of the condensed liquid in the reservoir is unrecoverable and is considered a condensate loss. Condensate loss is a main concern in gas reservoir engineering because the liquid condensate holds the more economically valuable intermediate and heavier components of the original hydrocarbons that are trapped in the reservoir.1−12 Characterizing gas-condensate systems is a complex task for researchers and subject matter experts, since multiphase flow in the formation and variations of the fluid composition significantly obscure the analysis of well tests. In the research area of gas-condensate reservoirs, topics such as well deliverability, PVT analysis, well test interpretation, multiphase flow, rock characterization, relative permeability models, viscous stripping of condensate, and interblock non-Darcy flow have been common problems for a long time.1−15

2.1. Three Flow Regimes. According to Fevang (1995), three main flow regimes can be identified in the hydrocarbon mixture flowing toward a production well in a gas-condensate reservoir during depletion, as depicted in Figure 2:3,25
• Single phase gas (Region 3): A region that is far from the production well and has a pressure higher than the dewpoint value; thus, only a single gas phase exists in this region.

Figure 2. Three regions of condensate and gas flow behavior in a typical gas condensate reservoir (modified from Roussennac (2001)41).

• Condensate build-up (Region 2): A region where the reservoir pressure is below the dewpoint pressure. In this case, production of liquid condensate is observed in the reservoir. However, the condensate phase will not flow due to its low saturation value. Consequently, the flowing phase in this region is still composed of just the single gas phase. The flowing gas loses its heavier components in this region as the reservoir pressure decreases during the production process.
• Near well (Region 1): Region 1 is the inner near-well region in which the reservoir pressure is further below the dewpoint. In this region, the saturation of the liquid condensate exceeds the critical value, and the condensate build-up becomes mobile. The gas phase mobility remains considerably low due to the presence of the liquid condensate phase.
In the near-wellbore area (Region 1), two counteracting forces dictate the degree to which the condensate phase can move.3,25,48−50 Viscous forces, which depend on the viscosity and velocity of the gas, can strip the condensate from the formation. The capillary pressure, which is a function of interfacial tension, is responsible for keeping the condensate in the pores of the rock. Experimental studies suggest that the relative permeability and critical saturation of condensate change with capillary number.3,25,48−50 The ratio of viscous forces to capillary forces has a significant effect on production rate when the values of condensate velocity and condensate saturation are fairly high. As the relative permeability of the condensate phase is enhanced at greater gas velocities in this region, the critical saturation of condensate decreases significantly.3,25,48−50 Another important point is that stripping by viscous forces affects condensate accumulation in Region 1, and this causes a lower liquid saturation around the wellbore.3,25,48−50
2.1.1. Condensate Blockage. Gas mobility has its maximum value farther away from the wellbore, where reservoir conditions dominate and gas saturation is 100%, as shown in Figure 2, Region 3. Gas mobility declines significantly in Region 2, where condensate accumulates and a lower gas saturation is

observed. Region 1 shows an even lower gas saturation and a higher liquid condensate saturation, such that the liquid condensate is mobilized. Even if the condensate production is negligible, the gas mobility and consequently the well deliverability remain considerably low. This phenomenon is known as "condensate blockage".3,25,41 The major resistance to gas flow is in Region 1, and the blockage effect depends mostly on the relative permeability of the gas phase and the volume of Region 1. The gas flow resistance in Region 2 is much lower than that in Region 1.
2.1.2. Hydrocarbon Recovery and Composition Change. The composition of the gas varies as it moves toward Region 2, and the flowing gas becomes leaner in intermediate and heavy components (e.g., C4+) in the reservoir.3,25,41 Hence, the heavier components build up in Regions 1 and 2 as the reservoir pressure declines. Overall, the mixture everywhere in Regions 1 and 2 becomes richer in heavier components when the reservoir pressure drops below the dewpoint. Region 1, the near-wellbore zone with significant liquid condensate saturation, can develop rapidly during production. This results in producing a leaner gas compared with the initial gas composition and leaving the intermediate and heavier components in Regions 1 and 2.3,25,41
2.2. Production Methods and Pressure Maintenance. Production and/or recovery methods for gas condensate reservoirs usually include fast depletion, slow depletion, full dry gas cycling, partial dry gas cycling, seasonal dry gas cycling, noncondensable gas (NCG) injection, injection of tuned gas composition, and water injection.3,25,41−43 Most of these methods are employed for the purpose of pressure maintenance. In general, pressure maintenance processes are classified into four main groups: 1) an active water drive after a modest decline of pressure from early production, 2) water injection techniques, 3) gas injection, or 4) combinations of these.3,25,41−43 As time goes on, some gas condensate reservoirs may reach thermodynamic conditions close to their critical points. In this case, the reservoirs appear to be good candidates for recovery methods such as injection of tuned gas compositions that offer miscibility and phase-transformation processes that can enhance the recovery factor.3,25,41−43


2.2.1. Depletion. Natural depletion in the form of fast and slow depletion involves production only. No dry or residue gas is recycled to the reservoir. Condensate recovery is effective only at the beginning of production, when the produced mixture holds all the original components of liquefiable hydrocarbons.3,25,41−43 The reservoir pressure is high enough (greater than the dewpoint pressure) that there is only a gas phase in the formation at this stage, referred to as "fast depletion". As the reservoir produces hydrocarbons and the pressure drops below the dewpoint, the retrograde condensation process starts. The heavier components of the hydrocarbon mixture condense in the reservoir. The percentage of condensate phase is variable and depends on the composition of the mixture.3,25,41−43 Recovery of the liquid condensates at lower pressures is not guaranteed. Below the dewpoint, the relative permeability of the gas phase declines, causing slow depletion of the fluids. Methods have been developed in order to maintain pressure, enhance the recovery, and finally produce condensate at the surface. Advantages and disadvantages of depletion processes include the following:3,25,41−43
Advantages:
• High recovery factors for gas under solution gas regimes
• Low cost for reservoir development
• Reservoir studies with fairly low cost needed
Disadvantages:
• Low liquid recovery efficiencies and almost irreversible loss of retrograde condensate, which is left in the formation
• Reduction in well production rates when reservoir pressure declines and retrograde condensation happens in the bottom hole
• Extra compression needed as reservoir pressure is reduced
• Difficulty in managing the production process when there is an active water drive in the reservoir
• Difficulty in managing production wells operating under notably declined reservoir pressure
2.2.2. Water Injection. Only a few gas condensate reservoirs are reported to operate under natural water drive (aquifer expansion).3,25,41−43 A strong water drive is needed in order to maintain a pressure high enough to keep condensates from forming and to remain economical. Therefore, a careful process design and an appropriate economic analysis should be implemented if a water injection process is selected as a production method or as a technique for pressure maintenance.3,25,41−43 Other important aspects, such as the displacement efficiency of water pushing gas, the bypassing potential, and the loss of the condensate phase, should be considered when water breakthrough occurs via permeable zones. The mobility ratio for water flooding is satisfactorily low, leading to high areal sweep efficiencies.3,25,41−43 The presence of layers with very different permeabilities in a formation may cause early water breakthrough in the production well, resulting in premature abandonment of the production plan.3,25,41−43 In general, the following concerns are present when employing the water injection method for gas condensate reservoirs:3,25,41−43
• The utilization of water injection in a massively heterogeneous gas-condensate reservoir is doubtful to be technically and economically feasible due to early breakthrough and the consequent loss of productivity.
• Up to 50% of the gas might be bypassed and trapped in the formation.
• It may not be possible to mobilize the gas previously trapped in the formation through a subsequent depressurization stage.
• Well lift could be a serious problem if high water cuts are experienced before and during the blowdown.
2.2.3. Gas Cycling. In dry gas injection processes, the goal is to keep the reservoir pressure over or near the dewpoint in order to minimize the amount of condensate.3,25,41−43 Dry gas cycling is considered a particular type of miscible displacement to improve recovery.3,25,41−43 This method is very efficient in terms of displacement efficiency and achieving economical condensate production rates, especially when there is no market available for the produced gas. There are two main groups of dry gas cycling, namely full cycling and partial cycling.3,25,41−43 Full gas cycling is when all the dry gas separated at the surface is recycled into the formation. Such cycling generally leads to a material balance deficiency, as pressure may slowly decline. Some dry gas can be added as make-up gas to the injection to prevent this occurrence. Partial cycling provides lower pressure maintenance over the dewpoint than the full gas cycling method.3,25,41−43 A gas sales contract usually dictates the amount of dry gas for injection into the formation, where any extra gas produced is reinjected. For example, partial cycling in the form of seasonal gas injection is able to provide more dry gas over the off-peak summer months if the reservoir can still be operated at its maximum production rate.3,25,41−43 Besides lowering the loss of condensate in the reservoir, the pressure maintenance resulting from gas cycling hinders water influx and the entrapment of wet gas, with its retained condensate, behind the advancing water front.3,25,41−43 The gas cycling process ends when gas breakthrough into the production wells is observed and the rate of condensate production is no longer economical. Dry gas cycling methods should be evaluated in terms of important features such as mobility ratio, heterogeneity, and gravity, which strongly affect the recovery factor.3,25,41−43 The demand for dry gas generally controls the amount of dry gas available for gas cycling. Therefore, the main disadvantage of dry gas cycling is that the sale of produced gas has to be postponed until condensate production declines to its economical limit. The use of inert gas to compensate for the amount of gas required for the gas cycling process is an economical option for pressure maintenance and recovery enhancement.3,25,41−43 Inert or noncondensable gas injection generally uses nitrogen, carbon dioxide, and combustion or flue gases. Nitrogen is the most useful gas for this type of cycling since it is noncorrosive and has a high compressibility factor, requiring less volume. The most important factors affecting the economic viability of a gas-cycling project are the liquid content of the reservoir, product prices, and the cost of makeup gas. The utilization of inert gas cycling has some advantages and disadvantages. The key benefit is that inert gas cycling permits early sale of the gas and condensate phases, leading to higher net income.3,25,41−43 In addition, a greater recovery of total hydrocarbons is obtained since the formation holds large amounts of nitrogen rather than hydrocarbon gas when the reservoir is abandoned. Operational problems and higher operating costs result from corrosion if combustion or flue gas is employed for the cycling process. Extra capital costs are needed to separate the inert gas from the sales gas, to pretreat it prior to compression, and to provide surface reinjection facilities. Also, premature breakthrough of inert gas due to high degrees of reservoir heterogeneity causes
additional operating expenses to find marketable gas sales. Some other shortcomings of all types of gas cycling processes include the following: 1) gas injection is an expensive process; 2) it takes too long to reach the ultimate recovery of the field; 3) the production rate and recovery factor of gas are fairly low, and therefore gas production is less beneficial; 4) heavy condensate fractions of hydrocarbons are precipitated in bottom-hole regions; 5) economic efficiency is poor when an active water drive is present; and 6) reservoir studies are difficult and costly.3,25,41−43

3. CGR DETERMINATION TECHNIQUES

The value of CGR in a gas-condensate reservoir depends on various factors such as API gravity, gas composition, pressure, and temperature. The preference is to determine CGR using detailed compositional tests if adequate experimental facilities and samples are available and CGR values with high accuracy are required. However, it is common in PVT studies to develop CGR empirical correlations, equation of state (EOS) models, and numerical models, owing to the large number of parameters and the complexity of the experiments required to obtain CGR. It should be noted here that empirical and theoretical techniques such as EOS and intelligent methods usually have difficulties predicting thermodynamic properties around the critical point with high precision. Therefore, the accuracy of available empirical relationships strongly depends on the quality of the samples and the proximity of reservoir conditions (e.g., P and T) to the fluid critical point.41−43 In this section, some of the techniques used for CGR determination are described briefly.
3.1. Laboratory Methods. There are two main experimental methodologies for measuring CGR, namely constant composition expansion (CCE) and constant volume depletion (CVD). CCE is primarily used to determine the dewpoint of hydrocarbon mixtures and also the CGR. In addition, the CVD method is conducted in order to simulate the behavior of reservoir production. Flash vaporization is another name for CCE; it is carried out by expanding a mixture with a certain composition (zi) in sequential steps of pressure reduction. The technical reader is encouraged to consult refs 16 and 17 for a more detailed description of the experimental procedures.
3.2. Empirical Correlations. Several empirical correlations have been proposed for CGR prediction in the literature.33−38,44 Some field data such as reservoir pressure (P), reservoir temperature (T), and fluid specific gravities are needed to develop PVT empirical correlations. For instance, Cho et al. (1985) developed an empirical correlation to estimate the maximum CGR value for retrograde gas-condensate reservoirs.44 Their correlation is expressed in terms of reservoir temperature and the mole fraction of heptanes-plus. Reliable experimental data are preferred over the results obtained from correlations when assessing gas-condensate reservoirs.33−38,44 Nevertheless, accurate experimental data often are not available. One advantage of correlations is that they are helpful when only modest experimental information is available.
3.3. Analytical Methods. Analytical methods are usually based on an EOS to evaluate CGR for gas-condensate reservoirs. This technique is very efficient for describing the phase behavior of well characterized mixtures at particular temperatures. However, the EOS should be tuned for each mixture composition and has limited predictive capability for new compositions. Therefore, an EOS method requires a detailed description of the mixture's composition. Obtaining such quantities is expensive and time-consuming. Furthermore, there is no universal EOS available for hydrocarbon mixtures that is able to predict CGR and other thermodynamic parameters over wide ranges of compositions, temperatures, and pressures with an acceptable accuracy. Some of the available analytical methods include EOS models, material balance methods, and PVT delumping techniques.9,42,43,51−56
3.4. Numerical Modeling. Numerical simulations are another tool to predict CGR. An accurate numerical method to compute the CGR and well productivity is a fine-grid system. A fine-grid model has the ability to consider high-velocity effects, and most commercial simulators now incorporate options to consider non-Darcy flow and complicated phase behavior in different regions of gas-condensate reservoirs. Although numerical simulations are suitable for forecasting reservoir behavior in detail, simpler and faster engineering calculations are more appropriate for some applications such as CGR prediction.42,43 For instance, the book titled 'The Practice of Reservoir Engineering' presents some simple theoretical equations and forecasting methods to compute the CGR value and other important parameters (i.e., sweep efficiency, relative permeability, and two-phase Z-factor) in gas condensate reservoirs when the gas cycling technique is employed.43 It should also be noted here that a strong understanding of condensate banking behavior in gas condensate reservoirs is required for the analysis of well test data and the prediction of production performance using numerical methods.7,8,10,22,41−43

4. EXPERIMENTAL PROCEDURE AND DATA COLLECTION

The CGR of the hydrocarbon mixtures from the South Pars/North Dome gas field in Iran was measured by the CCE and CVD methods, and an average value of CGR was used for the intelligent model developed in this paper. The South Pars natural gas-condensate field is located in the Persian Gulf. It is the world's largest single gas field, shared between Iran and Qatar. The field is composed of two independent gas-bearing formations, namely Kangan and Upper Dalan. Major gas reservoirs in this field mainly consist of limestone and dolomite. Both porosity (ϕ) and permeability (k) vary widely in the field. For instance, the porosity varies from 0 to 35% and the permeability from zero to 1000 mD in some regions of this gigantic gas field. In addition to the data generated from the experimental part of this work, some data from the literature were also used in developing the PSO-ANN model for CGR prediction. A part of the data used in this study is presented in Appendix A, and the rest can be found in the open literature.5,12,15,23,33−38,41−85

5. ARTIFICIAL NEURAL NETWORKS THEORY

McCulloch and Pitts (1943) initially developed the artificial neural network (ANN) theory as a mathematical technique.86 ANNs are biologically inspired models based on different features of brain functionality.39,40,86 In recent years, ANNs have been broadly employed for numerous applications such as process control, behavior prediction, pattern recognition, system classification, communication, and vision.39,40,86 In this framework, an ANN is generally trained to solve nonlinear and complex problems whose mathematical modeling appears to be a difficult job.39,40,86 ANNs can take information from a complicated system and provide an adequate model even when the data are noisy and incomplete. The objective of an ANN model is to map a set of input data to a set of output patterns of a process. The network carries out this mapping scheme by training on previous examples and by defining the input and output parameters for a particular system. ANNs contain many computational sections joined together in a fairly complicated parallel structure. In the processing units, there are a great number of simple components

called neurons that behave like axons to obtain an empirical correlation between the inputs and outputs of a given process.39,40,86 Figure 3 presents a classical interconnected neural network in the form of a multiple-layer structure.

Figure 3. Architecture of three layers in an ANN model.30,31

The network is composed of one input layer, one hidden layer, and one output layer with various duties. Each linking line holds a certain weight. ANNs are trained via adjustment of the connection weights until the desired values of the output parameters are estimated with an acceptable accuracy.30,31,86−89 The main feature of a neural network is the training stage, based on a set of numerical magnitudes measured in the modeled process. The precision of the representative model is strongly dependent on the topology (structure) of the neural network.30,31,86−89 There are a number of different neural network structures and learning algorithms for the ANN technique. The back-propagation (BP) learning algorithm is recognized as one of the most general algorithms, and it was selected in this study to predict the magnitude of the condensate-to-gas ratio (CGR).30,31 BP is applied to a multiple-layer feed-forward network that includes hidden layers linking the input parameters to the output.30,31,39,40 The simplest implementation of back-propagation (BP) training updates the biases and neural network weights in the direction of the negative gradient.30,31,39,40 ANNs generally try to identify the data patterns throughout the training process. ANNs acquire flexibility during this process, letting the analyst keep testing the data patterns even if the working environment of the model changes. Although an ANN might take some time to learn a sudden alteration, it is exceptionally good at adjusting to continuously varying information.30,31,86−89
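To make the three-layer structure of Figure 3 concrete, the following minimal Python/NumPy sketch (an illustration, not the authors' code) shows one feed-forward pass with a sigmoid hidden layer and a linear output neuron, as used in this study; the array shapes, function names, and example numbers are assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One feed-forward pass of a 3-n_hidden-1 network.
    x        : (3,) normalized inputs, e.g. [T, Pd, Mw]
    w_hidden : (n_hidden, 3) hidden-layer weights; b_hidden : (n_hidden,) biases
    w_out    : (n_hidden,) output-neuron weights;  b_out    : scalar bias
    returns  : predicted (normalized) CGR
    """
    h = sigmoid(w_hidden @ x + b_hidden)   # sigmoid transfer in the hidden layer
    return float(w_out @ h + b_out)        # linear transfer in the output layer

# Example with the 3-8-1 architecture reported later in the paper
rng = np.random.default_rng(0)
w_h, b_h = rng.uniform(-1, 1, (8, 3)), rng.uniform(-1, 1, 8)
w_o, b_o = rng.uniform(-1, 1, 8), 0.0
print(forward(np.array([0.2, -0.5, 0.8]), w_h, b_h, w_o, b_o))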

6. PARTICLE SWARM OPTIMIZATION

Particle swarm optimization (PSO) is a global optimization algorithm for problems in which a point or surface in a multidimensional space represents the best solution.90,91 As shown in Figure 4, PSO is initialized with a set of random particles. Random positions and velocities are then assigned to the particles. After that, the algorithm searches for optima through a number of iterations in which the particles fly through a hyperspace to find potential solutions. As time goes on, the particles learn to respond appropriately, based on their own experience and the experience of the other particles in their set.90−93

Figure 4. A basic procedure for the PSO approach.

Every particle keeps track of the best position it has reached in the hyperspace to date.90,91 The value of this best position is called the personal best. The overall best position achieved by any particle in the population is called the global best. During the iteration process, every particle experiences a tendency toward its own personal best and also toward the global best. This is determined through the calculation of a new velocity term for each particle, based on the distance from its personal best position and from the global best position. Random weights are then assigned to the personal best and global best velocity contributions to generate the new value of the particle velocity, which affects the position of the particle in the next iteration.90−93 The particle fitness mentioned in Figure 4 is a measure of how good the solution characterized by a position is. For minimization problems, which are the most typical category of cases solved by PSO, lower magnitudes of the fitness are better than higher values, whereas greater values of the fitness are preferred for maximization problems. The position of a particular particle in a PSO-ANN model represents a set of network weights for the current iteration.90,91 The dimension of each particle is equal to the number of weights in the network. The learning error of the network is calculated using the mean squared error (MSE). The mean squared error of the network (MSEnet) is defined as

MSE_{net} = \frac{1}{2} \sum_{k=1}^{G} \sum_{j=1}^{m} [Y_j(k) - T_j(k)]^2    (1)

where m is the number of output nodes, G is the number of training samples, Yj(k) is the expected output, and Tj(k) is the actual output. As MSEnet approaches zero, the error of the network decreases. The most important advantage of the PSO approach in comparison with other global minimization techniques, such as simulated annealing, is that the many members contributing to the PSO model make the technique remarkably flexible, since the PSO method prevents the neural network model from becoming stuck in local minima.90−93 It should also be noted here that the particle swarm optimization (PSO) model is similar to the genetic algorithm (GA) model; however, the PSO employs a collaborative approach rather than a competitive one.
6.1. Training Using Particle Swarm Optimization. Given a fixed architecture of the ANN, all weights in the connections


can be coded as a genotype and applied in the PSO algorithm in order to train the network, where the fitness function is the mean squared error of the network.90,91 Using validation and testing sets, some changes can be made to obtain better fitness magnitudes with better generalization characteristics. To train the network by particle swarm optimization, the following equations are used:90−93

V_i^{t+1} = \omega V_i^t + c_1 r_1^t (P_i^t - X_i^t) + c_2 r_2^t (P_g^t - X_i^t)    (2)

X_i^{t+1} = X_i^t + V_i^{t+1}    (3)

where V_i^t is the velocity vector of particle i at iteration t, r_1 and r_2 indicate random numbers in the range [0, 1], P_i^t represents the best position ever reached by particle i (the personal best), and P_g^t refers to the global best position in the swarm up to iteration t.90,91 Other parameters in the above equations are problem dependent variables. For instance, c1 and c2 denote "trust" parameters indicating how much confidence the particle has in itself (c1, the cognitive parameter) and how much confidence it has in the swarm (c2, the social parameter), and ω represents the inertia weight. The equations describe how the particle velocity and position vary during the iteration procedure.90,91 During the training process, the above equations adjust the network weights until a stopping condition is satisfied, in which case a low MSE is achieved; in addition, a maximum number of iterations is used to end the algorithm if the examined stopping condition does not lead to termination in an appropriate time. Generally, the inertia weight (ω) decreases linearly with the iteration number as follows:90−93

\omega^t = \omega_{max} - \left( \frac{\omega_{max} - \omega_{min}}{t_{max}} \right) t    (4)

where ωmax and ωmin are the initial and final values of the inertia weight, respectively, t is the current iteration number, and tmax is the maximum number of iterations used in the PSO. Values of ωmax = 0.9 and ωmin = 0.4 have been found to be appropriate through empirical studies.90,91
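The following is a minimal, illustrative Python sketch (not the authors' implementation) of how eqs 2−4 can be combined to search for a weight vector that minimizes a fitness function such as the MSE of eq 1; the swarm settings, helper names, and the toy fitness used in the demonstration are assumptions.

import numpy as np

def pso_train(fitness, dim, n_particles=21, t_max=500,
              c1=2.0, c2=2.0, w_max=0.9, w_min=0.4, seed=0):
    """Minimize `fitness` over a `dim`-dimensional weight vector using eqs 2-4."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # positions = candidate weight sets
    v = np.zeros((n_particles, dim))                  # velocities
    p_best = x.copy()                                 # personal best positions
    p_val = np.array([fitness(p) for p in x])         # personal best fitness values
    g_best = p_best[p_val.argmin()].copy()            # global best position

    for t in range(t_max):
        w = w_max - (w_max - w_min) * t / t_max       # eq 4: linearly decreasing inertia
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)   # eq 2
        x = x + v                                                      # eq 3
        for i in range(n_particles):
            f = fitness(x[i])
            if f < p_val[i]:                           # update personal best
                p_val[i], p_best[i] = f, x[i].copy()
        g_best = p_best[p_val.argmin()].copy()         # update global best
    return g_best, p_val.min()

if __name__ == "__main__":
    # Toy quadratic fitness; in a PSO-ANN the fitness would be the MSE of eq 1
    # evaluated by running the feed-forward pass with the candidate weights.
    best, best_val = pso_train(lambda wts: float(np.sum(wts ** 2)), dim=5, t_max=100)
    print(best_val)  # should approach 0 for this toy fitness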

6.2. PSO-ANN Model Development. A variety of models were calibrated to find the optimum configuration of the PSO-ANN algorithm. A PSO-ANN structure with appropriate parameters can exhibit a good performance.90−93 To design a proper network, six (6) parameters were studied, as given below:
a) Acceleration constant for the global best (c1),
b) Acceleration constant for the personal best (c2),
c) Time interval,
d) Number of particles,
e) Maximum number of iterations corresponding to a low MSE, which defines the stopping condition, and
f) Number of hidden neurons.
The constant for the global best (c1), the constant for the personal best (c2), the time interval, and the number of particles have significant effects on the optimal configuration of the PSO-ANN model.90−93 In addition, other parameters, namely the maximum number of iterations, the number of neurons in the hidden layer, and the length of the input and output data, were examined for calibration and optimization of the model. The objective function employed here is the mean squared error (MSE). The optimization step is vital to make sure that the MSE, or training error, decreases with an increase in the number of iterations. The performance of the PSO-ANN is assessed using the coefficient of determination (R2) and the Nash-Sutcliffe coefficient (E2). If the values of R2 and E2 approach 1, the selected network is a perfect model for predicting the system behavior. The corresponding formulas for the above statistical parameters and two other performance measures (the mean absolute error (MEAE) and the maximum absolute error (MAAE)) are presented in Appendix B. It should be noted here that as MEAE and MAAE become lower, better performance of the developed neural networks is expected.
To calibrate the PSO-ANN model, three layers were used in this study. The number of input neurons is a function of the number of precursor data, and there is just one parameter (neuron) in the output layer. The PSO-ANN was trained and then tested to determine the best structure using different settings, as follows:
a) c1 and c2 values were changed from 1.4 to 2.4.
b) The values of 0.005, 0.010, 0.015, 0.020, 0.025, and 0.030 were tested for the time interval.
c) The number of particles was varied between 15 and 25 in the training mechanism.
d) The numbers of hidden neurons examined in this study were 3, 5, 7, and 10.
e) Maximum iterations were examined between 300 and 600.
6.3. Selection of Input Variables. In general, an ANN works as a black box model that relates input variables to output parameters, representing a mathematical relationship. It is crucial that the key physical parameters contributing to the output of the ANN model are used. Also, it is desirable to minimize the number of inputs for an ANN system. Selection of the best inputs is based on an understanding of the physics of the problem. As previously mentioned, experimental works, numerical modeling, empirical equations, and PVT studies of gas condensate reservoirs have shown that CGR is a function of the temperature, pressure, and composition of the fluid mixture in the reservoir. To take into account the effect of mixture composition, some published research studies used the molecular weight of C7+.41−43 In addition, there are some studies in the open literature that considered the molecular weight of C30+ in their computations.41−43,60−85 In order to select the best inputs for the ANN method, the following sets of input variables were examined, as shown in Table 1: 1) T, Pd, MwC30+; 2) T, Pd, MwC7+; 3) T, Pd, mixture composition; and 4) T, Pd, Mw of the mixture.

Table 1. Performance of PSO-ANN for the Various Input Data Sets

statistical parameter | input data set no. 1 | input data set no. 2 | input data set no. 3 | input data set no. 4
R2       | 0.8995  | 0.9108  | 0.9447 | 0.9705
E2       | 0.8859  | 0.9036  | 0.9355 | 0.9548
MSE      | 0.0976  | 0.0777  | 0.0541 | 0.0438
MEAE (%) | 12.1133 | 11.2461 | 7.7733 | 6.0004
MAAE (%) | 15.6432 | 13.5488 | 9.0064 | 8.1192

Input data set no. 3, which includes the mole (or mass) fractions of all components in the mixture, seems appropriate for the PSO-ANN model; however, it makes the computation very time-consuming. On the other hand, it is difficult to construct such a large data set in the neural network system, and many input data sometimes cause errors in the calculation of the desired parameter. With input data set no. 4, however, the effect of the mass or mole percentages of all components is considered by selecting the molecular weight and dewpoint pressure of the mixture. As shown in Table 1, input data set no. 4 exhibits the best performance for the PSO-ANN among the selected input data sets.
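As a reference for how the statistical measures reported in Tables 1 and 3−7 are typically computed, the following is an illustrative Python sketch; the exact formulas used by the authors are in Appendix B (not reproduced here), so these standard definitions are assumptions.

import numpy as np

def performance_metrics(y_meas, y_pred):
    """Assumed standard definitions of R2, Nash-Sutcliffe E2, MSE, MEAE, and MAAE."""
    y_meas, y_pred = np.asarray(y_meas, float), np.asarray(y_pred, float)
    resid = y_meas - y_pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_meas - y_meas.mean()) ** 2)
    rel_err = np.abs(resid / y_meas) * 100.0           # absolute percentage errors
    return {
        "R2": np.corrcoef(y_meas, y_pred)[0, 1] ** 2,   # coefficient of determination
        "E2": 1.0 - ss_res / ss_tot,                    # Nash-Sutcliffe efficiency
        "MSE": np.mean(resid ** 2),
        "MEAE (%)": rel_err.mean(),                     # mean absolute percentage error
        "MAAE (%)": rel_err.max(),                      # maximum absolute percentage error
    }

print(performance_metrics([31.8, 120.0, 250.0], [31.5, 118.2, 244.0]))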


6.4. Data and Model Evaluation. Availability of reliable data is very important in the design of any ANN system. The first step which should be considered in an ANN system is finding the parameters that considerably affect the target parameter. CGR is a strong function of the reservoir fluid properties. It is clear that reservoir characteristics (e.g., depth and permeability) dictate the pressure distribution, local compositions, and temperature in gas condensate reservoirs, which affect instantaneous values of CGR. Considering the effects of reservoir characteristics, the three main parameters that have a direct influence on the amount of CGR are the dewpoint pressure (Pd), the reservoir temperature (T), and the molecular weight of the mixture (Mw).10−13,18−25 A set of 295 experimental data points was collected to develop and examine the proposed PSO-ANN model. The data series includes 45 data points from the South Pars gas-condensate reservoir and 250 data points from the literature.5,12,15,23,33−38,41−85 The ranges of the parameters used to develop the PSO-ANN are given in Table 2.

Table 2. Ranges of the Parameters Used to Estimate the CGR

type   | property                                      | min  | max
input  | temperature (T), °F                           | 152  | 355
input  | dewpoint pressure (Pd), psi                   | 2950 | 8642
input  | molecular weight (Mw), g/mol                  | 16.5 | 44
output | condensate-to-gas ratio (CGR), STB/MMSCF      | 4.5  | 293

It should be noted here that Table 2 covers wide ranges of the important parameters contributing to CGR. The above ranges are applicable to most of the real gas condensate cases encountered. If a data point is selected very far from the above intervals, the methodology to implement the PSO-ANN does not change. However, some parameters of the PSO-ANN, such as the number of hidden neurons and the number of particles, should be adjusted in a way that an acceptable match is obtained. Another alternative for such a case is adding a wavelet function to the neural network structure in order to increase the accuracy of the predictions.88−93
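Because the model is only calibrated over the intervals in Table 2, a simple guard such as the following can flag query points that fall outside the training ranges before a PSO-ANN prediction is trusted; this is an illustrative Python sketch, and the dictionary keys and behavior are assumptions.

# Applicability ranges taken from Table 2 (inputs only)
RANGES = {
    "T (F)": (152.0, 355.0),
    "Pd (psi)": (2950.0, 8642.0),
    "Mw (g/mol)": (16.5, 44.0),
}

def outside_training_range(sample):
    """Return the list of input variables that fall outside the Table 2 intervals."""
    return [name for name, (lo, hi) in RANGES.items()
            if not lo <= sample[name] <= hi]

query = {"T (F)": 210.0, "Pd (psi)": 9100.0, "Mw (g/mol)": 28.3}
flags = outside_training_range(query)
if flags:
    print("Extrapolation warning for:", flags)  # e.g., Pd above the calibrated range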

7. RESULTS AND DISCUSSION

Precise determination of the CGR and prediction of its variations are central to developing a proper PVT model for use in reservoir engineering simulators. The development of an artificial neural network (ANN) model used to predict the condensate-to-gas ratio is discussed here. The ANN model trained with the back-propagation network employed the Levenberg−Marquardt algorithm for prediction of the CGR. The transfer functions in the hidden and output layers were sigmoid and linear, respectively. PSO was used as the neural network optimization algorithm, and the mean squared error (MSE) was used as the cost function in this algorithm. The goal of implementing the PSO was to minimize the cost function. Every weight in the network was initially set in the range of [−1, 1], and every initial particle was a set of weights generated randomly in the range of [−1, 1]. For example, the initial and final values of the inertia weight (ωmin and ωmax) were 0.4 and 0.5, respectively; r1 and r2 were also two random numbers in the range of [0, 1]. The PSO-ANN model was evaluated with 3, 5, 7, 8, and 10 hidden neurons in the hidden layer. The results obtained using different numbers of hidden neurons are shown in Table 3. The accuracy of the simulations increased with an increase in the number of hidden neurons from 3 to 8. At 3, 5, and 7 hidden neurons, the network did not appear to have enough degrees of freedom to learn the process accurately. However, the performance of the PSO-ANN began to decline at 10 hidden neurons. Therefore, the optimum value for the hidden neurons is 8 in the PSO-ANN model, because it takes a long time for the PSO-ANN system to be trained at 10 hidden neurons and the model may overfit the data. The acceleration constants (c1 and c2) correspond to the stochastic acceleration that pushes each particle toward the personal best and global best positions.91−93 The c1 constant determines the effect of the global best solution over the particle, while the c2 constant describes how much effect the personal best solution has over the particle. Reasonable results were obtained when the PSO-ANN was trained with acceleration constants (c1 and c2) in the range of 1.4−2.4. Performance of the PSO-ANN improved as the values of c1 and c2 increased from 1.4 to 2.0. The main reason is that the particles are attracted toward the personal best and global best with higher acceleration and abrupt movements when the c1 and c2 values increase from 1.4 to 2.0. Afterward, the PSO-ANN performance declined as the c1 and c2 values increased to 2.2 and 2.4. This trend indicates that the PSO-ANN is not able to converge with higher magnitudes of the acceleration constants. Thus, the best c1 and c2 values required for an optimal structure of the PSO-ANN model were 2, where R2 and E2 were determined to be 0.9789 and 0.9581 for the training process and 0.9087 and 0.8563 for the testing process, respectively. In addition, the minimum values of MSE, MEAE, and MAAE were obtained for both the training and testing processes at c1 and c2 values equal to 2. The results for the PSO-ANN trained with different values of c1 and c2 are listed in Table 4. The number of particles in the PSO-ANN dictates the amount of space covered in the problem. Table 5 shows the effect of the number of particles on the performance of the developed intelligent model. As seen in Table 5, the PSO-ANN performance increases with an increase in the number of particles from 15 to 21. The developed neural network model was not able to describe the data behavior well at 15 and 18 particles because the space covered in the problem was not adequate for these numbers. Then, a decrease in the PSO-ANN performance was observed when the number of particles

Table 3. Performance of PSO-ANN Based on the Number of Hidden Neurons

Training:
number of hidden neurons | R2 | E2 | MSE | MEAE (%) | MAAE (%)
3  | 0.8253 | 0.8244 | 0.0823 | 8.6542 | 13.7561
5  | 0.8724 | 0.8609 | 0.0546 | 7.8479 | 11.8823
7  | 0.9311 | 0.9038 | 0.0203 | 6.4563 | 9.6542
8  | 0.9875 | 0.9563 | 0.0116 | 4.3421 | 7.6542
10 | 0.9227 | 0.9012 | 0.0478 | 7.7534 | 9.899

Testing:
number of hidden neurons | R2 | E2 | MSE | MEAE (%) | MAAE (%)
3  | 0.7520 | 0.7659 | 0.0998 | 10.4588 | 15.7865
5  | 0.8778 | 0.8689 | 0.0841 | 9.0087 | 13.9889
7  | 0.9151 | 0.9092 | 0.0314 | 8.1132 | 11.6667
8  | 0.9601 | 0.9587 | 0.0223 | 6.7681 | 8.9326
10 | 0.8831 | 0.8969 | 0.0573 | 9.2694 | 12.6432


Table 4. Performance of the PSO-ANN Model for Various Magnitudes of c1 and c2

Training:
c1 and c2 values | R2 | E2 | MSE | MEAE (%) | MAAE (%)
1.4 | 0.9344 | 0.8894 | 0.0784 | 8.9053 | 12.8762
1.6 | 0.9498 | 0.9126 | 0.0679 | 7.0542 | 11.3453
1.8 | 0.9559 | 0.9304 | 0.0489 | 6.1123 | 10.6402
2.0 | 0.9789 | 0.9581 | 0.0206 | 4.7659 | 8.6574
2.2 | 0.9673 | 0.9402 | 0.0341 | 5.9978 | 9.9897
2.4 | 0.9514 | 0.9375 | 0.0452 | 7.4321 | 10.9769

Testing:
c1 and c2 values | R2 | E2 | MSE | MEAE (%) | MAAE (%)
1.4 | 0.8254 | 0.7945 | 0.0875 | 9.8144 | 13.8956
1.6 | 0.8409 | 0.8131 | 0.0764 | 8.1635 | 12.4364
1.8 | 0.8723 | 0.8389 | 0.0583 | 7.1234 | 11.5763
2.0 | 0.9087 | 0.8563 | 0.0324 | 5.8548 | 8.7423
2.2 | 0.8730 | 0.8379 | 0.0461 | 6.8863 | 10.4421
2.4 | 0.8541 | 0.8231 | 0.0584 | 8.5627 | 11.7676

Table 5. Effect of Number of Particles on Performance of the PSO-ANN

Training:
number of particles | R2 | E2 | MSE | MEAE (%) | MAAE (%)
15 | 0.9694 | 0.9678 | 0.0875 | 7.4468 | 12.7845
18 | 0.9367 | 0.9311 | 0.0903 | 9.1563 | 13.5473
20 | 0.9711 | 0.9697 | 0.0869 | 6.5422 | 10.9863
21 | 0.9897 | 0.9789 | 0.0764 | 4.9977 | 8.6451
22 | 0.9706 | 0.9682 | 0.0964 | 6.8863 | 9.4821
25 | 0.9719 | 0.9696 | 0.0947 | 6.7791 | 9.3789

Testing:
number of particles | R2 | E2 | MSE | MEAE (%) | MAAE (%)
15 | 0.8965 | 0.8911 | 0.0858 | 9.5476 | 14.8754
18 | 0.8634 | 0.8549 | 0.0917 | 11.0765 | 15.4353
20 | 0.9249 | 0.9167 | 0.0826 | 9.2254 | 12.9863
21 | 0.9354 | 0.9231 | 0.0787 | 6.1066 | 9.1234
22 | 0.9146 | 0.9078 | 0.0955 | 8.7789 | 11.5462
25 | 0.9156 | 0.9110 | 0.0949 | 8.5673 | 11.2751

Table 6. Effect of Sampling Size on Performance of the Neural Network Model

Training:
number of training samples | R2 | E2 | MSE | MEAE (%) | MAAE (%)
50  | 0.9213 | 0.9189 | 0.075 | 9.6045 | 12.8256
100 | 0.9304 | 0.9279 | 0.065 | 8.4235 | 10.6446
150 | 0.9486 | 0.9382 | 0.055 | 7.2243 | 9.1455
200 | 0.9565 | 0.9496 | 0.048 | 6.3345 | 8.4261
250 | 0.9633 | 0.9571 | 0.044 | 5.4366 | 7.3422
300 | 0.9649 | 0.9581 | 0.043 | 5.3879 | 7.5611
400 | 0.9654 | 0.9588 | 0.042 | 5.4107 | 7.6423
500 | 0.9641 | 0.9576 | 0.042 | 5.4482 | 7.6988

Testing:
number of training samples | R2 | E2 | MSE | MEAE (%) | MAAE (%)
50  | 0.9112 | 0.9098 | 0.0787 | 10.1807 | 13.9879
100 | 0.9201 | 0.9187 | 0.0683 | 8.9289 | 11.6002
150 | 0.9383 | 0.92883 | 0.0578 | 7.6577 | 9.9607
200 | 0.9461 | 0.9402 | 0.0504 | 6.7145 | 9.2004
250 | 0.9628 | 0.9475 | 0.0462 | 5.7627 | 8.1560
300 | 0.9644 | 0.9485 | 0.0451 | 5.7111 | 8.2683
400 | 0.9649 | 0.9492 | 0.0441 | 5.7353 | 8.3576
500 | 0.9636 | 0.9480 | 0.0449 | 5.7750 | 8.4105

was set to 22 (see Table 5). The main reason behind this decline is that the space covered in the problem appears to be too wide. The model efficiency improved slightly when the number increased to 25. Therefore, it can be concluded that the optimum number of particles is 21. It should also be noted here that as the number of particles increases, a greater amount of space is covered in the problem, leading to a slower optimization process. The same procedure was followed for the other parameters of the network, and the optimal configuration for the PSO-ANN network was obtained as follows: a) Number of hidden neurons = 8, b) Number of maximum iterations = 500, c) c1 and c2 = 2, d) Time interval = 0.0100, and e) Number of particles = 21.
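For reference, the reported optimal settings can be collected in a single configuration object; the following is an illustrative Python sketch (the class and field names are assumptions) that could be passed to a trainer such as the pso_train sketch shown in section 6.1.

from dataclasses import dataclass

@dataclass
class PsoAnnConfig:
    n_hidden: int = 8           # hidden neurons (3-8-1 architecture)
    max_iterations: int = 500   # PSO iterations
    c1: float = 2.0             # acceleration constant (global best term)
    c2: float = 2.0             # acceleration constant (personal best term)
    time_interval: float = 0.0100
    n_particles: int = 21

config = PsoAnnConfig()
print(config)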


Based on the above, the best ANN architecture is 3−8−1 (3 input units, 8 neurons in the hidden layer, and 1 output neuron). A common question in neural networks is: what sample size is required to train the network? No simple procedure has been proposed to answer this question. Certainly, adequately large and representative data are required for the neural network to learn. However, the number of samples needed for training depends on several parameters such as the factor selection and the structure of the neural network model.88−91

Once the factor selection and neural network architecture were determined, different sampling sizes were employed for training, while the testing phase of the PSO-ANN was conducted using 45 real data points from the South Pars gas field. The size of the training data set was varied between 50 and 500 points. Table 6 shows the results of the network performance based on the statistical investigations. The performance of the training process in terms of MSE alone is shown in Figure 5. According to Table 6 and Figure 5, it can be concluded that a sample of 250 data points is sufficient for training the PSO-ANN system. Therefore, 250 data points were chosen for the purpose of network training in this study. The remaining 45 samples were put aside to be employed for testing and validating the network's integrity and robustness.

Figure 5. Training performance (MSE) vs sampling size.

In order to evaluate the performance of the PSO-ANN algorithm, a back-propagation neural network (BP-ANN) was applied with the same data sets used in the PSO-ANN model. Predicted and measured normalized CGR values at the training and validation phases for both the BP-ANN and PSO-ANN models were compared, as shown in Figures 6 and 7. The input variables of an ANN are frequently of various types with different orders of magnitude, such as temperature (T) and dewpoint pressure (Pd) in the case studied here. The same usually holds for the output parameters. Thus, it is necessary to normalize the input and output parameters based on the ranges of the data so that they fall within a particular range, for instance, setting the minimum to −1 and the maximum to +1 to have all the data within the interval [−1, 1]. It should be mentioned here that the legend of the vertical axis in Figures 6 and 7
presents the normalized CGR, which was calculated using the following expression:

\text{Normalized CGR} = \frac{2(\text{CGR} - \text{CGR}_{min})}{\text{CGR}_{max} - \text{CGR}_{min}} - 1    (5)

where CGRmin and CGRmax are the minimum and maximum CGR of the data used in this study, respectively. The PSO-ANN was trained by 50 generations followed by a BP training procedure. A learning coefficient of 0.7 and a momentum correction factor of 0.001 were used for the back-propagation training algorithm.

Figure 6. Measured vs predicted CGR using the BP-ANN model: a) training and b) testing.

Figure 7. Measured vs predicted CGR using the PSO-ANN model: a) training and b) testing.

Table 7. Performances of the PSO-ANN Model, the BP-ANN Model, and a Correlation Proposed by Fattah et al. (2009)38

statistical parameter | PSO-ANN | BP-ANN | Fattah et al. correlation
MSE      | 0.0617 | 0.0981  | 0.1054
MEAE (%) | 2.4503 | 6.7864  | 7.6654
MAAE (%) | 6.9114 | 10.6875 | 11.7112
R2       | 0.9721 | 0.8986  | 0.8579
E2       | 0.9497 | 0.8878  | 0.8432

As seen in Figure 7, there is a good agreement between the model outputs and the experimental CGR, based on the testing data simulated at this phase of the study. The simulation performance of the PSO-ANN model was evaluated on the basis of the mean squared error (MSE), mean absolute error (MEAE), maximum absolute error (MAAE), and efficiency coefficient (R2). The values MSE = 0.0617 and R2 = 0.9721 for the PSO-ANN, in contrast with MSE = 0.0981 and R2 = 0.8986 for the BP-ANN, reveal that the PSO-ANN exhibits better performance in prediction of CGR than the BP-ANN (Figures 8 and 9). It should be noted here that an R2 value greater than 0.9 generally indicates a very satisfactory model performance, while an R2 magnitude in the range of 0.8−0.9 indicates a good performance and a value less than 0.8 indicates an unsatisfactory model performance, based on statistical analysis.94,95
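A minimal Python sketch of the normalization in eq 5 and its inverse (used to map network outputs back to STB/MMSCF) is given below; the CGR bounds are the Table 2 values, and the function names are assumptions.

CGR_MIN, CGR_MAX = 4.5, 293.0   # STB/MMSCF, from Table 2

def normalize_cgr(cgr):
    """Eq 5: map CGR into the interval [-1, 1]."""
    return 2.0 * (cgr - CGR_MIN) / (CGR_MAX - CGR_MIN) - 1.0

def denormalize_cgr(z):
    """Inverse of eq 5: map a network output back to STB/MMSCF."""
    return (z + 1.0) * (CGR_MAX - CGR_MIN) / 2.0 + CGR_MIN

print(normalize_cgr(31.8))                   # South Pars sample discussed later in the text
print(denormalize_cgr(normalize_cgr(31.8)))  # round-trips to 31.8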


The extent of the match between the measured and predicted CGR values obtained by the BP-ANN and PSO-ANN networks, in terms of a scatter diagram, is shown in Figures 8 and 9.
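Scatter (parity) comparisons such as those in Figures 8 and 9 can be reproduced for any model with a few lines of matplotlib; the sketch below is illustrative only, and the variable names and dummy data are assumptions.

import numpy as np
import matplotlib.pyplot as plt

def parity_plot(y_meas, y_pred, label):
    """Scatter of predicted vs measured normalized CGR with the 45-degree line."""
    y_meas, y_pred = np.asarray(y_meas), np.asarray(y_pred)
    r2 = np.corrcoef(y_meas, y_pred)[0, 1] ** 2
    plt.scatter(y_meas, y_pred, s=20, label=f"{label} (R2 = {r2:.3f})")
    lims = [min(y_meas.min(), y_pred.min()), max(y_meas.max(), y_pred.max())]
    plt.plot(lims, lims, "k--", linewidth=1)          # perfect-agreement line
    plt.xlabel("Measured normalized CGR")
    plt.ylabel("Predicted normalized CGR")
    plt.legend()

# Example with dummy testing data
rng = np.random.default_rng(1)
measured = rng.uniform(-1, 1, 45)
parity_plot(measured, measured + rng.normal(0, 0.05, 45), "PSO-ANN")
plt.show()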

Figure 8. Performance of the developed BP-ANN model based on R2: a) training and b) testing.

Figure 9. Performance of the developed PSO-ANN model based on R2: a) training and b) testing.

These results convey the message that the PSO-ANN has the capability to avoid being trapped in local minima. This advantage stems from combining the global searching ability of PSO with the local searching ability of BP (a minimal sketch of such a hybrid scheme is given below). The performance plots of the BP-ANN and the developed PSO-ANN models are depicted in Figures 10 and 11, respectively. The curves included in these figures present the testing, training, validation, and best-model behavior for predicting CGR in terms of MSE versus the number of epochs. As shown by the green circles in Figures 10 and 11, the best performance in the validation phase is MSE = 0.0022 at about epoch 3 for the BP-ANN model, whereas the PSO-ANN model reaches its best validation performance (MSE ≈ 0.0015) at epoch 13. To further examine the performance of the PSO-ANN model, the CGR values obtained from the correlation introduced by Fattah et al. (2009)38 are compared with those from the proposed PSO-ANN and BP-ANN models. Table 7 lists the MSE, MEAE, MAAE, E2, and R2 values for these different approaches in the validation phase; the performance of the PSO-ANN is better than that of the BP-ANN model and the empirical correlation. An ANN is a learning structure in which the network is trained with information about a certain process or system. In general, this training stage needs considerably more processing time than what is required to evaluate the system using a single simulation run or an empirical equation.
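The hybrid training idea is described above only qualitatively. The following minimal sketch (our own illustration on synthetic data, with hypothetical hyperparameters except for c1 = c2 = 2, the 50 generations, and the 3-8-1 layout taken from the text) shows one way a PSO global search over the network weights can be followed by gradient-based BP refinement of the best particle.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 3 scaled inputs (T, Pd, Mw) -> 1 scaled CGR target
X = rng.uniform(-1.0, 1.0, size=(250, 3))
t = np.tanh(X @ np.array([[0.5], [-0.8], [1.2]]))      # toy target, for illustration only

N_IN, N_HID, N_OUT = 3, 8, 1                           # the 3-8-1 layout used in the paper
N_W = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT     # 41 trainable parameters

def unpack(w):
    """Split a flat parameter vector into layer weights and biases."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    return W1, b1, W2, w[i:]

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    H = np.tanh(X @ W1 + b1)                           # hidden layer
    return H @ W2 + b2, H                              # linear output

def mse(w):
    y, _ = forward(w, X)
    return float(np.mean((y - t) ** 2))

# --- PSO: global search over the flattened weight vector ------------------
n_particles, inertia, c1, c2 = 30, 0.7, 2.0, 2.0       # c1 = c2 = 2 as in the paper
pos = rng.uniform(-1.0, 1.0, size=(n_particles, N_W))
vel = np.zeros_like(pos)
pbest, pbest_err = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_err)].copy()

for _ in range(50):                                    # 50 generations, as stated in the text
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    err = np.array([mse(p) for p in pos])
    better = err < pbest_err
    pbest[better], pbest_err[better] = pos[better], err[better]
    gbest = pbest[np.argmin(pbest_err)].copy()

# --- BP: local gradient refinement of the best particle -------------------
w, lr = gbest.copy(), 0.01                             # plain gradient steps (no momentum here)
for _ in range(500):
    y, H = forward(w, X)
    e = 2.0 * (y - t) / len(X)                         # dMSE/dy
    W1, b1, W2, b2 = unpack(w)
    gW2, gb2 = H.T @ e, e.sum(axis=0)
    dH = (e @ W2.T) * (1.0 - H ** 2)                   # tanh derivative
    gW1, gb1 = X.T @ dH, dH.sum(axis=0)
    w -= lr * np.concatenate([gW1.ravel(), gb1, gW2.ravel(), gb2])

print(f"MSE after PSO: {mse(gbest):.4f}; after PSO + BP refinement: {mse(w):.4f}")
```

The PSO stage explores the 41-dimensional weight space globally, and the BP stage then descends into the nearest minimum; this division of labor is what the text credits for avoiding poor local minima.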

Figure 10. Performance plot of the proposed BP-ANN model in terms of number of epochs.

In Table 8, the training time is recorded for both the 39-bus system and the 1844-bus system employed for the neural networks. For both systems, 250 sample data points were used to train the BP-ANN and PSO-ANN. The computations were conducted on an HP laptop with an Intel Pentium Dual CPU E2200 @ 2.8 GHz. Table 8 indicates that the practical system (1844-bus) needs more training time than the 39-bus system because more input data were considered in the practical system.


Figure 11. Performance plot of the proposed PSO-ANN model in terms of number of epochs.

Figure 12. Relative effects of important independent variables on CGR.

According to the sensitivity analysis (Figure 12), the mixture molecular weight is the most important parameter affecting the CGR: an increase in the molecular weight leads to an increase in the CGR value. This finding is in agreement with several previously published studies.33−38,41−85 In addition, the impact of dewpoint pressure on the CGR supports the validity of the empirical correlations available in the literature;33−38,41−85 an increase in dewpoint pressure results in an increase in the value of CGR. It should also be noted that as the reservoir temperature increases, the magnitude of the CGR increases. The CGR for a particular gas sample of the South Pars field was found to be 31.8 and 31.5 STB/MMSCF based on the laboratory work and the developed PSO-ANN model, respectively. The small difference between the experimental and modeling results indicates the capability and high accuracy of the proposed PSO-ANN model in predicting the CGR.

Table 8. CPU Time for the Neural Networks Employed in the Study

type of neural network    type of system      training CPU time (s)
BP-ANN                    39-bus system       67
BP-ANN                    1844-bus system     1924
PSO-ANN                   39-bus system       139
PSO-ANN                   1844-bus system     3895

Also, Table 8 shows that the training process for the BP-ANN is faster than that for the PSO-ANN model. The main benefit of applying computational intelligence technology in place of conventional experimental and numerical methods for calculating CGR is the possibility of real-time operation. An ANN has the ability to condense what it learns during the training process and generalize it to produce acceptable outputs for inputs not encountered during the training stage. Once trained, the ANN system can forecast the value of CGR for a given set of input data very quickly, because the computation in an ANN model does not involve the iteration required by numerical methods; the sketch below illustrates this single forward pass. Table 9 compares the computational speed of the BP-ANN and PSO-ANN techniques.
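To make this point concrete, the snippet below (random placeholder weights, not the trained model; the function name is ours) shows the entire inference computation for a 3-8-1 network: two small matrix products and a tanh, with no iteration.

```python
import numpy as np

# Hypothetical trained parameters for a 3-8-1 network (random here, for illustration only)
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(3, 8)), rng.normal(size=8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), rng.normal(size=1)   # hidden -> output

def predict_scaled_cgr(T, Pd, Mw):
    """One forward pass on scaled inputs: no iteration, just matrix algebra."""
    x = np.array([T, Pd, Mw], dtype=float)
    hidden = np.tanh(x @ W1 + b1)
    return (hidden @ W2 + b2).item()

# Scaled (dimensionless) inputs in [-1, 1]; the output is the scaled CGR
print(predict_scaled_cgr(0.1, -0.3, 0.7))
```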

Based on the results presented in Table 9, using the BP-ANN to estimate CGR is slightly faster than using the PSO-ANN; however, the PSO-ANN model exhibits higher accuracy in estimating CGR. A sensitivity analysis was then conducted for the newly developed model using the analysis of variance (ANOVA) technique.94,95 The dependence of the dependent variable (CGR) on each of the independent variables was examined, and the results of the sensitivity analysis are presented in Figure 12. A higher correlation between an input variable and the output variable indicates a greater significance of that variable on the magnitude of the dependent parameter.
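A rough, correlation-based approximation of such a sensitivity screen can be sketched as follows (an illustration on a handful of Table A1 rows only; this is not the authors' ANOVA procedure, and with so few points the numbers carry no statistical weight).

```python
import numpy as np

# A few (T, Pd, Mw, CGR) rows taken from Table A1, for illustration
data = np.array([
    [173, 5871, 18.13, 19.7],
    [253, 4524, 17.44,  8.2],
    [215, 5393, 21.27, 32.5],
    [261, 4345, 23.52, 24.8],
    [210, 5570, 25.23,  7.6],
    [203, 4154, 24.80, 33.6],
])
names = ["T", "Pd", "Mw"]
cgr = data[:, 3]

# Pearson correlation of each input with CGR, rescaled to relative effects
corr = np.array([abs(np.corrcoef(data[:, j], cgr)[0, 1]) for j in range(3)])
relative = corr / corr.sum()
for name, r in zip(names, relative):
    print(f"{name}: relative effect {r:.2f}")
```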

Table 9. Comparison of the Computation Time Cost for the BP-ANN and PSO-ANN Techniques

type of neural network    type of system      CPU time (s)
BP-ANN                    39-bus system       0.05
BP-ANN                    1844-bus system     0.26
PSO-ANN                   39-bus system       0.08
PSO-ANN                   1844-bus system     0.32

8. CONCLUSIONS
An artificial neural network intelligent system optimized with particle swarm optimization was developed and tested using the experimental data from the South Pars gas field and the literature to predict the condensate-to-gas ratio (CGR). Based on the results of this study, the following main conclusions can be drawn:
1- For a particular sample of the South Pars field, the CGR values obtained from the experimental work and the PSO-ANN are 31.8 and 31.5 STB/MMSCF, respectively; an acceptable agreement between these two determination approaches was observed.
2- Prediction of CGR was made possible using the PSO-ANN, the BP-ANN, and an empirical correlation. Comparison of the results indicates that the predictive performance of the proposed PSO-ANN is better than that of the BP-ANN model and the empirical correlation used in this paper.
3- For predicting the CGR, the PSO-ANN model has the capability to avoid being trapped in local optima owing to the combination of the global searching ability of PSO and the local searching ability of BP.
4- Finding an optimum structure for the PSO-ANN model is an important feature in CGR prediction. The optimum configuration for the proposed model includes 8 hidden neurons and acceleration constants (c1 and c2) equal to 2, determined using a trial and error technique.
5- A sensitivity analysis using the ANOVA technique showed that the molecular weight of the mixture has the greatest effect on the magnitude of the predicted CGR; also, as the temperature increases, the CGR value increases.



6- Although the proposed PSO-ANN model works well for CGR prediction, it can suffer from premature convergence caused by the degeneracy of several dimensions, even when no local optima exist for the cases considered with this model. Hence, combining a better evolutionary algorithm with the PSO and ANN is required, which is part of our future study.

APPENDIX A
A part of the experimental data employed in this study to develop the PSO-ANN model is listed in Table A1. To build the training network, a large volume of the CGR data available in the literature was used to obtain an appropriate neural network model.
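As a minimal sketch of how such a data set can be organized into training and hold-out subsets (the array below contains only the first five rows of Table A1; in the study itself, 250 training and 45 testing points were used):

```python
import numpy as np

# First five rows of Table A1: T (F), Pd (psi), Mw, CGR (STB/MMSCF)
data = np.array([
    [173, 5871, 18.13, 19.7],
    [253, 4524, 17.44,  8.2],
    [215, 5393, 21.27, 32.5],
    [261, 4345, 23.52, 24.8],
    [210, 5570, 25.23,  7.6],
])

rng = np.random.default_rng(42)
idx = rng.permutation(len(data))
n_train = 3                          # the study itself used 250 training and 45 testing points
train, test = data[idx[:n_train]], data[idx[n_train:]]
X_train, y_train = train[:, :3], train[:, 3]
X_test, y_test = test[:, :3], test[:, 3]
print(X_train.shape, X_test.shape)   # (3, 3) (2, 3)
```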



Table A1. Experimental Data for CGR from Various Research Studies

run no.   T (F)   Pd (psi)   Mw      CGR (STB/MMSCF)
1         173     5871       18.13   19.7
2         253     4524       17.44   8.2
3         215     5393       21.27   32.5
4         261     4345       23.52   24.8
5         210     5570       25.23   7.6
6         203     4154       24.80   33.6
7         281     5228       23.98   5.6
8         250     5962       21.46   5.8
9         214     4564       23.94   36.1
10        177     4276       26.56   26.5
11        252     3589       21.81   16.6
12        240     3948       19.70   20.6
13        174     3292       24.79   28.9
14        179     4108       18.98   6.1
15        281     3968       18.96   27.7
16        224     4005       21.38   26.9
17        168     5471       23.00   19.2
18        171     4590       22.96   23.1
19        182     4558       17.79   15.4
20        270     5339       17.80   10.0
21        220     3321       19.49   17.3
22        231     4880       24.23   33.9
23        223     4019       20.82   11.7
24        173     3982       25.20   16.1
25        271     4723       26.32   14.9
26        199     5615       20.69   7.2
27        197     3563       17.05   20.9
28        235     5025       22.50   31.2
29        221     5732       24.35   33.4
30        208     3564       19.86   27.7
31        242     3870       21.68   16.8
32        276     4482       18.83   14.0
33        238     5695       26.36   26.6
34        198     4497       21.56   32.7
35        189     3805       22.17   34.1
36        217     4597       18.82   7.1
37        168     4722       17.51   19.9
38        268     4224       17.13   8.6
39        237     2996       25.10   12.6
40        189     5118       24.60   32.8
41        226     4513       18.55   10.8
42        171     4126       25.90   8.4
43        267     3150       23.69   21.9
44        217     4056       23.40   14.6
45        230     3673       26.90   16.7
46        232     3578       16.85   29.8
47        245     5461       20.95   31.4
48        209     4164       21.64   26.3
49        174     3115       22.63   17.8
50        219     4107       17.37   25.1
51        171     5341       23.43   11.4
52        252     3460       17.05   24.3
53        169     5764       22.35   26.1
54        278     3935       23.98   23.7
55        253     3967       21.62   15.6

APPENDIX B
R2, E2, MSE, MEAE, and MAAE are the statistical parameters used to test the accuracy of the PSO-ANN model against common correlations. The equations used to compute these parameters, together with their descriptions, are as follows.

R2 is a statistical parameter that gives information on the goodness of fit of a model. In regression analysis or model fitting, the R2 coefficient is a statistical measure of how well the model approximates the real data points; an R2 of 1.0 indicates that the model fits the data perfectly. The mathematical definition of R2 is:94,95

$$R^{2} = \frac{\sum_{i=1}^{n}\left(\mathrm{CGR}_{i}^{P} - \overline{\mathrm{CGR}}^{M}\right)^{2}}{\sum_{i=1}^{n}\left(\mathrm{CGR}_{i}^{M} - \overline{\mathrm{CGR}}^{M}\right)^{2}} \qquad (B1)$$

where CGR^M and CGR^P are the measured CGR and predicted CGR, respectively, and $\overline{\mathrm{CGR}}^{M}$ represents the average of the measured CGR data.

The Nash−Sutcliffe model efficiency coefficient (E2) can be employed to evaluate the predictive power of statistical and intelligent models. It is given as:94,95

$$E_{2} = 1 - \frac{\sum_{i=1}^{n}\left(\mathrm{CGR}_{i}^{M} - \mathrm{CGR}_{i}^{P}\right)^{2}}{\sum_{i=1}^{n}\left(\mathrm{CGR}_{i}^{M} - \overline{\mathrm{CGR}}^{M}\right)^{2}} \qquad (B2)$$

The mean squared error (MSE) is a measure of how close a fitted line or developed model is to the data points; the smaller the MSE, the closer the fit (or model) is to the actual data. The MSE has the squared units of whatever is plotted on the vertical axis and is described by the following relationship:94,95

$$\mathrm{MSE} = \frac{\sum_{i=1}^{n}\left(\mathrm{CGR}_{i}^{M} - \mathrm{CGR}_{i}^{P}\right)^{2}}{n} \qquad (B3)$$

where n is the number of samples. The other two performance measures used in this paper to assess the effectiveness of the training and testing data are the mean absolute error (MEAE) and the maximum absolute error (MAAE):94,95

$$\mathrm{MEAE}\,(\%) = \frac{1}{n}\sum_{i=1}^{n}\frac{\left|\mathrm{CGR}_{i}^{P} - \mathrm{CGR}_{i}^{M}\right|}{\mathrm{CGR}_{i}^{M}} \times 100 \qquad (B4)$$

$$\mathrm{MAAE}\,(\%) = \max_{i}\frac{\left|\mathrm{CGR}_{i}^{P} - \mathrm{CGR}_{i}^{M}\right|}{\mathrm{CGR}_{i}^{M}} \times 100 \qquad (B5)$$
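The five Appendix B statistics can be computed directly from paired measured and predicted CGR arrays; the helper below is a straightforward transcription of eqs B1−B5 (our own illustration, with made-up numbers in the example).

```python
import numpy as np

def cgr_statistics(measured, predicted):
    """Compute R2 (B1), E2 (B2), MSE (B3), MEAE% (B4), and MAAE% (B5)."""
    m = np.asarray(measured, dtype=float)
    p = np.asarray(predicted, dtype=float)
    m_bar = m.mean()
    r2 = np.sum((p - m_bar) ** 2) / np.sum((m - m_bar) ** 2)        # eq B1
    e2 = 1.0 - np.sum((m - p) ** 2) / np.sum((m - m_bar) ** 2)      # eq B2
    mse = np.mean((m - p) ** 2)                                     # eq B3
    meae = 100.0 * np.mean(np.abs(p - m) / m)                       # eq B4
    maae = 100.0 * np.max(np.abs(p - m) / m)                        # eq B5
    return {"R2": r2, "E2": e2, "MSE": mse, "MEAE%": meae, "MAAE%": maae}

# Illustrative CGR values (STB/MMSCF)
measured = [19.7, 8.2, 32.5, 24.8, 7.6]
predicted = [20.1, 8.0, 31.4, 25.6, 7.9]
print(cgr_statistics(measured, predicted))
```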



AUTHOR INFORMATION

Corresponding Author

*Phone: 519-888-4567 ext. 36157. E-mail: szendehb@uwaterloo.ca. Corresponding author address: Chemical Engineering Department, University of Waterloo, Waterloo, Ontario, Canada, N2L 3G1.



Notes
The authors declare no competing financial interest.

REFERENCES

(1) Fasesan S. O.; Olukink O. O.; Adewumi O. O. Characteristics of Gas Condensate. Pet. Sci. Technol. 2003, 21 (1−2), 81−90. (2) Bozorgzadeh, M.; Gringarten A. C. Condensate-Bank Characterization from Well-Test Data and Fluid PVT Properties. SPE Reservoir Eval. Eng. 2006, 9 (5), 596−611. (3) Fevang, O. Gas Condensate Flow Behaviour and Sampling. PhD Thesis, University of Trondheim, 1995. (4) Moses, P. L.; Donohoe, C. W. Gas Condensate Reservoirs. Petroleum Engineering handbook; SPE Publications: 1962; Vol. 39, pp 1−28. (5) Hosein, R.; Dawe, R. Characterization of Trinidad Gas Condensates. Pet. Sci. Technol. 2011, 29 (8), 817−823. (6) Babalola, F. U.; Susu, A. A.; Olaberinjo, F. A. Gas Condensate Reservoir Modeling, Part I: From Deterministics to Stochastics. Pet. Sci. Technol. 2009, 27 (10), 1007−1013. (7) Chowdhury, N.; Sharma, R.; Pope, G. A.; Sepehrnoori, K. A Semianalytical Method to Predict Well Deliverability in GasCondensate Reservoirs. SPE Reservoir Eval. Eng. 2008, 11 (1), 175− 185. (8) Vo, D. T.; Jones, J. R.; Raghavan, R. Performance Predictions for Gas-Condensate Reservoirs. SPE Form. Eval. 1989, 4 (4), 576−584. (9) Sadeghi Boogar, A.; Masihi, M. New Technique for Calculation of Well Deliverability in Gas Condensate Reservoirs. J. Nat. Gas Sci. Eng. 2010, 2(1), 29−35. (10) Mott, R. Engineering Calculation of Gas Condensate Well Productivity. SPE 77551, SPE Annual Technical Conference and Exhibition, San Antonio, USA, October 2002. (11) Thomas, F. B.; Andersen, G.; Bennion, D. B. Gas Condensate Reservoir Performance. J. Can. Pet. Technol. 2009, 48 (7), 18−24. (12) Li, Y.; Li, B.; Hu, Y.; Jiao, Y.; Zhu, W.; Xiao, X.; Niu, Y. Water Production Analysis and Reservoir Simulation of the Jilake Gas Condensate Field. Pet. Explor. Dev. 2010, 37 (1), 89−93. (13) Hazim, A. Performance of Wellhead Chokes during Sub-Critical Flow of Gas Condensates. J. Pet. Sci. Eng. 2008, 60 (3−4), 205−212. (14) Zarenezhad, B.; Aminian, A. An Artificial Neural Network Model for Design of Wellhead Chokes in Gas Condensate Production Fields. Pet. Sci. Technol. 2011, 29 (6), 579−587. (15) Alavi, F. S.; Mowla, D.; Esmaeilzadeh F. Production Performance Analysis of Sarkhoon Gas Condensate Reservoir. J. Pet. Sci. Eng. 2010, 75 (1−2), 45−53. (16) Pedersen, K. S.; Fredenslund, A.; Thomassen, P. Properties of Oils and Natural Gases; Gulf Publishing Company: Houston, TX, USA, 1989. (17) Whitson, C. H.; Brule, M. Phase Behavior; Monograph Series. SPE: Richardson, TX, USA, 2000. (18) Humoud, A. A.; Al-Marhoun, M. A. A New Correlation for GasCondensate Dew Point Pressure Prediction. SPE Middle East Oil Show, Bahrain, 17−20 March 2001. (19) Al-Dhamen, M.; Al-Marhoun, M. New Correlations for DewPoint Pressure for Gas Condensate. SPE Saudi Arabia Section Young Professionals Technical Symposium, Dhahran, Saudi Arabia, 14−16 March 2011. (20) Karbalaee Akbari, M.; Jalali Farahani, F.; Abdy, Y. Using Artificial Neural Network’s Capability for Estimation of Gas Condensate Reservoir’s Dew point Pressure. EUROPEC/EAGE Conference and Exhibition, London, UK, 11−14 June 2007. (21) Shokir, E. M. Dewpoint Pressure Model for Gas Condensate Reservoirs Based on Genetic Programming. Energy Fuels 2008, 22 (5), 3194−3200. (22) Young, L. C.; Stephenson, R.E. A Generalized Compositional Approach for Reservoir Simulation. SPE J. 1983, 23 (5), 727−742.




ACKNOWLEDGMENTS The authors would like to thank Ali Shafiei, University of Waterloo, for his help and detailed technical attention in editing the manuscript.




NOMENCLATURE

Acronyms

AI = artificial intelligence
ANN = artificial neural network
BP = back propagation
CCE = constant composition expansion
CGR = condensate-to-gas ratio
CVD = constant volume depletion
EOS = equation of state
FL = fuzzy logic
GA = genetic algorithm
MAAE = maximum absolute error
MEAE = mean absolute error
MSE = mean squared error
Mw = molecular weight
NN = neural network
PSO = particle swarm optimization
P-T = pressure-temperature
PVT = pressure-volume-temperature

Variables

Tj(k) = actual output
Yj(k) = expected output
Pti = best position found by particle i up to iteration t
Ptg = global best position in the swarm up to iteration t
Vti = velocity vector at iteration t
c1, c2 = trust parameters
Ci = hydrocarbon component (e.g., i = 1 refers to methane)
E2 = Nash-Sutcliffe coefficient
G = number of training samples
k = permeability (mD)
m = number of output nodes
n = number of samples
Mw = molecular weight
Pd = dewpoint pressure
r1, r2 = random numbers
R2 = coefficient of determination
T = reservoir temperature
Xi = position of the ith particle
zi = overall composition of component i

Greek Letters

ω = inertia weight
ϕ = porosity

Subscripts

d = dewpoint
i = particle i
max = maximum
min = minimum

Superscripts

M = measured
net = network
P = predicted


Volumetric Balance. SPE-104307SPE Eastern Regional Meeting, Canton, Ohio, USA, 11−13 October 2006. (46) Firoozabadi, A.; Hekim, Y.; Katz, D. L. Reservoir Depletion Calculations for Gas Condensates Using Extended Analyses in the PengRobinson Equation of State. Can. J. Chem. Eng. 1978, 56(5), 610−615. (47) Sloan, J. P. Phase Behaviour of Natural Gas and Condensate Systems. Pet. Eq. 1950, 22 (2), 54−64. (48) Boom, W.; Wit, K.; Zeelenberg, J. P. W.; Weeda, H. C.; Maas, J. G. On the Use of Model Experiments for Assessing Improved GasCondensate Mobility Under Near-Wellbore Flow Conditions. SPE 36714, SPE Annual Technical Conference and Exhibition, Denver, Colorado, USA, 6−9 October 1996. (49) van de Leemput, L. E. C.; Bertram, D. A.; Bentley, M. R.; Gelling, R. Full-Field Reservoir Modeling of Central Oman GasCondensate Fields. SPE Reservoir Eng. 1996, 11(4), 252−259. (50) Boom, W.; Wit, K.; Schulte, A. M.; Oedai, S.; Zeelenberg, J. P. W.; Maas, J. G. Experimental Evidence for Improved Condensate Mobility at Near-wellbore Flow Conditions. SPE Annual Technical Conference and Exhibition, Dallas, Texas, USA, 22−25 October 1995. (51) Katz, D. L.; Herzog, R. A.; Hekim, Y. Predicting Yield of Revaporized Condensate in Gas Storage. J. Pet. Technol. 1983, 35 (6), 1173−1175. (52) Cipolla, C. L.; Lolon, E. P.; Mayerhofer, M. J. Reservoir Modeling and Production Evaluation in Shale-Gas Reservoirs. International Petroleum Technology Conference, Doha, Qatar, 7−9 December 2009. (53) Barker, J. W.; Leibovici, C. F. Delumping Compositional Reservoir Simulation Results: Theory and Applications. SPE 51896, SPE Reservoir Simulation Symposium, Houston, Texas, USA, 14−17 February 1999. (54) Leibovici, C. F.; Barker, J. W.; Wache, D. Method for Delumping the Results of Compositional Reservoir Simulation. SPE J. 2000, 5 (2), 227−235. (55) Stenby, E. H.; Christensen, J. R.; Knudsen, K.; Leibovici, C. F. Application of a Delumping Procedure to Compositional Reservoir Simulations. SPE 36744, SPE Annual Technical Conference and Exhibition, Denver, Colorado, USA, 6−9 October 1996. (56) Weisenborn, A. J.; Schulte, A. M. Compositional Integrated Subsurface-Surface Modeling. SPE 65158, SPE European Petroleum Conference, Paris, France, 24−25 October 2000. (57) Elsharkawy, A. M.; Foda, S. G. EOS Simulation and GRNN Modeling of the Behavior of Retrograde-Gas Condensate Reservoirs. SPE 38855, SPE Annual Technical Conference and Exhibition, San Antonio, TX, USA, 5−8 October 1997. (58) Hueni, G. B.; Hillyer, M. G. The Kurunda Field Tirrawarra Reservoir: A Case Study of Retrograde Gas-Condensate Production Behavior. SPE 22963, SPE Asia-Pacif ic Conference, Perth, Australia, 4−7 November 1991. (59) Morel, D. C.; Nectoux, A.; Danquigny, J. Experimental Determination of the Mobility of Hydrocarbon Liquids during Gas Condensate Reservoir Depletion: Three Actual Cases. SPE 38922, SPE Annual Technical Conference and Exhibition, San Antonio, TX, USA, 5− 8 October 1997. (60) Saleh, A. M.; Stewart, G. Interpretation of Gas Condensate Well Tests with Field Examples. SPE 24719-MS, SPE Annual Technical Conference and Exhibition, Washington, D.C., USA, 4−7 October 1992. (61) Ping, G.; Yuanzhong, C.; Liping, L.; Jianfen, D.; Hong, L.; Jianyi, L.; Shilun, L.; Chang, S. Experimental Studies on Injecting Wastewater to Improve the Recovery of Abandon Condensate Gas Reservoir. SPE 80517-MS, SPE Asia Pacif ic Oil and Gas Conference and Exhibition, Jakarta, Indonesia, 9−11 September 2003. (62) Briones, M.; Zambrano, J. A.; Zerpa, C. 
Study of Gas-Condensate Well Productivity in Santa Barbara Field, Venezuela, by Well Test Analysis. SPE 77538-MS, SPE Annual Technical Conference and Exhibition, San Antonio, TX, USA, 29 September−2 October 2002. (63) Bazzara, H.; Biasotti, H.; Romero, A.; Crovetto, M.; Pinheiro, C. Reducing Swelling Clay Issues: Drilling with WBM Cuts Trip Time by 66%, Chihuido de La Salina Field, Argentina. SPE 115765-MS, SPE Annual Technical Conference and Exhibition, Denver, Colorado, USA, 21−24 September 2008.

(23) Luo, K.; Li, Sh.; Zheng, X.; Chen, G.; Liu, N.; Sun, W. Experimental Investigation into Revaporization of Retrograde Condensate. SPE Production and Operations Symposium, Oklahoma City, Oklahoma, USA, 24−27 March 2001. (24) Jokhio, S. A.; Tiab, D.; Escobar, F. Forecasting Liquid Condensate and Water Production in Two-Phase and Three-Phase Gas Condensate Systems. SPE Annual Technical Conference and Exhibition, San Antonio, TX, USA, 29 September−2 October 2002. (25) Fevang, Ø.; Whitson, C. H. Modeling Gas-Condensate Well Deliverability. SPE Reservoir Eng. 1996, 11 (4), 221−230. (26) Al-Farhan, A.; Ayala, H.; Luis F. Optimization of Surface Condensate Production from Natural Gases Using Artificial Intelligence. J. Pet. Sci. Eng. 2006, 53 (1−2), 135−147. (27) Qul, X.; Feng, J.; Sun, W. Parallel Genetic Algorithm Model Based on AHP and Neural Networks for Enterprise Comprehensive Business. Intelligent Information Hiding and Multimedia Signal Processing, Harbin, 15−17 August 2008. (28) Reed, R. Pruning Algorithms-a Survey. IEEE Trans. Neural Networks 1993, 4, 740−747. (29) Tang, P.; Xi, Z. The Research on BP Neural Network Model Based on Guaranteed Convergence Particle Swarm Optimization. Second International Symposium on Intelligent Information Technology Application, IITA ’08, Shanghai, China, 20−22 December 2008. (30) Hornik, K.; Stinchcombe, M.; White, H. Universal Approximation of an Unknown Mapping and its Derivatives Using multilayer Feed Forward Networks. Neural Networks 1990, 3(5), 551−60. (31) Garcia-Pedrajas, N.; Hervas-Martinez, C.; Munoz-Perez, J. COVNET: A Cooperative Co- Evolutionary Model for Evolving Artificial Neural Networks. IEEE Trans. Neural Networks 2003, 14, 575−596. (32) Hornick, K.; Stinchcombe, M.; White, H. Multilayer Feed Forward Networks Are Universal Approximators. Neural Networks 1989, 2, 359−366. (33) England W. A. Empirical Correlations to Predict Gas/Gas Condensate Phase Behavior in Sedimentary Basins. Org. Chem. 2002, 33 (6), 665−673. (34) El-Banbi, A. H.; Forrest, J. K.; Fan, L.; McCain, W. D. Producing Rich-Gas-Condensate Reservoirs-Case History and Comparison between Compositional and Modified Black-Oil Approaches. SPE 58988, SPE Fourth International Petroleum Conference and Exhibition, Villahermosa, Mexico, 1−3 February 2000. (35) El-Fattah, K. A. Volatile Oil and Gas Condensate Fluid Behavior for Material Balance Calculations and Reservoir Simulation. Ph.D. Thesis, Cairo University, 2005. (36) McVay, D. A. Generation of PVT Properties for Modified Black-Oil Simulation of Volatile Oil and Gas Condensate Reservoirs. Ph.D. Thesis, Texas A&M University, 1994. (37) Walsh, M. P.; Towler, B. F. Method Computes PVT Properties for Gas Condensate. Oil Gas J. July 1994, 83−86. (38) Fattah, K. A.; El-Banbi, A.; Sayyouh, M. H. New Correlations Calculate Volatile Oil, Gas Condensate PVT Properties. Oil Gas J. May 2009, 45−53. (39) Hagan, M. T.; Demuth, H. B.; Beal, M. Neural Network Design; PWS Publishing Company: Boston, 1966. (40) Brown, M.; Harris, C. Neural Fuzzy Adaptive Modeling and Control; Prentice-Hall: Englewood Cliffs, NJ, 1994. (41) Roussennac, B. Gas Condensate Well Test Analysis. M.Sc. Thesis, Stanford University, 2001; p 121. (42) Dake, L. P. The Practice of Reservoir Engineering; Elsevier: ISBN 13: 978-0-444-50671-9, Revised Version, 2001. (43) Moses, P. L.; Donohoe, C. W. Gas Condensate Reservoirs. SPE Handbook; 1962; Chapter 39. (44) Cho, S. J.; Civan, F.; Starling, K. E. 
A Correlation to Predict Maximum Condensation for Retrograde Condensation Fluids and its Use in PressureDepletion Calculations. SPE 14268, SPE Annual Technical Conference and Exhibition, Las Vegas, Nevada, USA, 22−26 September 1985. (45) Olaberinjo, A. F.; Oyewola, M. O.; Adeyanju, O. A.; Alli, O. A.; Obiyemi, A. D.; Ajala, S. O. KPIM of Gas/Condensate Productivity: Prediction of Condensate/Gas Ratio (CGR) Using Reservoir 3446


(64) Kleinsteiber, S. W.; Wendschlag, D. D.; Calvin, J. W. A Study for Development of a Plan of Depletion in a Rich Gas Condensate Reservoir: Anschutz Ranch East Unit, Summit County, Utah, Uinta County, Wyoming. SPE 12042, SPE Annual Technical Conference and Exhibition, San Francisco, California, 5−8 October 1983. (65) Kenyon, D.; Behie, G. A. Third SPE Comparative Solution Project: Gas Cycling of Retrograde Condensate Reservoirs. J. Pet. Technol. 1987, 39(8), 981−997. (66) Fang,Y.; Li, B.; Hu, Y.; Sun, Z.; Zhu, Y. Condensate Gas Phase Behavior and Development. SPE International Oil and Gas Conference and Exhibition, Beijing, China, 2−6 November 1998. (67) Siregar, S.; Hagoort, J.; Ronde, H. Nitrogen Injection vs. Gas Cycling in Rich Retrograde Condensate-Gas Reservoirs. International Meeting on Petroleum Engineering, Beijing, China, 24−27 March 1992. (68) Al-Anazi, H. A.; Solares, J. R.; Al-Faifi, M. G. The Impact of Condensate Blockage and Completion Fluids on Gas Productivity in Gas-Condensate Reservoirs. SPE Asia Pacif ic Oil and Gas Conference and Exhibition, Jakarta, Indonesia, 5−7 April 2005. (69) Moradi, B.; Tangsirifard, J.; Rasaei, M. R.; Momeni Maklavani, A.; Bagheri, M. B. Effect of Gas Recycling on the Enhancement of Condensate Recovery in an Iranian Fractured Gas Condensate Reservoir. Trinidad and Tobago Energy Resources Conference, Port of Spain, Trinidad, 27−30 June 2010. (70) Eikeland, K. M.; Helga, H. Dry Gas Reinjection in a Strong Waterdrive Gas-Condensate Field Increases Condensate RecoveryCase Study: The Sleipner Øst Ty Field, South Viking Graben, Norwegian North Sea. SPE Reservoir Eval. Eng. 2009, 12(2), 281−296. (71) Tan, T. B. Oil-in-Place of Gas Condensate Reservoirs Discovered at Dewpoint Conditions. SPE Annual Technical Meeting, Calgary, Alberta, Canada, 8−10 June 1998. (72) Wheaton, R. J.; Zhang, H. R. Condensate Banking Dynamics in Gas Condensate Fields: Compositional Changes and Condensate Accumulation around Production Wells. SPE Annual Technical Conference and Exhibition, Dallas, TX, USA, 1−4 October 2000. (73) Ibrahim, M. Accurate Production Allocation Method for Condensate Gas Reservoir. Canadian International Petroleum Conference, Calgary, Alberta, Canada, 17−19 June 2008. (74) Salam, E. A.; Mucharam, L.; Abdassah, D.; Parhusip, B.; Panjaitan, P. R. Modification of the CVD Test Approach and its Application for Predicting Gas Deliverability from Gas-Condensate Reservoir of Talang Akar Formation in Indonesia. SPE Gas Technology Symposium, Calgary, Alberta, Canada, 28 April−1 May 1996. (75) Luo, K.; Li, Sh.; Zheng, X.; Chen, G.; Dai, Z.; Liu, N. Experimental Investigation into Revaporization of Retrograde Condensate by Lean Gas Injection. SPE Asia Pacif ic Oil and Gas Conference and Exhibition, Jakarta, Indonesia, 17−19 April 2001. (76) Barnum, R. S.; Brinkman F. P.; Richardson, T. W.; Spillette, A. G. Gas Condensate Reservoir Behaviour: Productivity and Recovery Reduction Due to Condensation. SPE Annual Technical Conference and Exhibition, Dallas, TX, USA, 22−25 October 1995. (77) Schebetov, A.; Rimoldi, A.; Piana, M. Quality Check of GasCondensate PVT Studies and EOS Modelling under Input Data Uncertainty. SPE Russian Oil and Gas Conference and Exhibition, Moscow, Russia, 26−28 October 2010. (78) Al-Shaidi, S. M.; Perriman B. J.; Vogel, K. Managing Uncertainties in Oman Liquified Natural Gas Gas-Condensate Reservoir Development. SPE Annual Technical Conference and Exhibition, Dallas, TX, USA, 1−4 October 2000. (79) Al-Honi, M.; Zekri, A. Y. 
Determination of Recovery and Relative Permeability for Gas Condensate Reservoirs. Abu Dhabi International Conference and Exhibition, Abu Dhabi, United Arab Emirates, 10−13 October 2004. (80) App, J. F.; Burger, J. E. Experimental Determination of Relative Permeabilities for a Rich Gas Condensate System Using Live Fluid. SPE Reservoir Eval. Eng. 2009, 12(2), 263−269. (81) Sutton, R. P. Fundamental PVT Calculations for Associated and Gas Condensate Natural-Gas Systems. SPE Reservoir Eval. Eng. 2007, 10 (3), 270−284.

(82) Tarek, A.; Evans, J.; Reggie, K.; Tom, V. Wellbore Liquid Blockage in Gas-Condensate Reservoirs. SPE Eastern Regional Meeting, Pittsburgh, Pennsylvania, USA, 9−11 November 1998. (83) Carlson, M. R.; Cawston, W. B. Obtaining PVT Data for Very Sour Retrograde Condensate Gas and Volatile Oil Reservoirs: A Multidisciplinary Approach. SPE Gas Technology Symposium, Calgary, Alberta, Canada, 28 April−1 May 1996. (84) Chen, H. L.; Wilson, S. D.; Monger-McClure, T. G. Determination of Relative Permeability and Recovery for North Sea Gas-Condensate Reservoirs. SPE Reservoir Eval. Eng. 1999, 2(4), 393−402. (85) Shi, C.; Horne, R. N. Improved Recovery in Gas-Condensate Reservoirs Considering Compositional Variations. SPE Annual Technical Conference and Exhibition, Denver, Colorado, USA, 21−24 September 2008. (86) McCulloch, W.; Pitts, W. A Logical Calculus of the Ideas Immanent in Nervous Activity. Bull. Math. Biophys. 1943, 7, 115−133. (87) Bain, A. Mind and Body: The Theories of Their Relation; D. Appleton and Company: New York, 1873. (88) James, W. The Principles of Psychology; H. Holt and Company: New York, 1890. (89) Vallés, H. R. A Neural Networks Method to Predict Activity Coefficients for Binary Systems Based on Molecular Functional Group Contribution, M.Sc. Thesis, University of Puerto Rico, 2006. (90) Eberhart, R. C.; Kennedy, J. A New Optimizer Using Particle Swarm Theory. Proc. 6th Symp. Micro Machine and Human Science, 1995; 39−43. (91) Eberhart, R. C.; Simpson, P. K.; Dobbins, R. W. Computational Intelligence PC Tools; Academic Press Professional: Boston, 1996. (92) Coulibaly, P.; Baldwin, C. K. Nonstationary Hydrological Time Series Forecasting Using Nonlinear Dynamic Methods. J. Hydrol. 2005, 174−307. (93) Shi, Y.; Eberhart, R. C. A Modified Particle Swarm Optimizer. Proceedings of IEEE International Conference on Evolutionary Computation 1988, 69−73. (94) Montgomery, D. C.; Runger, G. C. Applied Statistics and Probability for Engineers, Student Solutions Manual; John Wiley and Sons: New York, 2006. (95) Montgomery, D. C. Introduction to Statistical Quality Control; John Wiley and Sons: New York, 2008.
