
Discrete Modeling: Thermodynamics Based on Shannon Entropy and Discrete States of Molecules

Martin Pfleger,*,† Thomas Wallek,† and Andreas Pfennig†,‡

† Institute of Chemical Engineering and Environmental Technology, NAWI Graz, Graz University of Technology, Inffeldgasse 25/C/I, 8010 Graz, Austria
‡ Laboratoire de Génie Chimique, Université de Liège, 4000 Liège, Belgium

ABSTRACT: Thermodynamic modeling using the concept of Shannon entropy is a promising approach, especially in the field of describing fluid-phase behavior. This paper introduces the method of discrete modeling, using the ideal-gas model as an illustrative example, and derives a general equation of state. Discrete modeling is based on discrete states of individual molecules. It utilizes the special characteristics of Shannon entropy to model the statistical behavior of systems by applying the maximum entropy principle to their constituents in a straightforward manner. The presented method, and the general form of the equation of state thus obtained, allow the derivation of equations of state for real fluids. As a novelty, the method also describes the microscopic distribution of the mechanical states of individual molecules. For the kinetic states of the particles, this includes the Maxwell−Boltzmann distribution, the caloric equation of state, and the heat capacity of the ideal gas.

1. INTRODUCTION

The concept of Shannon information, originally developed for optimizing the transmission of communications,1 has successfully been applied in various areas of science, such as statistical mechanics,2 operations research,3 statistics, economics, and urban planning.4,5 Quite recently, efforts were made to link the concept of Shannon information with thermodynamic entropy and to establish the equality of information and entropy.6,7 Little has been published, however, beyond basic analogies between the two concepts.8 In particular, Shannon information has not yet been included in thermodynamic modeling, although it allows for a clear and unambiguous interpretation of thermodynamic entropy, beyond the scope of concepts used until now.9 The aim of this paper is to illustrate that thermodynamic modeling based upon Shannon information, labeled "discrete modeling", is a promising approach that provides more detailed molecular information that can be included in thermodynamic models. This opens up a diverse range of applications in modeling fluid-phase behavior.

An illustration of the concept of discrete modeling begins with Gibbs' fundamental equation for systems in thermodynamic equilibrium

S = f(N, U, V)   (1)

where S is the thermodynamic entropy, N is the particle number, U is the internal energy, and V is the system volume. Throughout this paper, extensive system variables are used, which are indicated by capital letters. Equation 1 assumes that entropy is a function, not otherwise specified, that depends on the three extensive system variables. However, given the explicit function

S = S(N, U, V)   (2)

and by applying the well-known definitions

\left( \frac{\partial S(N,U,V)}{\partial U} \right)_{V,N} \equiv \frac{1}{T}   (3)

\left( \frac{\partial S(N,U,V)}{\partial V} \right)_{U,N} \equiv \frac{P}{T}   (4)

where P is the pressure and T the temperature, one can derive the caloric and thermal equations of state. Combining eqs 3 and 1 yields the caloric equation of state in the form

F(N, U, V, T) = 0   (5)

By an additional application of eq 4, the thermal equation of state can be derived:

F(N, P, V, T) = 0   (6)

For systems not in thermodynamic equilibrium, entropy may vary, even for the same values of N, U, and V. However, and this is one of the main statements of the second law of thermodynamics, every system tends toward an equilibrium state, which is characterized by a maximum value of entropy with respect to the constraints N, U, and V. Hence, for systems in thermodynamic equilibrium, such as those exclusively considered in this work, entropy is indeed fully defined by eq 2. It is important to stress this seemingly obvious fact, because this maximum entropy principle will be applied to derive the desired entropy function.

The main idea of deriving the entropy function, eq 2, is to express the four functions S, N, U, and V with a common set of microscopic variables xi:

S = S(x_i)   (7)
N = N(x_i)   (8)
U = U(x_i)   (9)
V = V(x_i)   (10)

In compliance with the maximum entropy principle, the variables xi must be determined in a way that maximizes the entropy, eq 7, while considering the constraints, eqs 8, 9, and 10. Applying Lagrange's method of undetermined multipliers results in

S = S(N(x_{max,i}), U(x_{max,i}), V(x_{max,i}))

where xmax,i denotes the values of the microscopic variables that maximize the entropy. The crucial point is to find a set of variables xi that are capable of representing the thermodynamic functions. This can be achieved by introducing the concept of Shannon entropy to express the thermodynamic entropy by means of occupation numbers. In this way, the explicit entropy function and the desired equation of state, as well as the set of xmax,i, are obtained, providing detailed knowledge about the microscopic structure of the considered system. The set of the xmax,i does not define a specific microstate, that is, detailed knowledge about each particle's state, but refers to the occupation numbers of the μ-space, indicating the number of particles in each of their possible states.

In this paper, the relevant properties of Shannon entropy are reviewed in section 2, the concept of Shannon entropy is applied to the thermodynamic entropy of multiparticle systems in section 3, and the required system of equations is presented and discussed in section 4. An essential result is that a very general formulation of an equation of state can be derived. Section 5 introduces the appropriate occupation numbers for the kinetic system. The extremization yields the Maxwell−Boltzmann distribution, the caloric equation of state, and the heat capacity of an ideal gas. The explicit evaluation steps taken using Lagrange's method are provided in the Supporting Information. Section 6 introduces the occupation numbers and performs the extremization of the potential system, resulting in the ideal-gas equation. Section 7 summarizes the method of discrete modeling and draws conclusions about applying this approach to describe nonideal fluid-phase behavior.

2. SHANNON ENTROPY

2.1. Basic Properties. Shannon1 defines the amount of information inherent in a system residing in either of m possible states Ai, where 1 ≤ i ≤ m, by

H(p_1, p_2, \ldots, p_i, \ldots, p_m) = -K \sum_{i=1}^{m} p_i \log p_i

in which pi designates the system's probability to reside in state Ai. These probabilities obey the normalization condition

\sum_{i=1}^{m} p_i = 1   (11)

The arbitrary constant K and the basis of the logarithm account for the scaling of H. Many authors have discussed the basic properties of this measure and its relation to thermodynamic entropy and statistical mechanics.1,2,6,7,10−20 Throughout this paper, the constant K has been set to K = 1 and the natural logarithm has been chosen. The summation must always be performed over all possible states, and H can formally be written as a function of the probability distribution:

\bar{p} = \{p_i\}, \quad i = 1 \ldots m   (12)

H(\bar{p}) = -\sum_i p_i \ln p_i   (13)

In this paper, H as defined in eq 13 is termed the Shannon entropy of the system. The range of H(\bar{p}) is given by 0 ≤ H(\bar{p}) ≤ ln m. The zero value occurs for distributions where one of the pi equals 1 and, because of eq 11, all other pi are zero. The maximum value results for uniformly distributed states:

p_i = \frac{1}{m}, \quad i = 1 \ldots m \;\Rightarrow\; H(\bar{p}) = \ln m   (14)
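As a quick numerical illustration of eqs 13 and 14 (an added sketch, not part of the original derivation; the example distributions are arbitrary), the following Python snippet evaluates H(p̄) for three six-state distributions and confirms the bounds 0 ≤ H(p̄) ≤ ln m:

```python
import numpy as np

def shannon_entropy(p):
    """H(p) = -sum p_i ln p_i with K = 1 (eq 13); terms with p_i = 0 contribute 0."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return -np.sum(nz * np.log(nz))

m = 6
certain = np.array([1.0, 0, 0, 0, 0, 0])              # one state certain
skewed  = np.array([0.5, 0.2, 0.1, 0.1, 0.05, 0.05])  # arbitrary example
uniform = np.full(m, 1.0 / m)

print(shannon_entropy(certain))   # 0.0, the lower bound
print(shannon_entropy(skewed))    # ~1.43, between the bounds
print(shannon_entropy(uniform))   # ln 6 ~ 1.79, the maximum of eq 14
```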

2.2. Consistency Property. For convenience, the m possible states Ai with their corresponding probabilities pi may be organized in g groups, termed Bk with k = 1 ... g, for example,

\{B_k\} = \{ \underbrace{\{A_1, A_2\}}_{B_1},\; \underbrace{\{A_3, A_4, A_5, A_6\}}_{B_2},\; \ldots,\; \underbrace{\{A_{m-2}, A_{m-1}, A_m\}}_{B_g} \}

In this example, the number of elementary states for group B1 is g1 = 2, for group B2 it is g2 = 4, and so on. The probability wk that the system resides in one of the states of group Bk is given by

w_k = \sum_{i=l_{k-1}+1}^{l_k} p_i \qquad \text{with} \qquad l_k = \sum_{l=1}^{k} g_l

where gl is the number of elementary states gathered in group Bl. Each of the elementary states Ai belongs to one and only one group Bk and, therefore, the set of all wk can also be regarded as a probability distribution, which is denoted as \bar{w} = \{w_k\}, with k = 1 ... g and

\sum_{k=1}^{g} w_k = 1

The Shannon entropy of this probability distribution is then

H(\bar{w}) = -\sum_{k=1}^{g} w_k \ln w_k   (15)

For each of the groups, probability distributions can be defined. Given that the system resides in one of the elementary states of group Bk, the probability that the system resides in the elementary state Ai of this group, with i = lk−1+1 ... lk, is described by

q_{k,i} = \frac{p_i}{w_k}

with

\sum_{i=l_{k-1}+1}^{l_k} q_{k,i} = 1, \quad \forall\, k = 1 \ldots g

and, hence, the Shannon entropy of group Bk can be defined by

H(\bar{q}_k) = -\sum_{i=l_{k-1}+1}^{l_k} q_{k,i} \ln q_{k,i}   (16)

This can be written in a more convenient way by using the index j ≡ i − lk−1 and taking into account that lk − lk−1 = gk:

H(\bar{q}_k) = -\sum_{j=1}^{g_k} q_{k,j} \ln q_{k,j}   (17)

With expressions 13, 15, and 17, the consistency property17 can be written in the following way:

H(\bar{p}) = H(\bar{w}) + \sum_{k=1}^{g} w_k H(\bar{q}_k)   (18)

When the groups Bk gather elementary states Ai with the same probabilities pi, then

q_{k,i} = \frac{1}{g_k}

and with eq 17 the Shannon entropy of group Bk becomes

H(\bar{q}_k) = \ln g_k   (19)

Inserting eqs 15 and 19 into eq 18 results in

H(\bar{p}) = -\sum_{k=1}^{g} w_k \ln \frac{w_k}{g_k}   (20)

Equation 20 allows the calculation of the Shannon entropy by employing the probabilities of the groups and the number of elementary states in each group; gk is called the degeneracy factor.
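The consistency property can be checked numerically. The following sketch (an added illustration with an arbitrary example distribution) groups six elementary states into two groups and verifies eq 18, as well as eq 20 for the special case of equal probabilities inside each group:

```python
import numpy as np

def H(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# six elementary states, grouped into B1 = {A1, A2} and B2 = {A3..A6}
p = np.array([0.10, 0.15, 0.20, 0.25, 0.05, 0.25])
groups = [p[:2], p[2:]]

w  = np.array([g.sum() for g in groups])       # group probabilities w_k
Hq = [H(g / g.sum()) for g in groups]          # in-group entropies H(q_k), eq 17
print(np.isclose(H(p), H(w) + w @ Hq))         # consistency property, eq 18 -> True

# eq 20 for equal probabilities inside each group (q_{k,i} = 1/g_k)
p_eq = np.array([0.2, 0.2, 0.15, 0.15, 0.15, 0.15])
w_eq, g_deg = np.array([0.4, 0.6]), np.array([2, 4])
print(np.isclose(H(p_eq), -np.sum(w_eq * np.log(w_eq / g_deg))))  # True
```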

2.3. Compound Systems. A system is called a compound system if its state depends on the states of several single systems or subsystems. A very illustrative example of a compound system might be two dice that have been thrown together. The state of this compound two-dice system is defined by the combination of the states of each die, which results in 36 possible states when playing with two six-sided dice. Of course, a compound system can also consist of "different" subsystems such as, for example, a die and a coin. The crucial fact is that both subsystems are characterized by their individual probability distributions, for example \bar{q} and \bar{r}:

\bar{q} = \{q_j\}, \quad j = 1 \ldots m_1   (21)

\bar{r} = \{r_l\}, \quad l = 1 \ldots m_2   (22)

where m1 is the number of possible states of the first subsystem, and so on. For each subsystem, the corresponding Shannon entropy is defined as

H_1 = H(\bar{q}) = -\sum_{j=1}^{m_1} q_j \ln q_j   (23)

H_2 = H(\bar{r}) = -\sum_{l=1}^{m_2} r_l \ln r_l   (24)

The number of possible states of the compound system is mc = m1·m2, and the corresponding Shannon entropy Hc is

H_c = -\sum_{i=1}^{m_c} p_i \ln p_i   (25)

The state i of the compound system is a combination of the states j and l of the two subsystems. With pj,l, the probability of the compound system when subsystem 1 is in state j and subsystem 2 is in state l, one can also write

H_c = -\sum_{j=1}^{m_1} \sum_{l=1}^{m_2} p_{j,l} \ln p_{j,l}   (26)

Some Remarks on Terminology. Instead of speaking of m possible states in which a system can exist, one can also consider the system as characterized by a random variable, which can take on one of m possible values. The probability distribution defines the probabilities of the corresponding values of each random variable. Similarly, a compound system can be characterized by several random variables, where each random variable is characterized by its own probability distribution and Shannon entropy. In the next chapter, the term subsystems will be used for the particles of a fluid. The state of the whole fluid depends on the states of its subsystems, that is, its atoms and/or molecules. A single particle, however, an atom or molecule, can also be considered a compound system. For the sake of clarity, the term random variables will be used to define the state of the single particle. In the context of compound systems, however, the term subsystems has essentially the same meaning as the term random variables.

Independent Subsystems. Two subsystems are called independent if the probability distribution of the first subsystem, that is, the set of the probabilities for the possible states of the first subsystem, does not depend on the actual state of the second subsystem and vice versa. With reference to the example of the dice, this means that after throwing the dice the knowledge of the state of the first die does not reveal anything about the state of the second die. Independence does not, however, imply that the two dice do not interact. On the contrary, when thrown simultaneously, they may touch each other, but, and this is important, in an unpredictable manner. The probabilities of the possible states of both dice will therefore be the same whether they are thrown simultaneously or one after the other.

Next, a system composed of N independent subsystems (or: characterized by N random variables) will be considered. Each subsystem can reside in one of m states. The corresponding probability distributions of the N subsystems are termed

\bar{p}_1 = \{p_{1,1}, \ldots, p_{1,m}\}
\bar{p}_2 = \{p_{2,1}, \ldots, p_{2,m}\}
\quad\vdots
\bar{p}_N = \{p_{N,1}, \ldots, p_{N,m}\}

For each probability distribution, the corresponding Shannon entropy is

H_1 = H(\bar{p}_1)
H_2 = H(\bar{p}_2)
\quad\vdots
H_N = H(\bar{p}_N)

Because of the independent nature of the subsystems, the Shannon entropy of the compound system, Hc, is the sum of the Shannon entropies of the corresponding subsystems:8

H_c = H_1 + H_2 + \ldots + H_N   (27)

If all the subsystems are characterized by the same probability distribution, for example \bar{p}_s, where the subscript s indicates the single systems, i.e.,

\bar{p}_1 = \bar{p}_2 = \ldots = \bar{p}_N \equiv \bar{p}_s

then the Shannon entropy is homogeneous:

H_c = N \cdot H_s   (28)

Hence, if the probability distributions of the subsystems are known, and the subsystems are verifiably independent, the Shannon entropy of the corresponding compound system obeys the homogeneity relation 28. Equations 27 and 28 are used to calculate the entropy of a compound system by means of its constituent subsystems, which is the essential goal of discrete modeling: to calculate the properties of a system by using discrete properties of its constituents.

2.4. Shannon Entropy and Thermodynamic Entropy. As illustrated in section 1, the thermodynamic entropy of the system is defined by applying the maximum entropy principle to an entropy function that depends on occupation numbers. In a previous paper, it was shown that Shannon entropy and thermodynamic entropy are equivalent concepts.8 Therefore, Shannon entropy and its properties with reference to compound systems, especially the additivity and homogeneity relations, eqs 27 and 28, can be adopted toward this goal.

However, eqs 27 and 28 were deduced for subsystems whose probability distributions were given. This is the case if the probability distribution is an inherent attribute of the considered system; for example, the probability distribution for the results of throwing a die is a unique, unchangeable property of the die considered and will always be the same. In contrast, for multiparticle systems like gases or fluids, the considered probability distribution is not an a priori given property, but depends on constraints such as temperature and pressure. In this case, the maximum entropy principle has to be applied to find the specific probability distribution for the specified conditions, and the resulting entropy will consequently also depend on the constraints. Moreover, these constraints have to be defined before calculating the entropy. The question "Will homogeneity, eq 28, be fulfilled by the system?" is therefore incomplete, because no information about the constraints for the single and the compound system was given. The correct question is "How do the constraints of a single system and of the compound system have to be related, so that homogeneity of entropy is fulfilled?" The answer was given in a previous paper8 and is quite intuitive: the constraints must also be homogeneously related. For example, let the constraint for a single subsystem be u and the Shannon entropy after applying the maximum entropy principle be Hs. If the corresponding constraint U for the compound system that consists of N subsystems is U = N·u, then the Shannon entropy Hc of the compound system is given by the homogeneity relation, Hc = N·Hs, after applying the maximum entropy principle.

To conclude, any of the following conditions must be fulfilled to ensure the homogeneity of the Shannon entropy:
• the subsystems are independent, whether a maximum entropy principle is applied or not, or
• the systems considered follow a maximum entropy principle and no additional constraint for the probability distributions exists, except the normalization condition, eq 11, or
• the considered systems follow a maximum entropy principle and the constraints for the probability distributions of both the single and the compound system are defined homogeneously, as described above.

In the case of the ideal gas that will be modeled in the following sections, the first condition is fulfilled and the usage of eqs 27 and 28 is thereby justified.
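Additivity and homogeneity, eqs 27 and 28, are equally easy to verify for subsystems with given probability distributions. The following sketch (an added illustration) uses the die-and-coin example from section 2.3; for independent subsystems, the joint distribution is the outer product of the single-subsystem distributions:

```python
import numpy as np

def H(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

die  = np.full(6, 1/6)                 # subsystem 1
coin = np.array([0.5, 0.5])            # subsystem 2

joint = np.outer(die, coin)            # independence: p_{j,l} = q_j * r_l
print(np.isclose(H(joint), H(die) + H(coin)))   # additivity, eq 27 -> True

N = 5                                  # N equal, independent subsystems
joint_N = die
for _ in range(N - 1):
    joint_N = np.multiply.outer(joint_N, die)
print(np.isclose(H(joint_N), N * H(die)))       # homogeneity, eq 28 -> True
```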

3. DISCRETE STATES OF A MULTIPARTICLE SYSTEM

3.1. The Mechanical State of a Single Particle. Generally, in the sense of classical mechanics, the mechanical state of a single particle of a gas or fluid is determined by its velocity and positional vectors. The velocity vector determines its kinetic energy, and the positional vector its potential energy. The potential energy arises from the interaction forces between the considered particle and all other particles and therefore depends on the positions of all other particles. So, in contrast to the kinetic energy, the potential energy can only be seen in the context of the whole gas or fluid. Nonetheless, when considering the state of the single particle, it can be treated as a compound system whose state depends on its kinetic and its potential state. The kinetic state itself depends on three random variables, that is, the three components of the particle's velocity vector v⃗, and the probability distribution of the kinetic state is termed

\bar{w} = \{w_{j_1,j_2,j_3}\}   (29)

The figures j1, j2, and j3 are not the values of the vector components, but discrete indices by which the vector is defined. Of course, the components of the velocity vector are actually continuous variables; nevertheless, they are discretized in this manner in order to use the discrete formulation of the Shannon entropy. This artifice, the discretization of continuous variables, is a proven method that allows the representation of the Shannon entropy of continuous measures within the scope of classical physics, and it is eponymous with discrete modeling. The Shannon entropy of the kinetic subsystem can now be written as

H_{s,kin} = -\sum_{j_1} \sum_{j_2} \sum_{j_3} w_{j_1,j_2,j_3} \ln w_{j_1,j_2,j_3}   (30)

The index s indicates that a single particle is considered to be a constituent of a multiparticle system. Similarly, the probability distribution of the potential state is termed

\bar{q} = \{q_{l_1,l_2,l_3}\}   (31)

where l1, l2, and l3 define the discretized positional vector. The corresponding Shannon entropy of the potential subsystem is

H_{s,pot} = -\sum_{l_1} \sum_{l_2} \sum_{l_3} q_{l_1,l_2,l_3} \ln q_{l_1,l_2,l_3}   (32)

If a single particle is considered part of a multiparticle system, its kinetic energy does not depend on its potential energy: knowledge of its position provides no additional information about its kinetic state, and vice versa. This independence of the kinetic and potential states is also unmistakably supported by the literature.21 Therefore, considering additivity, eq 27, the Shannon entropy Hs of a single particle is the sum of the Shannon entropies of its kinetic and potential subsystems:

H_s = H_{s,kin} + H_{s,pot}   (33)

3.2. The Multiparticle System. The fluid considered can be treated as a compound system, which comprises a huge number of subsystems, that is, atoms and molecules. Provided that all particles behave statistically in the same way, each particle is characterized by the same probability distribution, and the particles can be treated as independent subsystems. A detailed analysis and justification for this independence can be derived from the maximum entropy principle and guarantees homogeneity.8 Equations 28 and 33 can, therefore, be combined to

H_c = N H_{s,kin} + N H_{s,pot}   (34)

As shown in a previous paper,8 the thermodynamic entropy of the considered fluid is proportional to its Shannon entropy, with the Boltzmann constant kB as the proportionality constant:

S = k_B H   (35)

With reference to eq 34, this yields the significant result that thermodynamic entropy can be split into two parts, which are denoted as the kinetic and the potential entropy terms:

S = S_{kin} + S_{pot}

with

S_{kin} = -N k_B \sum_{j_1} \sum_{j_2} \sum_{j_3} w_{j_1,j_2,j_3} \ln w_{j_1,j_2,j_3}   (36)

and

S_{pot} = -N k_B \sum_{l_1} \sum_{l_2} \sum_{l_3} q_{l_1,l_2,l_3} \ln q_{l_1,l_2,l_3}   (37)

3.3. Occupation Numbers. For convenience, the three indices j1, j2, and j3 are replaced by one index i, and l1, l2, and l3 are replaced by the index k:

w_i \equiv w_{j_1,j_2,j_3}   (38)

q_k \equiv q_{l_1,l_2,l_3}   (39)

The huge number of atoms and/or molecules in a fluid justifies the introduction of the occupation numbers Wi and Qk in eqs 36 and 37, in which the probabilities are set equal to the relative occupation numbers:

w_i = \frac{W_i}{N} \qquad q_k = \frac{Q_k}{N}

where Wi = N wi is the number of particles with the velocity vector v⃗i, and Qk = N qk is the number of particles with the positional vector r⃗k. It follows that

S = S_{kin}(W_i) + S_{pot}(Q_k)   (40)

The occupation numbers Wi and Qk are suitable for use as independent variables for the constraints N, U, and V. The normalization conditions Σwi = 1 and Σqk = 1 are replaced by two constraints, which correspond to the kinetic and potential states:

N = \sum_i W_i   (41)

N = \sum_k Q_k   (42)

Each velocity vector, v⃗i, and each positional vector, r⃗k, can be assigned a kinetic and potential energy, ukin,i and upot,k, respectively, so that the internal energies can now be formulated as

U_{kin}(W_i) = \sum_i u_{kin,i} W_i \qquad U_{pot}(Q_k) = \sum_k u_{pot,k} Q_k

and

U = U_{kin}(W_i) + U_{pot}(Q_k)   (43)

The system volume V can be expressed by the occupation numbers Qk, assuming a relationship between the particle number N(Qk), the potential energy Upot(Qk), and the volume V:

V = V(Q_k)   (44)

This seems highly plausible, because the occupation numbers Qk clearly define the volume that is occupied by the particles. Hence, the occupation numbers Wi and Qk are the desired microscopic variables and are introduced as the xi in eqs 7−10.

4. EXTREMIZATION OF THE ENTROPY

4.1. General Considerations. The following system of equations combines eqs 40 to 44, in order to discuss some general properties of the resulting thermal equation of state:

S = S_{kin}(W_i) + S_{pot}(Q_k)   (45a)
N = N(W_i)   (45b)
N = N(Q_k)   (45c)
U = U_{kin}(W_i) + U_{pot}(Q_k)   (45d)
V = V(Q_k)   (45e)

Entropy S is the target function, which has to be maximized by applying Lagrange's method of undetermined multipliers. Four constraints, comprising two for the particle numbers, one for the internal energy, and one for the system volume, are taken into account. The microscopic variables, the occupation numbers Wi and Qk, have to be varied to achieve maximum entropy. Hence, Lagrange's method yields (i) the distribution of the kinetic states, which will result in the Maxwell−Boltzmann distribution, as shown in section 5, (ii) the distribution of the potential states, that is, the spatial distribution of the particles, which, without further constraints, will be the homogeneous and isotropic distribution of the particles, as shown in section 6, and (iii) the desired relation S = S(N, U, V), which can be used to derive the equation of state along with any other thermodynamic function of interest.

The Lagrange function \mathcal{L}(W_i, Q_k) links the target function and the constraints by means of the four Lagrangian multipliers λ1, ..., λ4:

\mathcal{L}(W_i, Q_k) = S(W_i, Q_k) - \lambda_1 (N - N(W_i)) - \lambda_2 (N - N(Q_k)) - \lambda_3 \big( U - (U_{kin}(W_i) + U_{pot}(Q_k)) \big) - \lambda_4 (V - V(Q_k))   (46)

where N, U, and V designate values of the corresponding functions N(Wi), N(Qk), U(Wi, Qk), and V(Qk). The Lagrangian function \mathcal{L} depends on the variables Wi and Qk, whereas the Lagrangian multipliers are constants which will be determined by the subsequent maximization procedure. For this purpose, the derivatives are set to zero:

\frac{\partial \mathcal{L}(W_i, Q_k)}{\partial W_i} = 0 \quad \forall i   (47a)

\frac{\partial \mathcal{L}(W_i, Q_k)}{\partial Q_k} = 0 \quad \forall k   (47b)

which gives the following system of equations:

\frac{\partial S_{kin}(W_i)}{\partial W_i} + \lambda_1 \frac{\partial N(W_i)}{\partial W_i} + \lambda_3 \frac{\partial U_{kin}(W_i)}{\partial W_i} = 0 \quad \forall i   (48a)

\frac{\partial S_{pot}(Q_k)}{\partial Q_k} + \lambda_2 \frac{\partial N(Q_k)}{\partial Q_k} + \lambda_3 \frac{\partial U_{pot}(Q_k)}{\partial Q_k} + \lambda_4 \frac{\partial V(Q_k)}{\partial Q_k} = 0 \quad \forall k   (48b)

The third Lagrangian multiplier, λ3, is related to the constraint of the internal energy U, which is the only constraint that depends on both sets of occupation numbers (cf. eq 45d). The derivatives with respect to the occupation numbers Wi let all terms containing only occupation numbers Qk vanish, resulting in the set of eqs 48a, in which the terms depend only on occupation numbers Wi. For the same reason, the terms in the set of eqs 48b depend only on occupation numbers Qk. However, the third Lagrangian multiplier, λ3, is the same for all equations. Therefore, the system (45) can be split into two systems of equations, which can be handled separately. The kinetic system consists of the kinetic terms that depend on the Wi in eqs (45) and 48a:

S_{kin} = S_{kin}(W_i)   (49a)
N = N(W_i)   (49b)
U_{kin} = U_{kin}(W_i)   (49c)
\frac{\partial S_{kin}(W_i)}{\partial W_i} + \lambda_1 \frac{\partial N(W_i)}{\partial W_i} + \lambda_3 \frac{\partial U_{kin}(W_i)}{\partial W_i} = 0 \quad \forall i   (49d)

The potential system consists of the terms that depend on the Qk in eqs (45) and 48b:

S_{pot} = S_{pot}(Q_k)   (50a)
N = N(Q_k)   (50b)
U_{pot} = U_{pot}(Q_k)   (50c)
V = V(Q_k)   (50d)
\frac{\partial S_{pot}(Q_k)}{\partial Q_k} + \lambda_2 \frac{\partial N(Q_k)}{\partial Q_k} + \lambda_3 \frac{\partial U_{pot}(Q_k)}{\partial Q_k} + \lambda_4 \frac{\partial V(Q_k)}{\partial Q_k} = 0 \quad \forall k   (50e)

The kinetic system, eqs (49), and the potential system, eqs (50), will yield the two relations

S_{kin} = S_{kin}(N, U_{kin})   (51a)
S_{pot} = S_{pot}(N, U_{pot}, V)   (51b)

Now, two independent solutions for two different energetic constraints, Ukin and Upot, exist, although only one energetic constraint, U, is actually given. Even when considering U = Ukin + Upot, both terms are not defined unambiguously. This ambiguity is resolved by the common third Lagrangian multiplier λ3 in eqs (48), which links both solutions (51) to the desired relationship S = S(N, U, V).

4.2. The Thermal Equation of State. Thermodynamic temperature is defined by eq 3

\left( \frac{\partial S(N,U,V)}{\partial U} \right)_{V,N} \equiv \frac{1}{T}   (3)

According to the Lagrange formalism, this derivative is given by the third Lagrangian multiplier (cf. eq 46):

\lambda_3 = -\frac{1}{T}

Because λ3 is also the Lagrangian multiplier of the kinetic system, eqs (49), temperature can also be represented by

\left( \frac{\partial S_{kin}(N, U_{kin})}{\partial U_{kin}} \right)_N = \frac{1}{T}   (52)

Equation 4 introduces the pressure P by

\left( \frac{\partial S(N,U,V)}{\partial V} \right)_{U,N} \equiv \frac{P}{T}   (4)

and splitting the entropy according to eq 45a yields

\frac{P}{T} = \left( \frac{\partial S_{kin}(N,U,V)}{\partial V} \right)_{U,N} + \left( \frac{\partial S_{pot}(N,U,V)}{\partial V} \right)_{U,N}   (53)

Taking eq 51a into account, the kinetic entropy term is a function that depends on N and Ukin, allowing the first term of eq 53 to be expressed by

\left( \frac{\partial S_{kin}(N,U,V)}{\partial V} \right)_{U,N} = \left( \frac{\partial S_{kin}(N, U_{kin})}{\partial U_{kin}} \right)_N \left( \frac{\partial U_{kin}(N,U,V)}{\partial V} \right)_{U,N}

and using eq 52 yields

\left( \frac{\partial S_{kin}(N,U,V)}{\partial V} \right)_{U,N} = \frac{1}{T} \left( \frac{\partial U_{kin}(N,U,V)}{\partial V} \right)_{U,N}   (54)

The kinetic term of the internal energy, a function that depends on the variables N, U, and V, can be expressed by

U_{kin}(N,U,V) = U - U_{pot}(N,U,V)

with the same variables on both sides. The required derivative is

\left( \frac{\partial U_{kin}(N,U,V)}{\partial V} \right)_{U,N} = -\left( \frac{\partial U_{pot}(N,U,V)}{\partial V} \right)_{U,N}

Inserting this expression into eq 54 gives

\left( \frac{\partial S_{kin}(N,U,V)}{\partial V} \right)_{U,N} = -\frac{1}{T} \left( \frac{\partial U_{pot}(N,U,V)}{\partial V} \right)_{U,N}

and inserting this expression into eq 53 finally yields

P = T \left( \frac{\partial S_{pot}(N,U,V)}{\partial V} \right)_{U,N} - \left( \frac{\partial U_{pot}(N,U,V)}{\partial V} \right)_{U,N}   (55)

This result relates the five variables P, T, N, U, and V. Together with eq 3, the internal energy can be eliminated, resulting in the desired general equation of state, F(P, T, N, V) = 0. This is a very general formulation of the thermal equation of state, which depends solely on the potential terms of the internal energy and entropy. The great benefit of eq 55 is that only the evaluation of the potential system is necessary. For most common fluids, the kinetic system will be the same, and its evaluation is performed in the next section. The common Lagrangian multiplier λ3 defines temperature and links the kinetic and potential systems. One can say that the kinetic and potential systems inherit the same temperature.
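Before the kinetic system is evaluated analytically in the next section, the extremization scheme of eqs (49) can be illustrated numerically. The sketch below is a toy version only (degeneracy omitted, arbitrary illustration values for the energy levels and the constraint; it is not the paper's analytic route, which is worked out in the Supporting Information). It maximizes the entropy of a single distribution subject to normalization and a fixed mean energy; the maximizer has the Boltzmann form, i.e., ln p_i is linear in e_i with one common slope, the numerical counterpart of the common multiplier λ3:

```python
import numpy as np
from scipy.optimize import minimize

# Toy analogue of the kinetic system, eqs (49): maximize -sum p ln p over
# 50 discrete energy levels subject to normalization and a fixed mean energy.
e = np.arange(1, 51) * 0.005                  # level energies e_i (arbitrary units)
u = 0.05                                      # energy constraint U/N (arbitrary)

cons = ({'type': 'eq', 'fun': lambda p: p.sum() - 1.0},
        {'type': 'eq', 'fun': lambda p: p @ e - u})
res = minimize(lambda p: np.sum(p * np.log(p)),      # minimize -S
               np.full(e.size, 1.0 / e.size),
               bounds=[(1e-12, 1.0)] * e.size,
               constraints=cons, method='SLSQP')

# Boltzmann form p_i ~ exp(-beta*e_i): ln p_i is linear in e_i, so the
# slope between neighboring levels is (nearly) one and the same constant.
slope = np.diff(np.log(res.x)) / np.diff(e)
print(slope.min(), slope.max())               # nearly identical values
```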

5. THE KINETIC SYSTEM

5.1. The Kinetic Entropy. As discussed in the previous section, the kinetic state of a particle is defined by its velocity vector. This state is actually composed of three single and independent states, that is, the components of the velocity vector. Using the spherical coordinates (v, ϑ, φ) gives v as the norm of the velocity vector, while ϑ and φ denote the direction of movement (see Figure 1a,b).

Figure 1. Velocity vector in Cartesian (a) and spherical (b) coordinate systems.

Hence, instead of one index i, the kinetic state is characterized by the combination of j, l, and m, which defines the actual spherical velocity vector (vj, ϑl, φm). These vector components are also actually continuous variables, but in order to use the discrete formulation of the Shannon entropy, they must be discretized in this way. For example, one can set the velocity component vl = l·Δv with an arbitrarily small v-step, Δv. In the limiting case Δv → 0, this again reveals the continuous character of the variable. Denoting the probability of the state i ≙ (j, l, m) with wjlm, eq 36 takes the following form:

S_{kin} = -N k_B \sum_j \sum_l \sum_m w_{jlm} \ln w_{jlm}

For the system under consideration, isotropy is assumed, which means that no preferred direction of movement exists. Therefore, all states (j, l, m) with a given velocity vj have the same probability. From another viewpoint, isotropy is also arguably a consequence of the maximum entropy principle. One could imagine occupation numbers for two directions of motion, say Ls and Mt. Because velocity and direction are independent, the Shannon entropy can be split into the corresponding terms, that is, a velocity subsystem and two "directional" subsystems, similar to the kinetic and potential subsystems. Besides the normalization condition, no additional constraints with regard to the occupation numbers Ls and Mt exist. When the maximum entropy principle is applied to both directional subsystems according to eq 14, this yields a uniform distribution of the occupation numbers Ls and Mt, confirming the above-mentioned isotropy.

With reference to section 2.3, these states can be gathered into classes, and the probability ŵj of the velocity class (j) is

\hat{w}_j = \sum_l \sum_m w_{jlm}

With ĝj the number of states in velocity class (j), that is, the degeneracy factor of this class, eqs (20) and (35) can be used to get

S_{kin} = -N k_B \sum_j \hat{w}_j \ln \frac{\hat{w}_j}{\hat{g}_j}

Considering occupation numbers N̂j = N ŵj for particles with the same velocity vj yields

S_{kin} = -k_B \sum_j \hat{N}_j \ln \frac{\hat{N}_j}{\hat{g}_j N}   (56a)

N = \sum_l \hat{N}_l   (56b)

What remains is the calculation of the degeneracy factor ĝj.

5.2. Degeneracy. We can imagine the velocity states to correspond to velocity vectors aiming at grid points in a Cartesian coordinate system. The coordinates of the grid points are given by the integer triplet (vx, vy, vz), which represents the Cartesian coordinates of the velocity vector. The number ΔZ of states in an arbitrary subspace can be set proportional to the volume of the subspace. The subspace of interest is a spherical shell with radius v and thickness Δv (Figure 2), representing all states inside velocity class (j), introduced in section 5.1. Δv can be chosen arbitrarily small to satisfy the continuous character of velocity vectors. Therefore, all states inside the spherical shell can be assigned the same velocity vj, which is the j-fold of Δv:

v_j = j \Delta v   (57)

Figure 2. Discretization of the velocity: 2-dimensional x-y section; the "X" in the center represents the z-axis; the red marked area indicates class j = 1.

The number ΔZ of states inside this spherical shell is

\Delta Z = c \, 4\pi v_j^2 \Delta v   (58)

with the proportionality constant c, and because of eq 57,

\Delta Z_j = c \, 4\pi j^2 \Delta v^3

This number corresponds to the degeneracy factor ĝj of states which have the same velocity vj. Embedding Δv and all other constant factors into the proportionality factor c yields

\hat{g}_j = \Delta Z_j = c j^2

and the Shannon entropy of the system is given by

S_{kin} = -k_B \sum_j \hat{N}_j \ln \frac{\hat{N}_j}{c j^2 N}   (59)

5.3. Energy Classes. It is more convenient to deal with energy classes than with velocity classes. Equations (56) are then replaced by

S_{kin} = -k_B \sum_j N_j \ln \frac{N_j}{g_j N}   (60a)

N = \sum_l N_l   (60b)

where the Ni represent the numbers of particles in the kinetic energy classes i, defined by

e_i = i \Delta e   (61)

In analogy to the velocity classes defined by eq 57, ei is a multiple of the arbitrary energy quantum Δe, and gi is the degeneracy factor of the energy classes. Energy e and velocity v of a particle are related by

e = \frac{m v^2}{2}

with m as the mass of the particle, from which follows

v = \sqrt{\frac{2e}{m}} \qquad \Delta v = \frac{\Delta e}{\sqrt{2em}}

The last step is justified for infinitesimally small values of Δv and Δe (cf. the remarks on Δv in section 5.2). These two expressions must be inserted into eq 58. Performing the same steps as before results in the desired degeneracy factor gi for the energy classes:

g_i = c \sqrt{i}   (62)

with an again arbitrary constant c. Equation 59 is now replaced by

S_{kin} = -k_B \sum_i N_i \ln \frac{N_i}{c \sqrt{i} \, N}   (63)
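The degeneracy factor of eq 62 can also be checked by brute-force counting. The sketch below (an added illustration; grid units, m = 1, and Δv = 1 are arbitrary assumptions) counts integer velocity grid points per energy class and shows that the count per class levels off toward the g_i = c√i behavior once i is large enough for the shell approximation of section 5.2 to hold:

```python
import numpy as np

# Count integer velocity grid points per kinetic-energy class
# e in [(i-1)*de, i*de) and compare with g_i = c*sqrt(i), eq 62.
n = 60
ax = np.arange(-n, n + 1)
vx, vy, vz = np.meshgrid(ax, ax, ax, indexing='ij')
e = (vx**2 + vy**2 + vz**2) / 2.0             # e = v^2/2 per state

de = 40.0                                     # energy quantum (grid units)
cls = (e // de).astype(int) + 1               # energy class index per state
counts = np.bincount(cls.ravel())[1:30]       # empirical degeneracies g_i

i = np.arange(1, counts.size + 1)
print((counts / np.sqrt(i)).round())          # climbs to a plateau: g_i -> c*sqrt(i)
```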

5.4. Evaluation of the Kinetic System. The kinetic energy of a particle is given by eq 61, and the corresponding occupation numbers are the Ni as used in eq 63. The corresponding normalization condition is given by eq 60b. The kinetic term of the internal energy can now be written as

U_{kin}(N_i) = \sum_i N_i \, i \Delta e   (64)

The goal is to find the distribution of the occupation numbers Ni that maximizes the kinetic entropy term Skin while considering the constraints N and Ukin. This is performed by applying Lagrange's method of undetermined multipliers to the kinetic system of equations 49:

S_{kin} = -k_B \sum_i N_i \ln \frac{N_i}{c \sqrt{i} \, N}   (65a)

N = \sum_i N_i   (65b)

U_{kin} = \sum_i N_i \, i \Delta e   (65c)

\frac{\partial S_{kin}(N_i)}{\partial N_i} + \lambda_1 \frac{\partial N(N_i)}{\partial N_i} + \lambda_3 \frac{\partial U_{kin}(N_i)}{\partial N_i} = 0 \quad \forall i   (65d)

The details of this calculation are presented in the Supporting Information. The results are summarized here: the distribution of the occupation numbers is given by

N_i = N \frac{\sqrt{i} \, q^i}{\sum_j \sqrt{j} \, q^j}, \quad i = 1 \ldots \infty   (66)

with the parameter q defined implicitly by the transcendent function

U_{kin}(N, q) = N \Delta e \, \frac{\mathrm{Li}(-\tfrac{3}{2}, q)}{\mathrm{Li}(-\tfrac{1}{2}, q)}   (67)

Li is the polylogarithm, defined by

\mathrm{Li}(z, q) = \sum_{i=1}^{\infty} i^{-z} q^i

Because the polylogarithm is a transcendent function, it is not possible to express q explicitly as q(Ukin, N). The same applies to the target function, which is given, just as the occupation numbers, as a function that depends on q and N:

S_{kin}(N, q) = -N k_B \left( \frac{\mathrm{Li}(-\tfrac{3}{2}, q)}{\mathrm{Li}(-\tfrac{1}{2}, q)} \ln q - \ln \mathrm{Li}\!\left(-\tfrac{1}{2}, q\right) \right)   (68)

At first glance it seems to be a drawback that the resulting entropy term does not explicitly depend on the variables Ukin and N. However, eqs 67 and 68 provide the appropriate derivatives for the final goal, the evaluation of eq 55.

5.4.1. The Parameter q. The true meaning of q is revealed when eqs 67 and 68 are used to calculate the derivative (∂Skin(Ukin, N)/∂Ukin)N, which equals 1/T according to eq 52:

\left( \frac{\partial S_{kin}(U_{kin}, N)}{\partial U_{kin}} \right)_N = \left( \frac{\partial S_{kin}(N, q)}{\partial q} \right)_N \left( \frac{\partial q(U_{kin}, N)}{\partial U_{kin}} \right)_N = \left( \frac{\partial S_{kin}(N, q)}{\partial q} \right)_N \left( \frac{\partial U_{kin}(N, q)}{\partial q} \right)_N^{-1} = \frac{1}{T}

Considering the derivative of the polylogarithm,

\frac{\partial \mathrm{Li}(-z, q)}{\partial q} = \frac{1}{q} \mathrm{Li}(-z-1, q)

this yields

\left( \frac{\partial S_{kin}}{\partial U_{kin}} \right)_N = -\frac{k_B \ln q}{\Delta e} = \frac{1}{T}   (69)

and finally results in

q = e^{-\Delta e / (k_B T)}   (70)

Hence, q is a measure of the thermodynamic temperature! It varies within the range 0 ≤ q ≤ 1, with q = 0 for T = 0 and q → 1 for T → ∞. Raising q to the power of i and considering eq 61 yields

q^i = e^{-e_i / (k_B T)}

q^i can therefore be considered the Boltzmann factor.

5.4.2. The Maxwell−Boltzmann Distribution of Energies. By means of eq 66, relative occupation numbers can be introduced:

X_i = \frac{N_i}{\sum_i N_i} = \frac{\sqrt{i} \, q^i}{\sum_k \sqrt{k} \, q^k}   (71)

The Xi describe a discrete density distribution; that is, according to eq 61 they give the fraction of particles with kinetic energy ei = i Δe. Taking the limit Δe → 0 yields the density function f(i):

f(i) = \frac{\sqrt{i} \, q^i}{\int_0^{\infty} \sqrt{k} \, q^k \, \mathrm{d}k}   (72)

where i is now a continuous variable, and f(i) di is the fraction of particles in the interval di around i. The integral in the denominator is

\int_0^{\infty} \sqrt{k} \, q^k \, \mathrm{d}k = \frac{\sqrt{\pi}}{2 (-\ln q)^{3/2}}

and the density function becomes

f(i) = \frac{2}{\sqrt{\pi}} (-\ln q)^{3/2} \sqrt{i} \, q^i

To transform this expression to the density function f(e), eq 61 must be taken into account. Performing the transformation yields the Maxwell−Boltzmann distribution:

f(e) = \frac{2}{\sqrt{\pi}} \left( \frac{1}{k_B T} \right)^{3/2} \sqrt{e} \, \exp\!\big(-e/(k_B T)\big)   (73)

To compare the discrete distribution, eq 71, and the continuous Maxwell−Boltzmann distribution, eq 73, within one diagram, one has to consider that the values on the x-axis of the discrete representation are absolute values of the relative fractions, whereas the x values of the continuous distribution represent relative fractions that are related to an energy interval. When the reference interval of the continuous distribution is set equal to the energy step Δe of the discrete distribution, both diagrams can be compared. Additionally, the parameter q of the discrete distribution must be related to the temperature T of the Maxwell−Boltzmann distribution by using eq 70. The comparison is shown in Figure 3 for T = 300 K, T = 1000 K, and Δe = 0.005 eV; a numerical version of this comparison is sketched below. The agreement is very good, yet not perfect, but the difference vanishes for Δe → 0. Equation 71 can therefore be considered a discrete representation of the Maxwell−Boltzmann distribution.

Figure 3. Combined plot of the discrete (crosses, eq 71) and continuous (lines, eq 73) Maxwell−Boltzmann distributions for T1 = 300 K, T2 = 1000 K, and Δe = 0.005 eV.
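The comparison of Figure 3 can be reproduced with a few lines. The sketch below (an added illustration) builds the discrete distribution Xi from eqs 70 and 71 for T = 300 K and Δe = 0.005 eV and compares it with f(e)Δe from eq 73:

```python
import numpy as np

kB = 8.617333e-5                    # Boltzmann constant, eV/K
T, de = 300.0, 0.005                # temperature and energy quantum De, eV
q = np.exp(-de / (kB * T))          # eq 70

i = np.arange(1, 400)
X = np.sqrt(i) * q**i
X /= X.sum()                        # discrete distribution X_i, eq 71

e = i * de                          # e_i = i*De, eq 61
f = 2/np.sqrt(np.pi) * (kB*T)**-1.5 * np.sqrt(e) * np.exp(-e/(kB*T))  # eq 73

print(np.abs(X - f*de).max())       # small; shrinks further as De -> 0
```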

5.4.3. The Caloric Equation of State of an Ideal Gas. The caloric equation of state is an expression of the form U = U(T, V, N). In the special case of an ideal gas, the whole internal energy consists only of the total kinetic energy of the particles, which does not depend on the system volume. Hence, the caloric equation of state has the general form

U_{kin} = U_{kin}(T, N)

Considering that, according to eq 70, the parameter q is exclusively a function of temperature, that is, q = q(T), eq 67 already represents the caloric equation of state:

U_{kin} = U_{kin}(q(T), N) = N \Delta e \, \frac{\mathrm{Li}(-\tfrac{3}{2}, q)}{\mathrm{Li}(-\tfrac{1}{2}, q)}   (74)

5.4.4. The Heat Capacity of an Ideal Gas. The heat capacity at constant volume is generally defined by

C_V \equiv \left( \frac{\partial U(T, V, N)}{\partial T} \right)_{V,N}   (75)

where U is the internal energy, which in the case of an ideal gas represents the kinetic energy, given by the caloric equation of state, eq 74. The derivative can be rewritten as

C_V = \left( \frac{\partial U_{kin}(q, N)}{\partial q} \right)_N \left( \frac{\partial q(T)}{\partial T} \right)   (76)

For the first factor, the derivative of Ukin with respect to q, insertion of expression 74 yields

\left( \frac{\partial U_{kin}(q, N)}{\partial q} \right)_N = \frac{N \Delta e}{q} \left\{ \frac{\mathrm{Li}(-\tfrac{5}{2}, q)}{\mathrm{Li}(-\tfrac{1}{2}, q)} - \left( \frac{\mathrm{Li}(-\tfrac{3}{2}, q)}{\mathrm{Li}(-\tfrac{1}{2}, q)} \right)^2 \right\}   (77)

The derivative of q, as given in eq 70, with respect to temperature is

\frac{\partial q}{\partial T} = \frac{k_B}{\Delta e} \, q (\ln q)^2   (78)

Inserting eqs 77 and 78 into eq 76 yields

C_V = N k_B (\ln q)^2 \left\{ \frac{\mathrm{Li}(-\tfrac{5}{2}, q)}{\mathrm{Li}(-\tfrac{1}{2}, q)} - \left( \frac{\mathrm{Li}(-\tfrac{3}{2}, q)}{\mathrm{Li}(-\tfrac{1}{2}, q)} \right)^2 \right\}   (79)

For infinite temperature, q approaches 1, q → 1. The corresponding limits of interest of the polylogarithm are

\lim_{q \to 1} (\ln q)^2 \, \frac{\mathrm{Li}(-\tfrac{5}{2}, q)}{\mathrm{Li}(-\tfrac{1}{2}, q)} = \frac{15}{4}

\lim_{q \to 1} (\ln q)^2 \left( \frac{\mathrm{Li}(-\tfrac{3}{2}, q)}{\mathrm{Li}(-\tfrac{1}{2}, q)} \right)^2 = \frac{9}{4}

yielding the final result:

C_V^{T \to \infty} = \frac{3}{2} N k_B   (80)
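Equations 79 and 80 can be verified numerically with an arbitrary-precision polylogarithm. The sketch below (an added illustration) uses mpmath, whose polylog routine is assumed here to evaluate the negative half-integer orders needed for |q| < 1, and shows CV/(N kB) approaching 3/2 as q → 1:

```python
import mpmath as mp

def cv_per_particle(q):
    """C_V / (N kB) according to eq 79; mp.polylog(s, z) is Li(s, z)."""
    li12 = mp.polylog(-0.5, q)                  # Li(-1/2, q)
    r52 = mp.polylog(-2.5, q) / li12            # Li(-5/2, q) / Li(-1/2, q)
    r32 = mp.polylog(-1.5, q) / li12            # Li(-3/2, q) / Li(-1/2, q)
    return mp.log(q)**2 * (r52 - r32**2)

for q in (0.5, 0.9, 0.99, 0.999):
    print(q, cv_per_particle(q))                # tends to 3/2, eq 80
```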

5.4.5. Discussion. It is an outstanding feature of discrete modeling that, although the primary goal is to derive equations of state, the distribution of the occupation numbers is also obtained. In the case of the kinetic system, this yields the Maxwell−Boltzmann distribution, eq 73. In the course of this derivation, the general validity of this distribution with reference to real gases and fluids becomes obvious from the viewpoint of discrete modeling. Additionally, a discrete representation of the Maxwell−Boltzmann distribution, eq 71, combined with a discrete version of the Boltzmann factor, eq 70, was derived. In the case of an ideal gas, the discrete representation of the caloric equation of state, eq 74, and the heat capacity of the ideal gas, eq 80, could also be deduced. These results can be regarded as striking proofs of the applicability of the presented method.

6. THE POTENTIAL SYSTEM

As shown in the previous section, all models in the form of the system of eqs (45) yield a distribution of the occupation numbers Ni which represents the Maxwell−Boltzmann distribution of energies. The potential state, however, is more complicated, and a general solution in the former sense cannot be given. The reason is that the potential energy depends on the relative positions of all particles (for real systems, a number on the order of Loschmidt's constant), and an exact expression for this term cannot practically be given. Even if one could deal with such a huge number of variables, the potential term actually depends on the interaction forces between the particles and on the microscopic structures, shapes, and sizes of the molecules. Therefore, it is not possible to present a general solution for the potential system that is comparable to the Maxwell−Boltzmann distribution for the kinetic system. Instead, the process of modeling the ideal gas will be presented, and how the method can be extended to describe real systems will be discussed.

6.1. The Potential Entropy. The potential state of a particle is defined by the coordinates of its positional vector. The system volume can be divided into equally sized volume cells, and each cell is assigned an index number (Figure 4). The potential state of the particle is then given by its cell index. Each cell has the same volume V0, within which the exact position is not defined. With qi representing the probability of the particle being in cell i, and Qi = N qi the corresponding occupation number, the potential term of the Shannon entropy, eq 37, is now

S_{pot} = -k_B \sum_i Q_i \ln \frac{Q_i}{N}   (81)

with the corresponding normalization constraint

\sum_i Q_i = N   (82)

Figure 4. Volume cells.

The system volume V in this model does not depend on the occupation numbers; instead, it is given a priori. In the following example, an ideal gas will be considered, which means that the potential energy vanishes. In the context of Lagrange's method, the potential system, eqs (50), is reduced to the target function 81, the normalization condition 82, and eq 50e without the λ3 and λ4 terms. As a general property of the Shannon entropy, without any constraints besides the normalization condition, the maximum value is achieved for the uniform distribution qi = const = 1/m, where m is the number of possible states and ln m is the corresponding Shannon entropy (cf. eq 14). The number of possible states is given by the number of cells

m = \frac{V}{V_0}

and the entropy of a single particle is

S_{pot,s} = k_B \ln \frac{V}{V_0}

As already discussed in section 3.2, the particles can be considered independent. The potential term of the entropy is finally given by

S_{pot} = N k_B \ln \frac{V}{V_0}   (83)

Because of the vanishing potential part of the internal energy, the equation of state, eq 55, is reduced to its first term, which is derived from eq 83:

\left( \frac{\partial S_{pot}(U, V, N)}{\partial V} \right)_{U,N} = \frac{N k_B}{V}   (84)

and gives the ideal-gas equation:

P V = N k_B T   (85)

The exact value of the potential term of the Shannon entropy, eq 83, depends on the choice of V0. Nevertheless, the resulting equation, eq 85, is not affected by this choice.

In the presented example, a model was introduced in which the potential entropy does not actually depend on the occupation numbers. With consideration for eq 83, this is caused by the assumption that the system volume does not depend on the occupation numbers. This may be a good approach for systems where the volume is defined a priori, for example, by the physical boundaries of a gas reservoir. In these cases, eq 55 takes on the general form

P = \frac{N k_B T}{V} - \left( \frac{\partial U_{pot}(U, V, N)}{\partial V} \right)_{U,N}   (86)

Hence, the process of deriving an equation of state is reduced to that of finding a model for the potential energy, Upot(U, V, N) in eq 86, which originated from eq 55. This means finding an expression for the potential energy of a system, given the total energy, the volume, and the particle number, in contrast to common problems where the total energy is given by several energetic terms, for example, kinetic and potential energy. As a simple example, a basic model may be considered that represents an attractive potential depending on the density:

U_{pot} = -\frac{a}{V}

This gives the term −a/V² in eq 86, which is typical for the attractive part of van der Waals-like equations of state. However, the model for the ideal gas used here will in many cases be too simple: real molecules may have a finite size and different shapes. Finite molecular radii prevent the particles from approaching below a limiting distance; this requires an additional constraint for the entropy function. Shapes of molecules lead to entropy terms that also depend on their orientations. Further enhancements will be required for multicomponent systems, which were not yet considered.
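As a closing illustration of eq 86 (an added sketch; a is the attraction parameter of the simple model above), the resulting pressure follows by symbolic differentiation:

```python
import sympy as sp

N, kB, T, V, a = sp.symbols('N k_B T V a', positive=True)

U_pot = -a / V                            # attractive model potential from the text
P = N * kB * T / V - sp.diff(U_pot, V)    # eq 86
print(sp.simplify(P))                     # contains the -a/V**2 attraction term
```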

7. CONCLUSION

The main goal of this work was to introduce the method of discrete modeling, which provides a way to derive equations of state. For the systems considered in this paper, the kinetic and potential states of the particles are independent. The maximum entropy principle for systems in thermodynamic equilibrium was realized by Lagrange's method of undetermined multipliers, with entropy as the target function and the thermodynamic state variables internal energy, system volume, and particle number as constraints. The common set of variables of the target function and constraints are occupation numbers of microscopic states. These two prerequisites, the independent nature of kinetic and potential states and the maximum entropy principle, result in a division into a kinetic and a potential system of equations. Both systems can be treated separately and are linked by a common Lagrangian multiplier that represents the system temperature.

The kinetic subsystem yields the Maxwell−Boltzmann distribution, the caloric equation of state, and the heat capacity of an ideal gas. Because of the variety of different interaction potentials in real gases and fluids, no explicit equation of state can be given to describe all cases. Instead, the potential subsystem must be evaluated for each class of interaction potentials separately, based on a relation derived in this paper. The key to modeling real gases and fluids consists in finding a representation of the potential energy, possibly additional energy terms, the system volume, and the potential entropy term by means of a common set of occupation numbers. In the case of an ideal gas, a simple cell model was used with occupation numbers that specified the number of particles in the cells, that is, similar to a density distribution. In this case, neither the potential energy nor the system volume depended on the occupation numbers. Without additional constraints, this resulted in an isotropic particle distribution and the ideal-gas equation.

As soon as interaction forces are considered, the potential energy and the system volume will also depend on the occupation numbers. In that case, the stringent demand for statistical independence of subsystems may no longer be assumed. This suggests another point of view on the probabilities of states for subsystems: namely, to treat these probabilities a priori as probabilities "in the surrounding of neighboring subsystems" and not as probabilities of the isolated particles. In a system already consisting of a huge number of particles, the properties of these particles, and thus their respective probability distributions, will not change, no matter whether the system is enlarged or reduced by a minor amount of particles. As a consequence, the constraints also have to be modeled for the particles in their environment. From this viewpoint, the assumption of homogeneity of entropy may still be appropriate.

One of the main advantages of discrete modeling is that it is possible to account for different physical phenomena by considering different forms of energy, for example, rotational and vibrational fractions of energy, while taking advantage of the additivity of extensive functions. In this way, the respective physical phenomena can clearly be related to distinct parts of the resulting equation of state. With reference to the energetic interactions considered, practical applications of discrete modeling merely require the predefinition of states for individual molecules. The method then delivers the resulting equilibrium distribution of discrete states along with the thermodynamic system variables in equilibrium. The availability of this distribution is a novelty compared with classical modeling methods, which are usually based on averaged molecular properties, not discrete molecular states. The results of this research have opened avenues to a diverse range of applications for modeling molecules with complex shapes and strong interaction forces.

ASSOCIATED CONTENT

Supporting Information. Evaluation of the kinetic system by applying Lagrange's method of undetermined multipliers. This material is available free of charge via the Internet at http://pubs.acs.org.

AUTHOR INFORMATION

Corresponding Author
*Tel.: +43 (0)676 40 58 140. E-mail: martin.georg.pfleger@gmail.com.

Notes
The authors declare no competing financial interest.

ACKNOWLEDGMENTS

The authors gratefully acknowledge support from NAWI Graz.

REFERENCES

(1) Shannon, C. E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379−423, 623−656.
(2) Jaynes, E. T. Information Theory and Statistical Mechanics. II. Phys. Rev. 1957, 108, 171−189.
(3) Lin, S.; Kernighan, B. W. An Effective Heuristic Algorithm for the Traveling-Salesman Problem. Oper. Res. 1973, 21, 498−516.
(4) Kapur, J. N. Maximum-Entropy Models in Science and Engineering, revised ed.; John Wiley & Sons, Inc.: 1993.
(5) Kapur, J. N.; Kesavan, H. K. Entropy Optimization Principles with Applications; Academic Press, Inc.: 1992.
(6) Ben-Naim, A. Entropy Demystified; World Scientific Publishing Co. Pte. Ltd.: 2008.
(7) Ben-Naim, A. A Farewell to Entropy: Statistical Thermodynamics Based on Information; World Scientific Publishing Co. Pte. Ltd.: 2008.
(8) Pfleger, M.; Wallek, T.; Pfennig, A. Constraints of Compound Systems: Prerequisites for Thermodynamic Modeling Based on Shannon Entropy. Entropy 2014, 16, 2990−3008.
(9) Prausnitz, J. M.; Lichtenthaler, R. N.; de Azevedo, E. G. Molecular Thermodynamics of Fluid-Phase Equilibria, 3rd ed.; Prentice Hall, Inc.: 1999.
(10) Jaynes, E. T. Information Theory and Statistical Mechanics. Phys. Rev. 1957, 106, 620−630.
(11) Brillouin, L. Thermodynamics, Statistics, and Information. Am. J. Phys. 1961, 29, 318.
(12) Jaynes, E. T. Gibbs vs Boltzmann Entropies. Am. J. Phys. 1965, 33, 391.
(13) Tribus, M.; Shannon, P. T.; Evans, R. B. Why Thermodynamics Is a Logical Consequence of Information Theory. AIChE J. 1966, 12, 244−248.
(14) Jaynes, E. T. Violation of Boltzmann's H Theorem in Real Gases. Phys. Rev. A 1971, 4, 747.
(15) Wehrl, A. General Properties of Entropy. Rev. Mod. Phys. 1978, 50, 221.
(16) Jaynes, E. T. Papers on Probability, Statistics and Statistical Physics; D. Reidel Publishing Company: 1982.
(17) Guiasu, S.; Shenitzer, A. The Principle of Maximum Entropy. Math. Intell. 1985, 7, 42−48.
(18) Kapur, J. N. Maximum Entropy Models in Science and Engineering; Academic Press, Inc.: 1989.
(19) Kapur, J. N. Entropy Optimization Principles with Applications; Academic Press, Inc.: 1992.
(20) Caves, C. M. Information and Entropy. Phys. Rev. E 1993, 47, 4010.
(21) Landau, L. D.; Lifschitz, E. M. Lehrbuch der Theoretischen Physik, Band 5; Akademie-Verlag: Berlin, 1978.