
Article

Statistical Teleodynamics: Toward a Theory of Emergence. Venkat Venkatasubramanian. Langmuir, Just Accepted Manuscript. DOI: 10.1021/acs.langmuir.7b02166. Publication Date (Web): August 9, 2017.




Fig. 1. Spreading of the pay distribution under competition in an ideal free market environment.

Addressing one of the two conceptual challenges, Venkatasubramanian proposed an information-theoretic framework [41,42] in which he identified that entropy is really a measure of fairness in a distribution, not just of randomness or uncertainty, which makes it an appropriate candidate in economics. In this paper, we follow up on this line of inquiry and address the other critical challenge: reconciling the behavior of goal-driven, teleological agents with that of purpose-free, randomly driven molecules. This resolution helps us address the inequality questions we raised above. We start from familiar ground in economics, namely game theory, to develop a new conceptual microeconomic framework. This leads to surprising and useful insights about a deep connection between game theory and statistical mechanics, paving the way for a general theoretical framework that unifies the dynamics of purposeful animate agents with that of purpose-free inanimate ones.

2. Pay distribution in an ideal free market environment: formulating the problem

We follow Venkatasubramanian's [42] approach in formulating the problem and restate it here for the convenience of the reader. Consider the following gedankenexperiment in which we study a competitive, dynamic, free market environment comprising a large number of utility-maximizing rational agents as employees and profit-maximizing rational agents as corporations. We assume an ideal environment where the market is perfectly competitive, transaction costs are negligible, and no externalities are present. In this ideal free market, employees are free to switch jobs and move between companies in search of better utilities. Similarly, companies are free to fire and hire employees in order to maximize their profits. We do not consider the effect of taxes. We also assume that a company needs to retain all its employees in order to survive in this competitive market environment. Thus, a company will take whatever steps are necessary, allowed by its constraints, to retain all its employees. Similarly, all employees need a salary to survive, and they will do whatever is necessary, allowed by certain norms, to stay employed. We assume that neither the companies nor the employees engage in illegal practices such as fraud, collusion, and so on.

In this ideal free market, consider a company A with N employees and a salary budget of M, with an average salary of S_ave = M/N. Let us assume that there are n categories of employees, ranging from secretaries to the CEO, contributing in different ways towards the company's overall success and value creation. All employees in category i contribute value V_i, i ∈ {1, 2, ..., n}, such that V_1 < V_2 < ... < V_n. Let the corresponding value at S_ave be V_ave, occurring at category s. Since all employees contribute unequally, some more, some less, they all need to be compensated differently, commensurate with their relative contributions towards the overall value created by the company. Instead, A has an egalitarian policy that all employees are equal and therefore pays all of them the same salary, S_ave, irrespective of their contributions. The salary of the CEO is the same as that of an administrative assistant in the mail room. This salary distribution is a sharp vertical line at S_ave, as seen in Fig. 1(a), a Kronecker delta function.
As noted, while this may seem fair in a social or moral justice sense (this distribution has a Gini coefficient of 0, which according to the Gini measure is the fairest outcome, but more about this later in the paper), clearly it is not fair in an economic sense. If this were the only company in the economic system, or if A were completely isolated from other companies in the economic environment, the employees would be forced to continue to work under these conditions, as there would be no other choice. However, in an ideal free market system there are other choices. Therefore, all those employees who contribute more than the average, i.e., those in value categories V_i such that V_i > V_ave (e.g., senior engineers, vice presidents, the CEO), who feel that their contributions are not fairly valued and compensated by A, will be motivated to leave for other companies where they are offered higher salaries. Hence, in order to survive, A will be forced to match the salaries offered by others to retain these employees, thereby forcing the distribution to spread to the right of S_ave, as seen in Fig. 1(b). At the same time, the generous compensation paid to all employees in categories V_i such that V_i < V_ave will motivate candidates with the relevant skill sets (e.g., low-level administration, sales, and marketing staff) from other companies to compete for these higher-paying positions in A. This competition will eventually drive the compensation down for these overpaid employees, forcing the distribution to spread to the left of S_ave, as seen in Fig. 1(c). Eventually, we will have a distribution that is not a delta function but a broader one, where different employees earn different salaries depending on the values of their contributions as determined by the free market mechanism.


Table 2. Model predictions and sample empirical data for different countries.

| Country | Year | Currency | Minimum | Average | Maximum | µ | σ | Gini | Bottom 90% ideal | Top 10%–1% ideal | Top 1% ideal | Bottom 90% data | Top 10%–1% data | Top 1% data | Bottom 90% ψ | Top 10%–1% ψ | Top 1% ψ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Norway | 2011 | NOK | 139,152 | 319,391 | 3,874,714 | 12.52 | 0.55 | 0.30 | 76.6% | 19.5% | 3.8% | 71.7% | 20.5% | 7.8% | −6.5% | 5.1% | 104.2% |
| Sweden | 2012 | SEK | 159,630 | 249,760 | 2,580,353 | 12.32 | 0.46 | 0.26 | 79.3% | 17.5% | 3.1% | 72.1% | 20.8% | 7.1% | −9.1% | 18.4% | 128.1% |
| Denmark | 2008 | DKK | 146,919 | 202,502 | 1,747,643 | 12.13 | 0.41 | 0.23 | 80.8% | 16.5% | 2.8% | 73.8% | 20.1% | 6.1% | −8.6% | 22.2% | 117.4% |
| Switzerland | 2010 | CHF | 37,266 | 67,056 | 1,047,750 | 10.96 | 0.56 | 0.31 | 76.6% | 19.6% | 3.8% | 66.5% | 22.9% | 10.6% | −13.2% | 16.7% | 177.3% |
| Netherlands | 1999 | NLG | 28,142 | 63,557 | 416,931 | 10.96 | 0.45 | 0.25 | 79.7% | 17.2% | 3.0% | 71.9% | 22.7% | 5.4% | −9.8% | 31.7% | 77.8% |
| Australia | 2010 | AUD | 29,716 | 49,304 | 688,700 | 10.67 | 0.52 | 0.29 | 77.6% | 18.9% | 3.6% | 69.0% | 21.8% | 9.2% | −11.0% | 15.7% | 156.6% |
| France | 2006 | EUR | 15,885 | 26,872 | 366,769 | 10.06 | 0.52 | 0.29 | 77.6% | 18.8% | 3.6% | 67.2% | 23.9% | 8.9% | −13.4% | 26.7% | 150.5% |
| Germany | 2008 | EUR | 17,927 | 29,826 | 630,374 | 10.13 | 0.59 | 0.33 | 75.4% | 20.4% | 4.2% | 60.5% | 25.6% | 13.9% | −19.8% | 25.6% | 234.3% |
| Japan | 2010 | K JPY | 1,236 | 2,255 | 32,608 | 7.57 | 0.55 | 0.30 | 76.9% | 19.3% | 3.7% | 59.5% | 31.0% | 9.5% | −22.6% | 60.3% | 153.8% |
| Canada | 2000 | CAD | 11,036 | 24,859 | 528,156 | 9.91 | 0.64 | 0.35 | 73.8% | 21.6% | 4.6% | 58.9% | 28.0% | 13.2% | −20.2% | 29.7% | 184.3% |
| UK | 2011 | GBP | 11,324 | 19,217 | 365,130 | 9.70 | 0.58 | 0.32 | 75.9% | 20.1% | 4.0% | 60.9% | 26.2% | 12.9% | −19.8% | 30.5% | 221.0% |
| US | 2013 | USD | 15,131 | 52,619 | 1,424,300 | 10.58 | 0.76 | 0.41 | 70.0% | 24.2% | 5.8% | 53.0% | 29.5% | 17.5% | −24.3% | 21.9% | 200.7% |

Minimum, average, and maximum are annual incomes in the national currency; data from the WTI Database [75].

While it would be more accurate to use the 2-class version of our model for the comparison, as it represents the top ∼3% better, it is difficult to uniquely determine the parameters that define where the first population ends and the second begins. Therefore, we use the 1-class model as the ideal reference, which predicts a single lognormal distribution for the entire population. We fully expect real-life free market societies to deviate from ideality, but we are interested in understanding how big the deviations are and why. If the different countries were functioning as ideal free market societies, their income distributions would be lognormal, but with different means and variances depending on the minimum, average, and maximum annual incomes in each country, which determine the µ and σ of the corresponding lognormal distributions. As an example, we show these parameters in Table 2 for the different countries. The minimum, average, and maximum income data are obtained from the WTI Database. For the maximum income, we chose to use the threshold income at 99.9%. At this cutoff, the range from the minimum to the maximum income spans approximately six standard deviations of the underlying normal distribution of log income. Thus, we estimate σ and µ using the approximations

$$6\sigma \approx \ln(\text{Maximum income}) - \ln(\text{Minimum income}) \qquad (38)$$

$$\mu = \ln(\text{Average income}) - \frac{\sigma^2}{2} \qquad (39)$$

For the UK and the Netherlands, where the 99.9% threshold data is not available, we found that it is well approximated by the average salary of the top 0.5%, a heuristic we tested on the other countries where the threshold is known. For Switzerland, Sweden, Norway, and Denmark, there is no minimum wage requirement and hence that data is not available. For these countries, we consulted several country-specific sources to obtain guidelines about the typical minimum wage-like compensation for entry-level positions in recent years (2010–2014). We then used historic data on annual increases in the average income of the bottom 90% of the population to deflate and back-calculate the minimum wage for past years. Once µ and σ are known, we can uniquely determine the corresponding lognormal distribution and compute the income shares of the bottom 90%, the top 10%–1%, and the top 1%. Since µ and σ typically vary from year to year (because the minimum, average, and/or maximum income change), the ideal distribution of income to these three segments, as predicted by the model, also varies from year to year, though not by much. As an example, these values are displayed in Table 2 for the twelve countries (which are commonly used examples [8]), for the years shown. Now, we know from empirical data that the free market economies of the Scandinavian countries are generally fairer in the economic treatment of all their citizens, not just the wealthy ones. We also know that the US does not do as well. We further know that other Western European countries, such as France, Germany, and Switzerland, are somewhere in between. Can the model predict these outcomes? That is, just by knowing the minimum, average, and maximum incomes in a free market society, can the model identify how fair that society is? To test the model along these lines, we define a new index of inequality that uses the ideal lognormal distribution (the one appropriate for the country under consideration) as the reference. This new measure, called the Nonideal Inequality Coefficient, ψ, is defined as:

$$\psi = \frac{\text{Actual share}}{\text{Ideal share}} - 100\% \qquad (40)$$

ψ measures the level of nonideal inequality in the system. When ψ is zero, the system has the ideal level of inequality, the fairest inequality. When ψ is small, the level of inequality is almost ideally fair; when it is large, the inequality is more unfair.
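As an illustration of this procedure, the following minimal Python sketch reproduces the Norway row of Table 2 from only the minimum, average, and maximum incomes, using equations (38)-(40). The Lorenz-curve identity L(q) = Φ(Φ⁻¹(q) − σ) and the Gini formula 2Φ(σ/√2) − 1 used below are standard properties of the lognormal distribution, not equations taken from this paper; the input values are those listed in Table 2.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_ppf(p, lo=-10.0, hi=10.0):
    """Inverse standard normal CDF by bisection (ample accuracy here)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ideal_mu_sigma(min_inc, avg_inc, max_inc):
    """Equations (38)-(39): parameters of the ideal lognormal distribution."""
    sigma = (math.log(max_inc) - math.log(min_inc)) / 6.0
    mu = math.log(avg_inc) - sigma**2 / 2.0
    return mu, sigma

def lorenz(q, sigma):
    """Share of total income earned by the bottom fraction q of a lognormal."""
    return norm_cdf(norm_ppf(q) - sigma)

def psi(actual, ideal):
    """Equation (40): nonideal inequality coefficient, in percent."""
    return 100.0 * actual / ideal - 100.0

# Norway, 2011 (NOK), from Table 2
mu, sigma = ideal_mu_sigma(139_152, 319_391, 3_874_714)
bottom90 = lorenz(0.90, sigma)                  # ideal share of bottom 90%
top1 = 1.0 - lorenz(0.99, sigma)                # ideal share of top 1%
top10_1 = 1.0 - bottom90 - top1                 # ideal share of top 10%-1%
gini = 2.0 * norm_cdf(sigma / math.sqrt(2.0)) - 1.0

print(f"mu = {mu:.2f}, sigma = {sigma:.2f}, Gini = {gini:.2f}")    # 12.52, 0.55, 0.30
print(f"ideal shares: {bottom90:.1%}, {top10_1:.1%}, {top1:.1%}")  # ~76.6%, ~19.5%, ~3.8%
print(f"psi(bottom 90%) = {psi(0.717, bottom90):+.1f}%")           # ~ -6.4% (Table 2 lists -6.5%, from unrounded data)
```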


Fig. 4. Global income inequality trends over the years: ψ, ideal vs. reality. Blue lines: bottom 90%; green lines: top 10%–1%; red lines: top 1%; black dashed lines: 0% ideal reference.

We computed the predicted income shares for the different countries for the period from ∼1920 to ∼2012, depending on the availability of data in the WTI Database. We then computed ψ for the three segments (bottom 90%, top 10%–1%, top 1%) for these countries, annually, for the corresponding time periods (sample values are shown in Table 2). These are plotted in Fig. 4. If a country were functioning as an ideal free market system, as defined by our theory, the corresponding ψ values for the three segments would all be 0. This reference is shown as the 0% line (black dashed line) in the plots. As one can see, the model's predictions are in general agreement with what is known about these countries regarding their inequalities. The twelve countries are shown, roughly, in order of generally increasing inequality according to our model. Our objective here is not to rank them in a strict order but to show how the different countries have deviated from ideality. Let us examine what these charts tell us. As expected, none of them is ideal, but there are some pleasant surprises. Consider Norway as an example. Its bottom 90% and top 10%–1% income shares have been remarkably close to the ideal values over the last 20 years.


Statistical Teleodynamics: Toward a Theory of Emergence

Venkat Venkatasubramanian*

Department of Chemical Engineering, Center for the Management of Systemic Risk, Columbia University, New York

E-mail: [email protected]

Abstract

The central scientific challenge of the 21st century is developing a mathematical theory of emergence that can explain and predict phenomena such as consciousness and self-awareness. The most successful research program of the 20th century, reductionism, which goes from whole to parts, seems unable to address this challenge. This is because addressing this challenge inherently requires an opposite approach, going from parts to whole. In addition, reductionism, by the very nature of its enquiry, typically doesn't concern itself with teleology or purposeful behavior. Modeling emergence, in contrast, requires addressing teleology. These two requirements, together, present a formidable challenge in developing a successful mathematical theory of emergence. In this paper, I describe a new theory of emergence, called statistical teleodynamics, which addresses certain aspects of the general problem. Statistical teleodynamics is a mathematical framework that unifies three seemingly disparate domains (purpose-free entities in statistical mechanics, human-engineered teleological systems in systems engineering, and nature-evolved teleological systems in biology and sociology) within the same conceptual formalism. This theory rests on several key conceptual insights, the most important one being the recognition that entropy models mathematically the concept of fairness in economics and philosophy and, equivalently, the concept of robustness in systems engineering. These insights help prove that the fairest inequality of income is a lognormal distribution, which will emerge naturally at equilibrium in an ideal free market society. Similarly, the theory predicts the emergence of the three classes of network organization (exponential, scale-free, and Poisson) seen widely in a variety of domains. Statistical teleodynamics is the natural generalization of statistical thermodynamics, the most successful parts-to-whole systems theory to date, but this generalization is only a modest step toward a more comprehensive mathematical theory of emergence.

1. Going from Parts to Whole: Background

Statistical thermodynamics is the conceptual framework for predicting certain key aspects (such as the equilibrium energy distribution) of the macroscopic behavior of a thermodynamic system given certain properties of the millions of individual entities (e.g., molecules) that make up the system. As "prisoners" of Newton's laws and conservation principles, molecules do not have control over their "fates", their dynamical trajectories; their dynamical evolution is the result of thermal agitation caused by incessant intermolecular collisions. Nor do the molecules have "free will" that would make them "desire" one path over another. But what if they did? What if the entities possessed free will and had the ability to decide what to do next? What would be a statistical mechanics-like framework for predicting the macroscopic behavior of a large collection of such rational, intelligent, goal-driven agents?

This is the question I asked myself, in 1983, as I was writing my doctoral thesis, under the guidance of Professor Keith Gubbins at Cornell, on the application of statistical mechanics to the prediction of vapor-liquid properties of polar mixtures. At that time, I had also become interested in artificial intelligence (AI), which I pursued subsequently upon graduation as a post-doctoral researcher in AI at Carnegie Mellon University. So, as I thought about AI entities and their behavior, this question arose in my mind. As I couldn't find the framework I was looking for in the literature at that time, I went about developing one in the following years, not realizing that this odyssey would take me thirty-four years! This led to a series of papers 1–6 that culminated in my book 7 in 2017. Progress came only in fits and starts, in a series of six key insights, which I call my "happy thought" moments, with years of quiescence in between.

What I was after (and I am still on this quest) were the fundamental principles and the mathematical framework that could help predict the macroscopic properties and behavior of a teleological system given the microscopic properties of its rational entities, i.e., going from parts to whole. We know how to do this, in many cases but not all, if the entities are non-rational or purpose-free (such as molecules); that is what statistical mechanics is all about. But what about rational entities, which exhibit purposeful, goal-driven behavior?

Going from parts to whole is the opposite of the reductionist paradigm that was so successful in 20th century science. In the reductionist paradigm, one tries to understand and predict macroscopic properties by seeking deeper, more fundamental mechanisms and principles that cause such properties and behaviors. Reductionism is a top-down framework, starting at the macro-level and then going deeper and deeper, seeking explanations at the micro-level, the nano-level, and even deeper levels, i.e., from whole to parts. Physics and chemistry were spectacularly successful pursuing this paradigm in the last century, giving birth to quantum mechanics, the general theory of relativity, quantum field theory, the Standard Model, string theory, and so on. Even biology pursued this paradigm to phenomenal success and showed how heredity can be explained by molecular structures and phenomena such as the double helix and the central dogma.

But many grand challenges that we face in 21st century science are bottom-up phenomena, going from parts to whole. Examples include predicting phenotype from genotype, predicting the effects of human behavior on global climate, and predicting consciousness and self-awareness, quantitatively and analytically.


Can quantum mechanics or the Standard Model predict the emergence of life? A single neuron is not self-aware, but when 100 billion of them are organized in a particular configuration, the whole system is spontaneously self-aware. How does this happen? Or, even more ambitiously, can we systematically predict the emergence of the chain of domains from physics to chemistry to biology to sociology to economics and beyond, with all their rich details? A comprehensive theory of emergence, if it exists, ought to be able to do this, at least at some level of detail that can be considered satisfactory. Such a theory would be the Theory of Everything, and not, in my opinion, the elusive grand unified theory of the four fundamental forces that theoretical physicists typically like to claim.

Reductionism, by its very nature, cannot help us solve these problems, because addressing this challenge requires the opposite approach, going from parts to whole. In addition, reductionism, by the very nature of its enquiry, typically doesn't concern itself with teleology or purposeful behavior. Modeling bottom-up phenomena, in contrast, requires addressing this important feature, as teleology-like properties often emerge at the macroscopic level, either explicitly or implicitly, even in the case of purpose-free entities. So, we need a new paradigm, a bottom-up analytical framework, a constructionist or emergentist approach, the opposite of the reductionist perspective, to go from parts to whole.

By a bottom-up constructionist framework, I do not mean the mere discovery of hidden patterns, as in the performance of a "deep learning" neural network that discerns complex statistical correlations in huge amounts of data. I mean a comprehensive mathematical framework that can explain and predict macroscopic behavior and phenomena from a set of fundamental principles and mechanisms: a theory that can not only predict the important qualitative and quantitative emergent macroscopic features, but can also explain why and how these features emerge and why other outcomes do not. The current "deep learning" AI systems lack this ability. Developing such a theoretical framework is what I was (and still am) after all these years. Maybe this is an impossible dream, but it is too early to give up on it.


While emergent phenomena are also seen in systems involving purpose-free entities, as in pattern formation in physical systems, my focus has been on emergence involving purpose-driven agents. After nearly two decades of almost no progress, in 2003 I had my first break, when I started thinking about the design of self-organizing adaptive complex networks for optimal performance in a given environment. Our work 1 on this topic gave me the first insight: how the microstructure of a network, i.e., the entity- or part-level properties such as the vertex degree in a graph, is related to its macroscopic, system-level properties, such as its survival purpose, and how the environment plays a critical role in determining the optimal balance between efficiency and robustness trade-offs in design through network topology. This study led me, in 2004, to investigate the maximum entropy framework in the design of such networks. 2 This resulted in the formulation of my initial theoretical ideas about a new constructionist or emergent framework, in 2007, which I have named statistical teleodynamics. 3 The name comes from telos, a Greek word meaning goal, purpose, or end. Just as the dynamics of molecules is driven by thermal agitation, leading to thermodynamics, the dynamics of rational entities (such as people) is driven by their goals, giving us teleodynamics. Because of the stochastic nature of this dynamics, we have statistical teleodynamics.1

As we know, statistical mechanics or statistical thermodynamics is the theory of the interactions and emergent behavior of millions of goal-free entities, namely molecules. It is a theory of chance interactions; probability theory is at its foundation. At the other extreme, for goal-driven strategic interactions, the standard conceptual framework for modeling is game theory. Game theory arose from the study of games of strategy, such as chess, whereas probability theory arose from the study of games of pure chance, as in gambling. In our 2015 paper, 6 we showed that for a class of games called potential games, played by millions of rational agents, the game of strategy becomes essentially equivalent to a game of chance, thereby unifying game theory and statistical mechanics and yielding statistical teleodynamics.

1. It turns out that Professor Terrance Deacon of the University of California, Berkeley, had also coined the term teleodynamics independently. However, I was the first to define and use the term statistical teleodynamics.


While many elements of this theory have appeared elsewhere, 1–7 the background, organization, presentation, and discussion in this paper contain new material. Furthermore, as a summary of six papers and a book, I believe this paper will be helpful to a new reader who is not familiar with the literature on complexity science, emergence, teleological systems, game theory, information theory, systems engineering, economics, and political philosophy, all of which are drawn upon in my theory.

Normally, Langmuir wouldn't be the right venue for this contribution, but what makes it suitable in the present context, in my opinion, is that this issue is a festschrift devoted to Professor Keith Gubbins. It seems appropriate to publish it here, as my long pursuit started in his group. In addition, festschrifts generally enjoy a certain amount of latitude in their choice of papers and topics that goes beyond the narrow scope of the journal under consideration. Further, this paper is on statistical thermodynamics, a central topic for Langmuir, and its generalization to statistical teleodynamics. Hence, I felt comfortable submitting this contribution to this special issue. Also, since this is a festschrift, I have adopted a narrative style that is more personal and reflective than what I normally practice in my scientific papers. I thought that this narrative style, providing some background information on how the progress came about, what the key turning points were, and so on, might make it a more interesting read.

In the following sections, I will summarize the key aspects of statistical teleodynamics, starting with a gedankenexperiment that suggests how to attack mathematically the central question raised above. Then I will discuss the potential game formulation, its solution, and its deep connection with entropy and statistical mechanics. I will then discuss the surprising insight that entropy really is a measure of fairness in a distribution. This insight has far-reaching consequences in economics, sociology, and political philosophy, and for the important question of income inequality. I will compare the theory's predictions with real-life income distribution data from different countries and conclude by discussing future directions.


2. Dynamic Behavior of Goal-driven Rational Agents

As we generalize statistical thermodynamics for rational agents, it is important to recognize that the critical new feature is that the entities pursue goals. Molecules, obviously, do not have this feature. It was not at all clear how to incorporate this feature, naturally and mathematically, into the statistical mechanics framework. This was a major conceptual challenge for me for a long time. It was not even clear what kind of questions I should be asking to facilitate this new formulation. But I knew intuitively, all through this struggle, that the key challenges would be conceptual and not mathematical, because that is how they usually are. One has many such examples in the births of quantum mechanics, statistical mechanics, relativity, and so on. All these required major conceptual struggles over many years, but the mathematical tools were already available and ready to be put to use once the conceptual difficulties were resolved. For example, matrices, probability theory, and partial differential equations, the main tools of quantum mechanics, had been around for a long time; the main challenges were conceptual. The key breakthrough that paved the way for thermodynamics was the concept of entropy. Subsequently, in its statistical formulation, it was the conceptual insight of Boltzmann that entropy is related to multiplicity through his famous S = k_B ln W. The mathematics of statistical thermodynamics, namely probability theory, had been around for quite a while. Similarly for relativity, both the special and the general theory. The mathematics of the special theory is just elementary high school algebra, but the conceptual challenges and breakthroughs were colossal. The general theory required much more sophisticated mathematics, to be sure, but it was readily available, thanks to Riemann. The central challenge for Einstein was getting the concepts correct, which he did through his "happy thought" moment that led to his discovery of the principle of equivalence. With only one exception, mathematics was never the limiting factor; the concepts were. That one instance in the history of physics, when the mathematical framework was missing along with the need for a conceptual breakthrough, was the discovery of the theory of gravitation.


Besides the conceptual breakthrough of universal gravitation, Newton also had to develop the mathematical tool needed, namely calculus. But this is the only exception that I know of. Just to be clear, I wish to emphasize that by citing such examples I am not suggesting, even though it might seem like I am, that my theory is anywhere close to these monumental contributions. I only wish to observe that my understanding of the history of the births of these fields gave me much-needed guidance, encouragement, and hope in my long, often confusing and seemingly futile, search. Finally, the breakthroughs came in 2009-2014, in three key conceptual insights (entropy as fairness, the critical question to ask, and potential as entropy), as I discuss below, which led to the papers in 2009, 4 2010, 5 and 2015. 6 In retrospect, the resulting mathematical formulation and its solution seem so obvious to me that I have often wondered why it took me so long or why others missed it. Then again, that's how it usually is!

We now start the mathematical formulation by giving the agents a new property, utility or profit, which they wish to maximize. Now consider the gedankenexperiment summarized below (a much more detailed discussion can be found in my book 7), where we analyze a dynamic, free market environment comprising a large number of utility-maximizing rational agents as employees and profit-maximizing rational agents as corporations. Let us assume an ideal environment where the free market is perfectly competitive, transaction costs are negligible, and no externalities are present. In our ideal free market, employees are free to switch jobs and move between companies in search of better utilities. Similarly, companies are free to fire and hire employees in order to maximize their profits. We also assume that a company needs to retain all its employees in order to survive in this competitive market environment. Thus, a company will take whatever steps are necessary, allowed by its constraints, to retain its employees. Similarly, all employees need a salary to survive, and they will do whatever is necessary, allowed by certain norms, to stay employed.


In this ideal free market, consider a company with N employees and a payroll budget of M, with an average salary of S_ave = M/N. We assume that there are n categories of employees, ranging from low-level employees to the CEO, contributing in different ways towards the company's overall success and value creation. All employees in category i contribute value V_i, i ∈ {1, 2, ..., n}, such that V_1 < V_2 < ... < V_n. We assume this value or skills spectrum to be continuous, with no breaks in between, at a level of granularity that is appropriate for this problem. Let the corresponding value at S_ave be V_ave, occurring at category s. Since the employees contribute unequally, some more, some less, they need to be compensated differently, commensurate with their relative contributions towards the overall value created by the company. Instead, the company has an egalitarian policy that all employees are equal, and therefore pays all of them the same salary, S_ave, irrespective of their contributions. The salary of the CEO is the same as that of a janitor. This salary distribution is a sharp vertical line at S_ave, as seen in Figure 1(a), a Kronecker delta function.

What would happen next? This problem formulation, together with this question, was my key conceptual insight of 2010 that I noted above. This insight tied together, naturally, the concepts of utility or profit maximization, fairness, survival, competition, dynamical evolution, equilibrium, and stability, as we discuss below.

While the equality of pay may seem fair in a social or moral justice sense (this distribution has a Gini coefficient of 0), clearly it is not fair in an economic sense. If this were the only company in the economic system, or if it were completely isolated from other companies in the economic environment, the employees would be forced to continue to work under these conditions, as there would be no other choice. However, in an ideal free market system there are choices. Therefore, employees who contribute more than the average, i.e., those in value categories V_i such that V_i > V_ave (e.g., senior engineers, vice presidents, the CEO), who feel that their contributions are not fairly valued and compensated, will be motivated to leave for other companies where they are offered higher salaries. Hence, in order to survive, the company will be forced to compete and match the salaries offered by others to retain these employees, thereby forcing the distribution to spread to the right of S_ave, as seen in Figure 1(b).


Figure 1: Spreading of the pay distribution under competition in an ideal free market environment. Panels plot the number of employees vs. salary: (a) delta distribution at S_ave, (b) spreading to the right of S_ave, (c) spreading to the left of S_ave (adapted from Venkatasubramanian et al. 6).

At the same time, the generous compensation paid to all employees in categories V_i such that V_i < V_ave will motivate candidates with the relevant skill sets (e.g., low-level administration, sales, and marketing staff) from other companies to compete for these higher-paying positions. This competition will eventually drive the compensation down for these overpaid employees, forcing the distribution to spread to the left of S_ave, as seen in Figure 1(c). Eventually, we will have a distribution that is not a delta function, but a broader one where different employees earn different salaries depending on the values of their contributions as determined by the free market mechanism. The funds for the higher salaries now paid to the formerly underpaid employees (i.e., those who satisfy V_i > V_ave) come out of the savings resulting from the reduced salaries of the formerly overpaid group (i.e., those who satisfy V_i < V_ave), thereby conserving the total salary budget M.

Thus, we see that concerns about fairness in pay cause the natural emergence of a more equitable salary distribution in a free market environment through its self-organizing, adaptive, evolutionary dynamics. The point of this analysis is not to model the exact details of the free market dynamics but to show that the notion of fairness plays a central role in driving the emergence and spread of the salary (in general, utility) distribution through the free market mechanism.


Even though an individual employee cares only about her own utility and no one else's, the collective survival actions of all the employees, combined with the profit-maximizing survival actions of all the companies, in an ideal competitive free market environment of supply and demand for talent under resource constraints, lead toward a fairer allocation of pay, guided by Adam Smith's "invisible hand" of self-organization.

Given this free market dynamics scenario, three important questions arise: (i) Will this self-organizing free market dynamics lead to an equilibrium salary distribution, or will the distribution continually evolve without ever settling down? (ii) If there exists an equilibrium salary distribution, what is it? (iii) Is this distribution fair?

3. A Potential Game Theoretic Framework

We address these questions by developing a microeconomic framework based on potential game theory. Let us now examine the basic question of why people seek employment. At the most fundamental, survival, level, it is to be able to pay the bills now and make a living, with the hope that the current job will lead to a better future: one acquires experience in the present job, which leads to a better job next, to a series of better jobs in the future, and hence to a better life. Thus, the utility derived from a job is made up of two components: the immediate benefit of making a living (i.e., "present" utility) and the prospect of a better future life (i.e., "future" utility). There is, of course, the cost or disutility of effort to be accounted for as well. Hence, we propose that the overall utility from a job is determined by three dominant elements: (i) utility from salary, (ii) disutility from effort, and (iii) utility from a fair opportunity for future prospects. By effort, we do not mean just the physical effort alone, as we discuss below, even though that is a part of it.


Thus, the overall utility for an agent is given by:

$$h_i(S_i, E_i, N_i) = u_i - v_i + w_i \qquad (1)$$

where h_i is the total utility of an employee earning a salary S_i by expending an effort E_i, while competing with (N_i − 1) other agents in the same job category i for fair recognition of her contributions. Here u(·) is the utility derived from salary, v(·) is the disutility from effort, and w(·) is the utility from a fair opportunity for a better future. Every agent tries to maximize its total utility by picking an appropriate job category i.

3.1 Utility of a Fair Opportunity for a Better Future

The first two elements are rather straightforward to appreciate, but the third requires some discussion. Consider the following scenario. A group of freshly minted law school graduates (totaling N_i) have just been hired by a prestigious law firm as associates. They have been told that one of them will be promoted to partner in eight years, depending on their performance. Let us say that the partnership is worth $Q. Any associate's chance of winning the coveted partnership then goes as 1/N_i, where N_i is the number of associates in her peer group i, her local competition. Therefore, her expected value for the award is Q/N_i, and the utility derived from it goes as ln(Q/N_i) because of diminishing marginal utility. Diminishing marginal utility is a well-known concept in economics which states that the incremental benefit derived from consuming an additional unit of a resource decreases as one consumes more and more of it. This property is usually modeled by a logarithmic function. Therefore, the utility derived from a fair opportunity for recognition and career advancement in a competitive environment is:

$$w_i(N_i) = -\gamma \ln N_i \qquad (2)$$


where γ is a parameter. This equation models the effect of competitive interaction between agents. Considering society at large, this equation captures the notion that in a fair society, an appropriately qualified agent with the necessary education, experience, and skills should have a fair shot at growth opportunities irrespective of her race, color, gender, and other such factors, i.e., it is a fair competitive environment. This is the utility derived from equality of access to a better life, from having a fair shot at upward mobility. The category i would correspond to her qualification category in the society. The other agents in that category are the ones she will be competing with for these growth opportunities.
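To make the ln(Q/N_i) argument concrete, here is a minimal sketch (the prize value Q and the group sizes are hypothetical, chosen only for illustration): the expected prize falls as 1/N_i, so its log-utility separates into a constant ln Q minus ln N_i, which is exactly the −γ ln N_i term of equation (2), with γ absorbing the scaling.

```python
import math

Q = 1_000_000.0  # hypothetical value of the partnership prize, $
for N in (2, 5, 10, 50):
    expected_prize = Q / N              # fair chance of winning: 1/N each
    utility = math.log(expected_prize)  # diminishing marginal utility of the prize
    # ln(Q/N) = ln(Q) - ln(N): only the -ln(N) part depends on the competition
    print(f"N={N:3d}  E[prize]={expected_prize:12,.0f}  "
          f"utility={utility:6.3f}  crowding term={-math.log(N):7.3f}")
```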

3.2 Modeling the Disutility of a Job

For the utility derived from salary, we employ the commonly used logarithmic utility function:

$$u_i(S_i) = \alpha \ln S_i \qquad (3)$$

As for the second element, every job has a certain disutility associated with it. This disutility depends on a host of factors, such as the investment in education needed to qualify for the job, the experience to be acquired, working hours and schedule, quality of work, work environment, company culture, relocation anxieties, etc. To model this, one can combine u and v to compute

$$u_{\mathrm{net}} = au - bv$$

(where a and b are positive constant parameters), which is the net utility (i.e., net benefit or gain) derived from a job after accounting for its cost. Typically, net utility will increase as u increases (because of a salary increase, for example). Generally, however, after a point the cost has increased so much, due to personal sacrifices such as working overtime, missing quality time with family, giving up hobbies, and job stress resulting in poor mental and physical health, that u_net begins to decrease after reaching a maximum. The simplest model of this commonly occurring inverted-U profile is a quadratic function, as in

$$u_{\mathrm{net}} = au - bu^2.$$

Since u ∼ ln(Salary), we get equation (4):

$$v_i(E_i) = \beta (\ln S_i)^2 \qquad (4)$$

We believe this is the simplest expression that captures this typical, widely observed behavior of net utility, increasing first and then decreasing.
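As a quick numerical illustration of the inverted-U profile (with hypothetical parameter values, not taken from the paper): the quadratic net utility a·u − b·u² in u = ln S peaks at u* = a/(2b), i.e., at the salary S* = exp(a/(2b)), and declines on either side of it.

```python
import math

a, b = 2.2, 0.1          # hypothetical preference parameters
u_star = a / (2 * b)     # maximizer of a*u - b*u^2 (set derivative a - 2*b*u = 0)
S_star = math.exp(u_star)

for S in (10_000, 30_000, S_star, 150_000, 500_000):
    u = math.log(S)
    u_net = a * u - b * u**2
    print(f"S = {S:10,.0f}   u_net = {u_net:.3f}")
print(f"peak at S* = exp(a/(2b)) = {S_star:,.0f}")
```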

3.3 Total Utility of a Job

Combining all three elements, we have

$$h_i(S_i, E_i, N_i) = \alpha \ln S_i - \beta (\ln S_i)^2 - \gamma \ln N_i \qquad (5)$$

where α, β, γ > 0. In general, α, β, and γ, which model the relative importance an agent assigns to these three elements, can vary from agent to agent. However, we first examine the ideal situation where all agents have the same preferences and hence treat these as constant parameters.

In order to move to a job with better utility, an agent needs job offers. So, the employee agents constantly gather information and scout the market, and their own companies, for job openings that are commensurate with their skill sets, experiences, and career and personal goals. Similarly, the company agents (through their human resources departments, for example) conduct similar searches, looking for opportunities to fire and hire employees so that their profits may be improved.
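The following sketch evaluates the utility of equation (5) and runs a toy better-response process in which agents repeatedly move to a category offering higher utility. The salary grid and the parameter values are hypothetical choices made only for illustration; the point is the convergence behavior, anticipating the equilibrium result derived in the next section.

```python
import math
import random

alpha, beta, gamma = 4.0, 0.18, 1.0           # hypothetical preferences
S = [25_000 * 1.25**k for k in range(12)]     # salary categories (hypothetical)

def utility(i, counts):
    """Equation (5): h_i = alpha*ln(S_i) - beta*(ln S_i)^2 - gamma*ln(N_i)."""
    lnS = math.log(S[i])
    return alpha * lnS - beta * lnS**2 - gamma * math.log(max(counts[i], 1))

random.seed(1)
N = 10_000
counts = [0] * len(S)
agents = [random.randrange(len(S)) for _ in range(N)]
for i in agents:
    counts[i] += 1

# better-response dynamics: an agent moves whenever an alternative category
# would give it strictly higher utility
for _ in range(200_000):
    a = random.randrange(N)
    i, j = agents[a], random.randrange(len(S))
    if i == j:
        continue
    u_stay = utility(i, counts)   # N_i includes the agent itself
    counts[j] += 1
    u_move = utility(j, counts)   # N_j + 1 counts the agent after moving
    if u_move > u_stay:
        agents[a] = j
        counts[i] -= 1
    else:
        counts[j] -= 1            # revert the tentative move

print([round(c / N, 3) for c in counts])  # settles near a fixed distribution
```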


We are now ready to answer the questions: (i) Is there an equilibrium salary distribution? (ii) If yes, what is it? There are two ways to answer these questions. One is by using a statistical mechanics-information theoretic framework, which is what I proposed in 2009-2010. 4,5 The other is to use game theory, as we did in our 2015 paper, 6 which I will summarize next. I will then show the deep connection between the two.

4. Potential Game Formulation of Free Market Dynamics

Before we jump into the technical details, let me summarize the key game theoretic concepts for non-experts. The problem we have from Section 2 is the following: Given a set of strategically interacting rational agents (called players), where each agent tries to decide and execute the best possible course of action that maximizes the agent's outcome (called payoff or utility) in light of the similar strategies executed by the other agents, we are interested in predicting what strategies will be executed and what outcomes are likely. In other words, what are the possible outcomes of the strategic behaviors of the employees and the companies in our thought experiment? Game theory is a mathematical framework for answering this, by systematically analyzing and predicting the decision-making strategies, the dynamic behavior, and the likely payoffs of players, using models of conflict and cooperation. Just as probability theory originated from the study of pure gambling without strategic interaction, game theory originated from the formal study of games of strategy, such as chess. The concept of an equilibrium, often called a Nash equilibrium, whose existence or nonexistence can be predicted through a systematic analysis of the game, is one of the most fundamental results in game theory, with far-reaching implications in economics, sociology, political science, biology, and engineering. All these disciplines are riddled with situations where one is interested in determining the eventual outcome of a set of players making strategic decisions; one wants to know whether this ultimate outcome is an equilibrium state or not.


This is exactly the question we have in our problem above. Interestingly, it is the same question one has when dealing with a large number of entities that interact not strategically but randomly, such as the molecules of a gas enclosed in a container. We want to know what the equilibrium state of the gas is and what criterion determines it. This problem is solved, as we know, by using statistical mechanics, which is founded on probability theory. As noted, probability theory originated from the study of games of chance, whereas game theory originated from games of strategy. Since both mathematical frameworks define and determine an equilibrium state, it is natural to ask whether there is any connection between the two approaches. As it turns out, there is indeed a deep, beautiful, and hitherto unknown connection, with surprising and useful insights, as we shall see.

Among the many classes of games, there is a class called population games that is particularly relevant to the situation described in our gedankenexperiment. This class studies games with a large number of players, as in our example. It provides an analytical framework for studying the strategic interactions of a population of agents with certain properties, as described by Sandholm 8, which are satisfied in our gedankenexperiment. In games like the Prisoner's Dilemma, with a small number of players, the typical game theoretic analysis proceeds by systematically evaluating all the options every player has, determining their payoffs, writing down the payoff matrix, identifying the best responses of all players, identifying the dominant strategies (if present) for everyone, and then finally reasoning about whether there exists a Nash equilibrium (or multiple equilibria) or not. However, for population games, given the large number of players, this procedure is not feasible. Fortunately, as it turns out, for some population games one can identify a single scalar-valued global function, called a potential, that captures the necessary information about payoffs. The gradient of the potential is the payoff or utility. Such games are called potential games. 8–10 For such games, a Nash equilibrium is reached when the potential is maximized.


With this quick introduction to potential games, we now turn our attention back to our gedankenexperiment to answer the questions: Is there an equilibrium income distribution? If yes, what is it? Recall that I had already answered these questions, in 2009-2010, using a statistical mechanics-information theoretic approach. 4,5 But I knew intuitively that there should be a game theoretic solution as well. Since I had already solved this problem, I knew precisely what to look for in the game theoretic formalism. This gave me a tremendous advantage. For example, I knew that entropy had to play a central role in the game theoretic formalism I was seeking. I knew that there must be an entropy-like quantity that is maximized at equilibrium. One is usually not blessed with the good fortune of knowing so much about a problem one is trying to solve. So, when my collaborators in this study, Jay Sethuraman and Yu Luo, and I started the search for a game theoretic solution in late 2013, we zeroed in on potential games pretty quickly and knew that the potential here must correspond to entropy. This was a great insight that readily paved the way for the solution. We were quickly able to prove, in early 2014, that the equilibrium distribution was lognormal, as I had predicted in my 2009-2010 papers. The math part was easy, just as I had anticipated, but the challenge was in the concepts: in deriving, interpreting, and explaining what the different terms in the utility function meant, and so on. This part took me nearly three years, 2014-2017, and is explained more fully in my book. 7

So, using the potential game formalism, we have an employee's utility as the gradient of the potential φ(x), i.e.,

$$h_i(\mathbf{x}) \equiv \frac{\partial \phi(\mathbf{x})}{\partial x_i} \qquad (6)$$

where x_i = N_i/N and x is the population vector. Therefore, by integration (we replace the partial derivative with the total derivative because h_i(x) can be reduced to h_i(x_i), as expressed in Equations (1)-(4)),

$$\phi(\mathbf{x}) = \sum_{i=1}^{n} \int h_i(\mathbf{x})\, dx_i \qquad (7)$$

We observe, using equation (5), that our game is a potential game with the potential function

$$\phi(\mathbf{x}) = \phi_u + \phi_v + \phi_w + \text{constant} \qquad (8)$$

where

$$\phi_u = \alpha \sum_{i=1}^{n} x_i \ln S_i \qquad (9)$$

$$\phi_v = -\beta \sum_{i=1}^{n} x_i (\ln S_i)^2 \qquad (10)$$

$$\phi_w = \frac{\gamma}{N} \ln \frac{N!}{\prod_{i=1}^{n} (N x_i)!} \qquad (11)$$

and where we have used Stirling's approximation in equation (11). We can show that φ(x) is strictly concave:

$$\frac{\partial^2 \phi(\mathbf{x})}{\partial x_i^2} = -\frac{\gamma}{x_i} < 0 \qquad (12)$$

Therefore, a unique Nash equilibrium for this game exists, where φ(x) is maximized, as per the well-known theorem in potential games. 11 Thus, the self-organizing free market dynamics, where employees switch jobs, and companies switch employees, in search of better utilities or profits, ultimately reaches an equilibrium state with an equilibrium income distribution. This happens when the potential φ(x) is maximized. This immediately raises two questions: (i) What is the equilibrium income distribution? and (ii) What is the economic meaning of the potential function?
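A small numerical check of this result is sketched below (the parameter values and the salary grid are hypothetical). With Stirling's approximation, φ_w reduces to −γ Σ x_i ln x_i up to an additive constant, so the potential can be evaluated directly. The closed-form maximizer x_i ∝ exp[(α ln S_i − β(ln S_i)²)/γ] is compared against random feasible distributions; it is also algebraically identical to the lognormal form of equation (18) derived below.

```python
import numpy as np

alpha, beta, gamma = 4.0, 0.18, 1.0           # hypothetical preferences
S = np.linspace(20_000, 500_000, 400)         # salary categories (hypothetical)
lnS = np.log(S)

def potential(x):
    # phi_u + phi_v + phi_w, with Stirling's approximation:
    # (gamma/N) ln(N!/prod (N x_i)!) ~ -gamma * sum x_i ln x_i  (+ constant)
    return np.sum(x * (alpha * lnS - beta * lnS**2)) - gamma * np.sum(x * np.log(x))

# closed-form maximizer of the strictly concave potential
w = np.exp((alpha * lnS - beta * lnS**2) / gamma)
x_star = w / w.sum()

# identical weights, rewritten in the lognormal form of Eq. (18)
m = (alpha + gamma) / (2 * beta)
w18 = (1.0 / S) * np.exp(-((lnS - m) ** 2) / (gamma / beta))
assert np.allclose(x_star, w18 / w18.sum())

# no random feasible distribution should beat x_star
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.dirichlet(np.ones(S.size))
    assert potential(x) < potential(x_star)
print("potential is maximized by the lognormal-shaped x*")
```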


4.1 Connection with Statistical Thermodynamics

Before we answer these questions, let us reiterate the important observation we made earlier: readers familiar with statistical mechanics will recognize the potential component φ_w as entropy (except for the missing Boltzmann constant k_B). This identification suggests that maximizing the payoff potential at the game theoretic equilibrium corresponds to maximizing entropy at the statistical mechanical equilibrium, thereby revealing a deep and useful connection between these seemingly different conceptual frameworks.

This connection suggests that one may view the statistical mechanics of molecular dynamics from a potential game perspective. In this view, the molecules are restless agents in a game (let's call it the thermodynamic game), continually jumping from one energy state to another through intermolecular collisions. However, unlike employees, who are continually driven to switch jobs in search of the better utilities they desire, molecules are not teleological, i.e., not goal-driven, in their constant search. As "prisoners" of Newton's laws and conservation principles, constantly subjected to intermolecular collisions, their search and dynamical evolution is the result of thermal agitation.

4.2 What is the Equilibrium Income Distribution?

We follow up on this intriguing connection first, before answering the question regarding the equilibrium income distribution. We therefore first answer the question "what is the equilibrium energy distribution?" for the thermodynamic game, namely, the dynamical evolution of molecular microstates and the resultant thermodynamic macrostate. Approaching the thermodynamic game from the potential game perspective, we have the following "utility" for molecules in state i:

hi(Ei, Ni) = −βEi − ln Ni    (13)

where Ei is the energy of a molecule in state i, β = 1/(kB T), kB = 1.38 × 10⁻²³ J K⁻¹ is the Boltzmann constant, and T is temperature (note: Ei is not to be confused with the effort in equation (5)). By integrating the utility, we can obtain the potential of the thermodynamic game:

φ(x) = −(β/N) E + (1/N) ln [N! / ∏_{i=1}^{n} (N xi)!]    (14)

where E = N ∑_{i=1}^{n} xi Ei is the total energy, which is conserved.

We use the method of Lagrange multipliers, with L as the Lagrangian and λ as the Lagrange multiplier for the constraint ∑_{i=1}^{n} xi = 1:

L = φ + λ (1 − ∑_{i=1}^{n} xi)    (15)

Solving ∂L/∂xi = 0 and substituting the results back into ∑_{i=1}^{n} xi = 1, we obtain the well-known Boltzmann exponential distribution at equilibrium:

xi = exp(−βEi) / ∑_{j=1}^{n} exp(−βEj)    (16)
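The same result can be checked numerically. The sketch below (with assumed energy levels Ei and an assumed β) maximizes the per-molecule form of the potential in equation (14) under the normalization constraint and recovers equation (16).

    # Numerical check (illustrative): maximizing the potential of eq. (14)
    # under sum(x) = 1 reproduces the Boltzmann distribution of eq. (16).
    import numpy as np
    from scipy.optimize import minimize

    beta = 0.7                                # assumed 1/(kB*T)
    E = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # assumed energies E_i

    def neg_phi(x):
        # per-molecule potential: -beta*sum(x_i E_i) - sum(x_i ln x_i),
        # i.e. equation (14) with Stirling applied to the factorial term
        return -(-beta * x @ E - x @ np.log(x))

    x0 = np.full(len(E), 1.0 / len(E))
    res = minimize(neg_phi, x0, bounds=[(1e-12, 1)] * len(E),
                   constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1}])
    boltz = np.exp(-beta * E) / np.exp(-beta * E).sum()
    print(np.allclose(res.x, boltz, atol=1e-3))  # True, matching eq. (16)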

What we just did is the standard procedure followed in maximum entropy methods in statistical thermodynamics and information theory to identify the distribution that maximizes entropy under the given constraints. 12–14 Once again, readers familiar with statistical thermodynamics will recognize that, from equation (14), we have:

φ = −(β/N)(E − TS) = −A/(N kB T)    (17)

where A = E − TS is the Helmholtz free energy, S is entropy, and T is temperature (note: S is not to be confused with the salary in equation (5)). Indeed, in statistical thermodynamics, A is called a thermodynamic potential. In this regard, we can also see the correspondence


between the utility hi and the chemical potential, the partial molar free energy. These valuable results, reproducing well-known equations of statistical thermodynamics, validate the soundness of our game theoretic framework and its direct correspondence with statistical thermodynamics. Crucially, maximizing the game theoretic potential is the same as maximizing entropy: the two are one and the same, except for the Boltzmann constant multiplier. Following this logic, for the teleodynamic game (i.e., the income distribution game), we carry out the same procedure to maximize φ(x) in equations (8)-(11), obtaining the following lognormal distribution at equilibrium:

xi = (1/(D Si)) exp{−[ln Si − (α + γ)/(2β)]² / (γ/β)}    (18)

where D = N exp[λ/γ − (α + γ)²/(4βγ)] and λ is the Lagrange multiplier. It is important to note that even though our analysis is analogous to the one in statistical thermodynamics, the resulting distribution is different from that of the ideal gas. For ideal gas molecules, the corresponding energy distribution is exponential, whereas the income distribution is lognormal. The difference arises from the differences in the objective functions and constraints of the two systems.
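As a consistency check, one can compare equation (18) against a direct numerical maximization of φ(x). The sketch below does so with the same illustrative parameter values (α = 5, β = 2, γ = 1, assumed Si) used in the earlier snippet; the normalization of x plays the role of the 1/D prefactor.

    # Sketch: the closed form of eq. (18) should agree with a numerical
    # maximization of phi(x) from eqs. (8)-(11). All parameters assumed.
    import numpy as np
    from scipy.optimize import minimize

    alpha, beta, gamma = 5.0, 2.0, 1.0
    S = np.exp(np.linspace(0.0, 3.0, 50))
    lnS = np.log(S)

    x_cf = (1.0 / S) * np.exp(-(lnS - (alpha + gamma) / (2 * beta))**2
                              / (gamma / beta))
    x_cf /= x_cf.sum()                        # normalization replaces 1/D

    neg_phi = lambda x: -(alpha * x @ lnS - beta * x @ lnS**2
                          - gamma * x @ np.log(x))
    res = minimize(neg_phi, np.full(len(S), 1.0 / len(S)),
                   bounds=[(1e-12, 1)] * len(S),
                   constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1}])
    print(np.abs(res.x - x_cf).max())         # small: the two solutions agree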

4.3 Econophysics Models: Conceptual Difficulties

It is important to recognize that while there is a close relationship between statistical thermodynamics and statistical teleodynamics, it is not appropriate to look for one-to-one correspondences for all kinds of properties and relations. For example, it is meaningless to look for exact equivalents of pressure, volume, and temperature, or of the Maxwell relations, in statistical teleodynamics. One might use certain terms loosely, such as competitive "pressure" or a "hot" market, but these are not to be taken literally. The connection between the two disciplines occurs at a deep, fundamental level, and not through such superficial


correspondences. This is a common drawback of the typical econophysics approaches to the income distribution problem. 15–35 While these econophysics models are quite interesting and instructive, and they represent important progress in the field, they have not bridged the rather wide conceptual gulf that exists between economics and econophysics, 36,37 particularly in two crucial areas. First, the typical particle model of agent behavior in econophysics assumes agents with nearly "zero intelligence," acting at random, with no intent or purpose. This does not sit well with an extensive body of economic literature, spanning several decades, in which one models, in the ideal case, a perfectly rational agent whose goal is to maximize its utility or profit by acting strategically, not randomly. So, from the perspective of an economist, it is quite reasonable to ask, "How can theories and models based on the collective behavior of purpose-free, random molecules explain the collective behavior of goal-driven, optimizing, strategizing men and women?" The other major conceptual stumbling block is the role of entropy in economics. As Mirowski 38 observes: "The real impediments to the neoclassical embrace of thermodynamics are the concept of entropy and the implications of the second law." In statistical thermodynamics, equilibrium is reached when entropy, a measure of randomness or uncertainty, is maximized. So an economist might wonder: why would maximizing randomness or uncertainty be helpful in economic systems? We all know that markets are stable, and function well, when things are orderly, with less uncertainty, not more. This has led to an uneasy relationship with entropy in economics, typically ranging from grudging tolerance to outright rejection, as seen in the remarks of two Nobel Laureates in Economics, Amartya Sen and Paul Samuelson. Sen observed, 39 while commenting on the Theil Index, an entropy-based measure of inequality: "given the association of doom with entropy in the context of thermodynamics, it may take a little time to get used to entropy as a good thing ('How grand, entropy is on the increase!'), but it is clear that Theil's ingenious measure has much to be commended. . . . But the fact remains that it is an arbitrary formula,


and the average of the logarithms of the reciprocals of income shares weighted by income shares is not a measure that is exactly overflowing with intuitive sense. It is, however, interesting that the concept of entropy used in the natural sciences can provide a measure of inequality that is not immediately dismissible, however arbitrary it may be." Paul Samuelson was caustic and outright dismissive: 40 "As will become apparent, I have limited tolerance for the perpetual attempts to fabricate for economics concepts of 'entropy' imported from the physical sciences or constructed by analogy to Clausius-Boltzmann magnitudes." In his Nobel Lecture, he said: 41 "There is really nothing more pathetic than to have an economist or a retired engineer try to force analogies between the concepts of physics and the concepts of economics. How many dreary papers have I had to referee in which the author is looking for something that corresponds to entropy or to one or another form of energy?" On yet another occasion, he issued this warning: 42 "And I may add that the sign of a crank or half-baked speculator in the social sciences is his search for something in the social system that corresponds to the physicist's notion of 'entropy'." These major conceptual hurdles have stranded neoclassical economics in the metaphors of classical mechanics for over a century, rather than letting it benefit from the concepts and techniques of statistical thermodynamics, which is the more appropriate analogue. The typical econophysics papers that use statistical mechanics-based approaches to economics have not addressed these central conceptual challenges. Besides these conceptual challenges, there is also a technical one, arising from the nature of the datasets in economics. As Ormerod 37 and Perline 43 discuss, one can easily misinterpret data from lognormal distributions, particularly truncated datasets, as following inverse power laws or other distributions. Therefore, empirical verification of econophysics models is still in its early stages. Most critically, econophysics models have not addressed the issues of fairness and morality.


5. Is the Equilibrium Distribution Fair?

We recall from our gedankenexperiment that the spreading of the income distribution is driven by notions of fairness (or unfairness). But from the statistical thermodynamic, or statistical teleodynamic, arguments we see that the spreading is driven by increasing entropy. How do we reconcile these two interpretations? What is the connection between entropy and fairness? The fairness question is a challenging one, as the term fairness is used quite broadly and can mean different things in different contexts, as the extensive literature on this subject shows. 7 This is one of the reasons the term is often placed within quotes, as in "fair." We now show briefly that there is a deep and direct connection between entropy and fairness (for a more detailed discussion, see 7). As we know, entropy is maximized at statistical equilibrium. We also know that the equivalent defining criterion for statistical equilibrium is the equality of all accessible phase space cells, which is the fundamental postulate of statistical mechanics. In other words, a given molecule is equally likely to be found in any one of the accessible phase space cells at any given time. For example, consider a gas enclosed in a chamber. To say that a gas molecule, in the long run, is more likely to be found in the left half of the chamber than the right, and to assign it a higher probability (say, p(left) = 0.6 and p(right) = 0.4), would be a biased and unfair assignment of probabilities. This assignment presumes information that has not been given. What does one know about the molecule, or its surrounding space, that would make one prefer the left chamber over the right? There is no basis for this preference. The unequal assignment of probabilities is thus arbitrary and unwarranted. The fairest assignment of probabilities is therefore one of equality of outcomes, i.e., p(left) = 0.5 and p(right) = 0.5. Thus, by maximizing entropy one is, in fact, mathematically enforcing and guaranteeing the requirement that all accessible phase space cells are treated equally. In other words, this is the fairest treatment of all the


accessible phase space cells. Thus, we see that entropy really is a measure of fairness in the assignment of probabilities. Maximizing entropy is the same as maximizing fairness in a distribution, i.e., the fairest treatment of all possible outcomes. Thus, the true essence of entropy is fairness, not randomness, disorder, or uncertainty. This critical insight about entropy as fairness has not been explicitly recognized and emphasized in prior work in statistical mechanics, information theory, or economics. 5 Despite several attempts in the past, 44–46 entropy has played, by and large, only a marginal role in economics, and even that role has drawn strong objections from leading practitioners. Its pivotal role in economics and in free market dynamics has never been recognized in all these years, mainly because entropy's essence as fairness is masked under different facets in different contexts. 5 Entropy is a tricky concept that has often been misunderstood, even maligned, in its 150-year existence. It is a historical accident that the concept of entropy was first discovered in the context of thermodynamics, and it has therefore generally been identified as a measure of randomness or disorder. Boltzmann seems to have started this when, to describe the higher entropy of a gas at high temperature intuitively, he introduced the notion that its distribution of velocities is more "disordered" than at a lower temperature. 47 Others followed suit: Helmholtz, in 1883, called entropy "Unordnung" (disorder), 48 and Gibbs described entropy, somewhat informally, as "mixed-up-ness." 49,50 All this, unfortunately, has resulted in entropy being tainted with the negative notions of doom and gloom. Even its subsequent "rediscovery" by Claude Shannon 51 in the context of information theory did not help much, for entropy now became associated with uncertainty or ignorance, again not a good thing. For instance, the Oxford English Dictionary (OED) defines entropy as "lack of order or predictability; gradual decline into disorder: e.g., 'a marketplace where entropy reigns supreme'." Such definitions only reaffirm the common misinterpretation of entropy as disorder and uncertainty. This misunderstanding of entropy has been a major conceptual stumbling block


in economics. The notion of entropy as disorder, and, as Amartya Sen objected, 39 its association with doom and gloom, is problematic for economics. However, as our analysis reveals, the true essence of entropy is fairness, which appears under different masks in different contexts. In thermodynamics, as we just saw, being fair to all accessible phase space cells at equilibrium under the given constraints – i.e., assigning equal probabilities to all the allowed microstates – projects entropy as a measure of randomness or disorder. 52 This is a reasonable interpretation in that particular context, but it obscures the essential meaning of entropy as a measure of fairness. In information theory, being fair to all messages that could potentially be transmitted in a communication channel – i.e., assigning equal probabilities to all the messages – shows entropy as a measure of uncertainty. 12,13 Again, while this is an appropriate interpretation for that application, it, too, conceals the real nature of entropy. In the design of teleological systems, being fair to all potential operating environments makes entropy emerge as a measure of robustness, i.e., of maximizing system safety or minimizing risk. 3 Once again, this is the right interpretation for that domain, but it also hides entropy's true meaning. Thus, when we enquire deeply into the nature of entropy, it reveals its true self, lifting the veil of randomness and uncertainty, as a measure of fairness – which is a good thing, indeed a beautiful thing. It is not at all about the doom and gloom that Sen objected to. The common theme across all these contexts is the essence of entropy as a measure of fairness. This fairness property of entropy was hiding in plain sight; in a sense, people were using it without explicitly recognizing or acknowledging it. It is also important not to mistake entropy for a physics concept merely because it was first discovered there. It is not like energy or momentum, which are physics-based concepts. Entropy really is a concept in probability and statistics, an important property of distributions, whose application has proved useful in physics and information theory. In this regard, it is more like variance, which is a property of distributions, a


statistical property, with applications in a wide variety of domains. However, as a result of this profound, but understandable, confusion about entropy as a physical principle, one gets trapped in the popular notions of entropy as randomness, disorder, doom, or uncertainty, which have prevented people from seeing the deep and intimate connection between statistical theories of inanimate systems composed of non-rational entities (e.g., gas molecules in thermodynamics) and those of animate, teleological systems of rational agents seen in biology, economics, and sociology. This confusion has in the past presented an insurmountable obstacle for neoclassical economics, as Mirowski 38 observes: "As Bausor (1987, p. 9) put it, whereas classical thermodynamics 'approaches entropic disintegration, [neoclassical theory] assembles an efficient socially coherent organization. Whereas one articulates decay the other finds constructive advance.'" This obstacle goes away once we realize that entropy is a measure of fairness, not randomness, and that maximizing entropy promotes fairness in the economic system, thereby producing a self-organized, coherent, and harmonious organization of employees (which results in the lognormal income distribution), not "decay" or "entropic disintegration." In addition, and most crucially for economics, entropy's natural connection with, and central role in, the supply-and-demand-driven, self-organizing dynamics of a free market comprising rational, utility- and profit-maximizing, animate agents has not been recognized and emphasized before. As our theory demonstrates, the ideal free market for labor promotes fairness as an emergent self-organized property. We believe that by properly recognizing entropy as a measure of fairness, and by demonstrating how it is naturally and intimately connected to the dynamics of the free market, our theory makes a significant conceptual advance in revealing the deep and direct connection between game theory, statistical thermodynamics, information theory, and economics. This revelation of entropy's true meaning also sheds new light on a decades-old fundamental question in economics, as Samuelson 41 posed in his Nobel Lecture: "What it is that Adam Smith's 'invisible hand' is supposed to be maximizing," or as Monderer and Shapley 53


stated regarding the potential function P∗ in game theory: "This raises the natural question about the economic content (or interpretation) of P∗: What do the firms try to jointly maximize? We do not have an answer to this question." Our theory suggests that what all the agents in a free market environment are jointly maximizing, i.e., what the "invisible hand" is maximizing, is fairness. Maximizing entropy, or the game theoretic potential, is the same as maximizing fairness in economic systems, i.e., being fair to everyone under the given constraints. In other words, economic equilibrium is reached when every agent feels she or he has been fairly compensated for her or his efforts. As we all know, fairness is a fundamental economic principle that lies at the foundation of the free market system. It is so vital to the proper functioning of markets, and to a democratic society, that we have regulations and watchdog agencies that break up and punish unfair practices such as monopolies, collusion, price gouging, and insider trading. Thus, it is eminently reasonable, indeed particularly reassuring, to find that maximizing fairness, i.e., maximizing entropy, is the condition for achieving economic equilibrium. We call this result the fair market hypothesis.

5.1 Equality of Utility at Equilibrium

We have proved 7 that the average utility, h∗, at equilibrium is given by:

h∗ = γ ln Z − γ ln N    (19)

where Z = ∑_{j=1}^{n} exp{[α ln Sj − β(ln Sj)²]/γ} resembles the partition function seen in statistical mechanics. At equilibrium, all agents enjoy the same utility or welfare, h∗. Not everyone makes the same salary, of course, but they all enjoy the same utility. This is an important result, for it proves that all agents are treated equally with respect to the economic outcome, utility. This again confirms that this is the fairest outcome possible. It provides the moral justification, mathematically, for an ideal free market, because no one is


left behind. Not only does everyone benefit, but everyone benefits to the same extent. In addition, we have proved elsewhere 7 that this is also the socially optimal outcome, which further justifies the moral foundations of an ideal free market economy.
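This claim is easy to verify numerically. In the sketch below (again with assumed α, β, γ, Si, and N), every category's utility at the lognormal equilibrium of equation (18) equals the h∗ of equation (19) to machine precision.

    # Sketch: at the lognormal equilibrium of eq. (18), every agent's
    # utility equals h* = gamma*ln(Z) - gamma*ln(N) of eq. (19).
    import numpy as np

    alpha, beta, gamma, N = 5.0, 2.0, 1.0, 10_000
    S = np.exp(np.linspace(0.0, 3.0, 50))     # assumed salary levels
    lnS = np.log(S)

    x = (1.0 / S) * np.exp(-(lnS - (alpha + gamma) / (2 * beta))**2
                           / (gamma / beta))
    x /= x.sum()                              # eq. (18), normalized

    Z = np.exp((alpha * lnS - beta * lnS**2) / gamma).sum()
    h_star = gamma * np.log(Z) - gamma * np.log(N)

    # utility of an agent in category i, with N_i = N*x_i employees
    h = alpha * lnS - beta * lnS**2 - gamma * np.log(N * x)
    print(np.allclose(h, h_star))             # True: equal utility for all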

6. Comparing Theory vs Reality

So, how close are real-world free market societies to the ideal one? That is, how close are real-world income distributions to the ideal lognormal one? To answer this, we compare the prediction made by our theory with pre-tax income data, as reported by Piketty and his co-workers, for a dozen different countries. We propose a new measure of income inequality (discussed below), ψ, which quantifies how fair a given country is in sharing the total income pie among its citizens. ψ reveals which countries are closer to ideality and which are not. Since ψ identifies the target (i.e., the fairest inequality) quantitatively, we could use it to rationally formulate and fine-tune tax and transfer policies that result in a more equitable society. While our theory is developed for modeling pay distributions in large corporations, its predictions may be compared with country-wide pre-tax income (excluding capital gains) data, as most of it is an aggregate of the pay of individuals. In essence, we are approximating an entire country as a large corporation functioning in a free market environment. We compare our model's predictions for the shares of total income of three segments of the population (namely, the bottom 90%, the top 10-1%, and the top 1%) with those observed in different countries, as reported by Piketty and his colleagues in their World Top Incomes (WTI) Database. 54 There are different statistical procedures and measures one can use to test the closeness of any two distributions, but we focus on the one that matters most – the amount of money in the income pie shared by the different classes. We compare the income shares of the top 1%, top 10-1%, and bottom 90% of the population in both cases, ideal vs real world. This is,


after all, what the income inequality debate is all about. We, of course, fully expect real-life free market societies to deviate from ideality, but we are interested in understanding how big the deviations are and why. While the income distributions in the different countries would all be lognormal if they were functioning ideally, they may have different means and variances, depending on the minimum, average, and maximum annual incomes in the different countries, which determine the µ and σ of the corresponding lognormal distributions. As an example, we show these parameters in Figure 2 for the different countries. The minimum, average, and maximum income data are obtained from the WTI Database. For the maximum income, we chose to use the threshold income at 99.9%. At this cutoff, the area under the lognormal curve would correspond to 6σ. Strictly speaking, 6σ covers 99.73% of the population in general, but in our case it covers 99.865%, as there is no one below the minimum income threshold. So this is a very good approximation. Thus, we estimate σ by using the approximation

6σ ≈ ln(Maximum Income) − ln(Minimum Income)    (20)

µ = ln(Average Income) − σ²/2    (21)

Once µ and σ are known, we can uniquely determine the corresponding lognormal distribution and compute the income shares of the bottom 90%, top 10-1%, and top 1%. Since µ and σ typically vary from year to year (because of changes in the minimum, average, and/or maximum income), the ideal distribution of income to the bottom 90%, top 10-1%, and top 1%, as predicted by the model, also shifts from year to year, though not by much. As an example, these values are displayed in Figure 2 for the twelve countries (which are commonly used examples 55), for the years shown.
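For concreteness, the sketch below reproduces this procedure for the Norway 2011 row of Figure 2, using equations (20) and (21) together with the standard Lorenz curve of a lognormal distribution, L(p) = Φ(Φ⁻¹(p) − σ); the computed shares match the ideal values in the table.

    # Sketch of the share computation: sigma and mu from eqs. (20)-(21),
    # income shares from the lognormal Lorenz curve. Norway 2011 inputs
    # are taken from Figure 2.
    import numpy as np
    from scipy.stats import norm

    minimum, average, maximum = 139_152, 319_391, 3_874_714  # NOK

    sigma = (np.log(maximum) - np.log(minimum)) / 6          # eq. (20)
    mu = np.log(average) - sigma**2 / 2                      # eq. (21)

    L = lambda p: norm.cdf(norm.ppf(p) - sigma)  # share of bottom fraction p
    bottom90 = L(0.90)
    top10to1 = L(0.99) - L(0.90)
    top1 = 1 - L(0.99)
    print(f"mu={mu:.2f} sigma={sigma:.2f}")              # ~12.52, ~0.55
    print(f"{bottom90:.1%} {top10to1:.1%} {top1:.1%}")   # ~76.6% 19.5% 3.8%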


Country | Year | Currency | Minimum | Average | Maximum | µ | σ | Gini | Ideal: B90 / T10-1 / T1 | Data: B90 / T10-1 / T1 | ψ: B90 / T10-1 / T1
Norway | 2011 | NOK | 139,152 | 319,391 | 3,874,714 | 12.52 | 0.55 | 0.30 | 76.6% / 19.5% / 3.8% | 71.7% / 20.5% / 7.8% | −6.5% / 5.1% / 104.2%
Sweden | 2012 | SEK | 159,630 | 249,760 | 2,580,353 | 12.32 | 0.46 | 0.26 | 79.3% / 17.5% / 3.1% | 72.1% / 20.8% / 7.1% | −9.1% / 18.4% / 128.1%
Denmark | 2008 | DKK | 146,919 | 202,502 | 1,747,643 | 12.13 | 0.41 | 0.23 | 80.8% / 16.5% / 2.8% | 73.8% / 20.1% / 6.1% | −8.6% / 22.2% / 117.4%
Switzerland | 2010 | CHF | 37,266 | 67,056 | 1,047,750 | 10.96 | 0.56 | 0.31 | 76.6% / 19.6% / 3.8% | 66.5% / 22.9% / 10.6% | −13.2% / 16.7% / 177.3%
Netherlands | 1999 | NLG | 28,142 | 63,557 | 416,931 | 10.96 | 0.45 | 0.25 | 79.7% / 17.2% / 3.0% | 71.9% / 22.7% / 5.4% | −9.8% / 31.7% / 77.8%
Australia | 2010 | AUD | 29,716 | 49,304 | 688,700 | 10.67 | 0.52 | 0.29 | 77.6% / 18.9% / 3.6% | 69.0% / 21.8% / 9.2% | −11.0% / 15.7% / 156.6%
France | 2006 | EUR | 15,885 | 26,872 | 366,769 | 10.06 | 0.52 | 0.29 | 77.6% / 18.8% / 3.6% | 67.2% / 23.9% / 8.9% | −13.4% / 26.7% / 150.5%
Germany | 2008 | EUR | 17,927 | 29,826 | 630,374 | 10.13 | 0.59 | 0.33 | 75.4% / 20.4% / 4.2% | 60.5% / 25.6% / 13.9% | −19.8% / 25.6% / 234.3%
Japan | 2010 | K JPY | 1,236 | 2,255 | 32,608 | 7.57 | 0.55 | 0.30 | 76.9% / 19.3% / 3.7% | 59.5% / 31.0% / 9.5% | −22.6% / 60.3% / 153.8%
Canada | 2000 | CAD | 11,036 | 24,859 | 528,156 | 9.91 | 0.64 | 0.35 | 73.8% / 21.6% / 4.6% | 58.9% / 28.0% / 13.2% | −20.2% / 29.7% / 184.3%
UK | 2011 | GBP | 11,324 | 19,217 | 365,130 | 9.70 | 0.58 | 0.32 | 75.9% / 20.1% / 4.0% | 60.9% / 26.2% / 12.9% | −19.8% / 30.5% / 221.0%
US | 2013 | USD | 15,131 | 52,619 | 1,424,300 | 10.58 | 0.76 | 0.41 | 70.0% / 24.2% / 5.8% | 53.0% / 29.5% / 17.5% | −24.3% / 21.9% / 200.7%

Figure 2: Model predictions and sample empirical data for different countries (adapted from Venkatasubramanian et al. 6). B90 = bottom 90%, T10-1 = top 10-1%, T1 = top 1% of the population.

6.1 A New Measure of Fairness in Income Inequality – ψ

Now, we know from empirical data that the free market economies of the Scandinavian countries are generally fairer in their economic treatment of all their citizens, not just the wealthy ones. We also know that the US does not do as well. We further know that other Western European countries, such as France, Germany, and Switzerland, are somewhere in between. Can our theory predict these outcomes? That is, just by knowing the minimum, average, and maximum incomes in a free market society, can the model identify how fair that society is? To test the model along these lines, we define a new index of inequality that uses the ideal lognormal distribution (the one appropriate for the country under consideration) as the reference. This new measure, which we call the Nonideal Inequality Coefficient, ψ, is defined as:

ψ = (Actual share / Ideal share) − 100%    (22)

ψ measures the level of non-ideal inequality in the system. When ψ is zero, the system has the ideal level of inequality, the fairest income inequality. When ψ is small, the level of inequality is almost ideally fair; when it is large, the inequality is more extreme and unfair. We computed the predicted income shares for the different countries for the period from ∼1920 to ∼2012, depending on the availability of data in the WTI Database. We then computed ψ for the three segments (bottom 90%, top 10-1%, top 1%) for these countries, annually, for the corresponding time periods (sample values are shown in Figure 2). These are plotted in Figure 3. If a country were functioning as an ideal free market system, as defined by our theory, the corresponding ψ for the three segments would all be exactly zero. This is the reference, shown as the 0% line (black dashed line) in the plots. As one can see, the theory's predictions are in general agreement with what is known about these countries regarding their inequalities. The twelve countries are shown, roughly, in order of generally increasing inequality according to our model. Our objective here is not to rank them in a strict order but to show how the different countries have deviated from ideality.
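A minimal sketch of the ψ computation of equation (22), applied to the rounded Norway 2011 shares from Figure 2 (small differences from the tabulated ψ values come from the rounding of the inputs):

    # psi = (actual share / ideal share) - 100%, per equation (22)
    ideal = {"bottom90": 0.766, "top10to1": 0.195, "top1": 0.038}
    actual = {"bottom90": 0.717, "top10to1": 0.205, "top1": 0.078}

    psi = {k: (actual[k] / ideal[k] - 1.0) * 100 for k in ideal}
    print({k: round(v, 1) for k, v in psi.items()})
    # {'bottom90': -6.4, 'top10to1': 5.1, 'top1': 105.3} -- cf. Figure 2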


Figure 3: Global income inequality trends over the years: ψ, ideal vs reality. Blue line: bottom 90%; green line: top 10-1%; red line: top 1%; black dashed line: 0% ideal reference. (Adapted from Venkatasubramanian et al. 6)

6.2 Scandinavia: Almost Ideal

Let us examine what these charts tell us. As we expected, none of the countries is ideal, but there are some pleasant surprises. Consider Norway as an example. Its bottom 90% and top 10-1% income shares are remarkably close to the ideal values over the last ∼20 years. Its bottom 90% ψ has steadily improved, from a low of about −20% in 1929 to about −2% in 1993, and has been ∼5-10% below the ideal value over the last ∼20 years. Similarly, its top 10-1% ψ has come down from a high of ∼40% in 1968 to ∼5% in 2011. In fact, during


∼1991-2011, it hugged the ideal line quite closely, sometimes a little above and sometimes a little below, typically within a narrow ±6% band. As for the top 1%, ψ steadily improved toward the ideal value, from ∼160% in ∼1929 to ∼52% in ∼1990. After spiking up in 2005, it came down to ∼104% in 2011. But, clearly, the top 1%'s share is much more nonideal than that of the bottom 99%. We find similar close-to-ideality trends in Sweden, Denmark, and Switzerland for the bottom 90% and top 10-1%. In these countries, typically, the bottom 90% ψ is within ∼10% of the ideal value, and the top 10-1% ψ is within ∼15-25%. All these countries, which practice free market economies, are generally known to be fairer in their economic treatment of all their citizens. But how fair are they? We could not answer this question before, as there was no reference to compare against, but now we can, as our theory provides such a benchmark: the ideal lognormal distribution. We find it remarkable that the income sharing these countries have accomplished for their bottom 90% and top 10-1% is so close to the ideal distributions. This is a surprising result, because we did not expect any real-world economic system to come this close to ideality, given the simplifying assumptions and approximations in our model. For an overwhelming majority of the population (∼99%), these countries have achieved a near-ideal degree of fairness, presumably through an enlightened combination of individual, corporate, and societal values and macroeconomic policies, all executed through the free market mechanism. Clearly, this did not happen quickly, as the trends show, but it is encouraging to find that, through mistakes made and lessons learnt, societies can evolve and adapt to "discover" a near-ideal distribution, given a chance through the political process. What is even more remarkable is that these free market economies did not know, a priori, what the ideal, theoretically fairest, distribution was, and yet they seem to have "discovered" and maintained a near-ideal outcome empirically on their own. While these agreements with the aggregate model predictions are encouraging, more thorough


studies are needed, using detailed distribution data, to validate these initial impressions and to understand the comparisons more rigorously. From the charts, it appears that the Netherlands, Australia, and France are broadly in the same general class of higher inequality compared with the first group. The next group is Germany, Japan, and Canada; finally, the UK and US are about the same. It is curious, though, that Japan shows a much higher share for the top 10-1% than even the US or UK. It will be interesting to understand why and how this happens in Japan. Again, our objective here is not to rank these countries in any strict order, but to show how the different countries deviate from ideality.

7. Statistical Teleodynamics: Macro and Micro Views

We now provide a formal statement of our theory from two related, but different, perspectives: the Macro and Micro views. We first present the Macro view, which focuses on the design specifications of the overall system (i.e., a free-market society), its goal(s), and the associated constraints imposed by the environment. This is the information theoretic perspective on the system. A more extensive discussion of statistical teleodynamics and its relation to different design aspects of complex teleological systems, such as network topology, can be found elsewhere. 3 The Macro view is expressed as four postulates. These correspond to similar postulates in thermodynamics and, in fact, the former set reduces to the latter for the case of purpose-free entities such as molecules, as we discuss below. Building on our earlier work, 3,5 we list the fundamental postulates of statistical teleodynamics:

1. Macro Survival Postulate: A teleological system is designed, or evolved, to meet its survival objective(s) Ξ(ξ1, ξ2, ξ3, . . . ) in its operating environment, where Ξ is the overall fitness for survival, which is a function of a number of survival-related parameters ξ1, ξ2, ξ3, etc.

2. Performance Target Postulate: The operating environment imposes both qualitative


and quantitative performance constraint(s) Θ(θ1, θ2, θ3, . . . ) that the teleological system must satisfy for its survival. In order to survive, the teleological system must execute its various functions at some targeted levels of performance or fitness (Θ), which are the design constraint(s) (e.g., ⟨Ξ⟩ = Θ) demanded by the environment. For instance, if a corporation delivers an ROI of only 5% in a market environment where the average is 15%, it won't survive. It will likely be acquired, or taken over and stripped of its assets and sold, and so on. In this example, Ξ is ROI and Θ = 15%. Thus, the environment determines what matters (e.g., ROI) and how much (e.g., 15%) for a system's survival. That is, the environment imposes both qualitative (i.e., what matters) and quantitative (i.e., how much) constraints. These might differ for different market segments. In essence, this postulate states that the environment demands Pareto optimality from the system for its survival, emphasizing the need for efficient performance.

3. Optimally Robust Design Postulate: A teleological system is designed, or evolved, to be optimally robust in order to meet its survival objective(s) in all potential operating environments. Since the nature of the future operating environments in which a teleological system must survive is typically unknown, often unknowable, and hence uncertain, the system must be designed (or evolved) in such a way that it will be able to execute all its functions, achieve its performance targets, and meet its survival objective(s) in the widest set of possible operating environments. In other words, a company, for instance, should be able to weather a wide variety of market conditions, survive, and grow. That is, the company should be stable and functional even under disruptions. Similarly, one would prefer to live in a society that has demonstrated its ability to survive and thrive, over a long time, despite all kinds


of economic, social, and political turmoil. Such a society is optimally robust, or optimally stable. This postulate ensures that the system is not fragile while meeting the demands of postulates 1 and 2. The first two postulates do not by themselves ensure survival under a wide variety of conditions; one might consider them minimum requirements, necessary conditions imposed on the system. But in order to survive over a long time, under a variety of threats, the system needs to satisfy the third postulate as well.

4. Maximum Entropy Design Postulate: The overall design that maximizes entropy, subject to the design criteria in postulate 2, is the optimally robust design of the teleological system, where the optimality is with respect to all potential operating environments. The maximum entropy design is optimal because it accommodates maximum uncertainty about the future operating environments. Under entropy maximization, the system's performance is optimized to accommodate a wide variety of future operating conditions whose nature is unknown, unknowable, and hence uncertain. Hence, one designs by allowing for maximum uncertainty, which is what maximum entropy does. This is what we mean by an optimally robust design.

For the engineering context, we have utilized the interpretation of entropy as a measure of uncertainty in analyzing the stability of the system. When the system is a corporation or a free market society, with utility-maximizing, fairness-seeking agents, its stability is determined by the fairness in the system, as we saw in our gedankenexperiment. In fact, the lognormal distribution can be proven to be asymptotically stable by using the Lyapunov method. For a more complete discussion, the reader is referred to my book. 7

In our earlier papers, 2,3 we have discussed the theory's predictions of vertex degree distributions for three classes of complex teleological networks commonly seen in a wide variety of domains, such as engineering, biology, sociology, and economics. For networks that


have nodes with, on average, a small number of connections (i.e., low vertex degree), the theory predicts an exponential distribution of vertex degree. For networks whose nodes, on average, have a large number of connections (i.e., high vertex degree), it predicts a power law distribution. Finally, for a third class of networks, whose nodes have an extremely large number of connections, the theory predicts the Poisson distribution. All of these are confirmed by empirical observations and simulations. These three classes can also appear as three regimes in a given network as it moves from a sparsely connected state to reasonably redundant to extremely redundant in the number of links. The existence of these three regimes as a function of redundancy has been reported in simulations by Guimera et al. 56 and Venkatasubramanian et al., 1 but without a coherent theoretical explanation for their occurrence. Because most real-world networks are designed to be reasonably redundant, to make them more efficient and robust, they typically fall in the middle regime; hence the apparently ubiquitous occurrence of power laws and their seeming universality. Thus, the theory correctly predicts the emergence of the exponential, scale-free, and Poisson distributions seen in many practical applications. In addition, because the theory formulates the design purpose explicitly and precisely, it is able to address one of the most important questions about the organization of complex networks, namely, how local properties of a network, such as the vertex degree, are determined by, and conversely determine, global or system-level properties, such as the overall survival or performance criteria. Thus, the theory offers crucial insights about why these distributions emerge and how they are related to the survival objective(s) of the overall system, thereby presenting a deeper, more fundamental understanding of the topology of complex networks. For non-teleological, purpose-free entities, such as those seen in thermodynamics (e.g., gas molecules in a pressure vessel), the first postulate is not relevant, as the entities have no survival objective. Similarly, the third postulate is not applicable. The second postulate, regarding the constraints imposed by the environment, simply reduces to the


conservation of energy requirement, which, of course, is the first law of thermodynamics. The fourth postulate also simplifies to the conventional maximum entropy requirement, which is the second law of thermodynamics. Thus, we recover the conventional thermodynamic laws as a limiting case of statistical teleodynamics for purpose-free thermodynamic systems. Again, this internal consistency is reassuring.

7.1 Statistical Teleodynamics: Micro View

As opposed to the Macro view, which is the design perspective, the Micro view emphasizes the operations perspective of the system, focusing on the individual agents, their goals, interactions, and dynamical evolution. This is the game theoretic perspective we discussed earlier.

The two fundamental postulates of the Micro view are:

1. Micro Survival Postulate: The goal of an individual agent is to survive and grow in a competitive environment, populated by other such agents, by continually taking actions to maximize its utility.

2. Equilibrium Postulate: There are a number of ways of stating this postulate; we list two versions that are appropriate for the problem at hand:

A teleodynamic system of many competing rational agents reaches equilibrium when fairness in utility distribution is maximized.

Another way to state this postulate is:

A teleodynamic system of many competing rational agents reaches equilibrium when every agent enjoys the same level of utility.


The first postulate is simply a statement of the well-known Homo economicus model commonly used in economic theories. It states what motivates the motion of agents in the phase space: the rational agents switch from one job to another driven by their desire to maximize utility, i.e., they exhibit teleodynamic (goal-driven) behavior. The second postulate states when the movements come to a "stop," i.e., a dynamic statistical equilibrium. As we discussed, at equilibrium the potential φ, i.e., fairness, is maximized. Thus, this postulate is the equivalent of the fundamental postulate of statistical mechanics, the equal a priori probability postulate. In statistical mechanics, we do not need an equivalent of the first postulate, because molecules are assumed to be constantly in motion, driven by thermal agitation. Thus, both the Macro and Micro views of statistical teleodynamics describe the same phenomenon of statistical equilibrium from two different, but related, perspectives, arriving at the same conclusions, both mathematically and intuitively. As we have noted several times, the connection between the two views occurs through the principle of maximizing fairness, modeled by the game theoretic potential and entropy.

8. Summary

As science continues to face the challenges of the 21st century, the most successful research program of the 20th century, reductionism, seems unable to address many of them. This is because these challenges inherently require the opposite approach, the constructionist or emergentist paradigm. We need a quantitative, analytical theory of emergence that can make testable predictions to address these challenges. Toward that end, I have proposed a new theory, a parts-to-whole, systems-oriented theory called statistical teleodynamics, which, I hope, is a useful step in the long journey ahead. Statistical teleodynamics is a mathematical framework that unifies three seemingly disparate domains – purpose-free entities in statistical mechanics, human engineered teleological


systems in systems engineering, and nature-evolved teleological systems in biology and sociology – within the same conceptual formalism, expressed as four postulates in the Macro view or as two postulates in the Micro view. These reduce to the familiar postulates of statistical thermodynamics and statistical mechanics in the limiting case of purpose-free entities. This conceptual framework, expressed mathematically, is essentially a unification of the perspectives of Gibbs and Boltzmann in statistical mechanics, Shannon in information theory, Darwin in biology, Nash and Shapley in game theory, and Smith, Mill, Rawls, and Nozick in political philosophy. This universality, bridging seemingly disparate domains, is quite appealing. The fundamental principle that lies at the core of this unification is equality, expressed as the principle of maximum fairness. The theory rests on three key conceptual insights. The first is that the concept of entropy from statistical mechanics and information theory is the same as the potential from game theory, thus making it possible to unify theories of purpose-free and purpose-driven entities. The second is that entropy mathematically models the concept of fairness in economics and philosophy or, equivalently, the concept of robustness in systems engineering. This facilitates integrating fundamental principles of political philosophy, such as equality and liberty, with those of economics, in a mathematical framework. The interpretation of entropy as a measure of robustness is critical to the unification of the systems engineering objectives of a teleological system with the postulates of statistical mechanics. The third insight is that when one maximizes fairness, all agents enjoy the same level of utility at equilibrium in an ideal free-market society. Alternately, in the context of systems engineering, by maximizing entropy subject to the given constraints, one produces an optimally robust design, which is what one desires. These insights, captured in the postulates of the Macro and Micro perspectives of statistical teleodynamics, help us prove that the fairest inequality of income is a lognormal distribution, at equilibrium, under ideal conditions. One may view this result as an "economic law" in the statistical thermodynamics sense. The ideal free market, guided by Adam


Smith's "invisible hand," can self-organize to discover and obey this "economic law" if allowed to function freely, without rent seeking, market power exploitation, or other such unfair interferences subverting the free-market mechanism. This result is the economic equivalent of the Gibbs-Boltzmann exponential distribution in statistical thermodynamics. Similarly, the same postulates also help us predict the emergence of the three classes of network organization – exponential, scale-free, and Poisson – seen in a variety of domains. Thus, the same theoretical framework explains and predicts the emergence of certain features of the organization and behavior of systems of purpose-free entities and of teleological systems, both human engineered and naturally evolved. The surprising insight of the equality of utility for all workers is a particularly valuable result, as it proves that the ideal free-market society is intrinsically, or inherently, fair. What can be fairer than treating all workers equally? This insight answers a fundamental open question that has challenged free-market societies for centuries: how much economic inequality is fair? This question is particularly relevant now, in the U.S. and elsewhere, given the recent trends of extreme inequality in income and wealth. Consider the U.S., for example. Between 1980 and 2015, the U.S. GDP increased by about 150% (in constant 2010 dollars), but wages for the bottom 90% of the population grew by only 15% in constant dollars. Wage growth for the top 1% during the same period was 138%. Most of the productivity gains and GDP growth were captured by the top 10%, particularly the top 1%. The bottom 90% lost out to automation, outsourcing, free trade, and unfair governmental policies. Free-marketers argue that this is fair, as it is the result of technological progress and competition, and is therefore to be expected and accepted in a free-market society. Others argue that it is also due to labor's loss of bargaining power, increasingly powerful corporate lobbies, unfair tax policy, and other such factors, in addition to technology and competition. The fundamental question is whether it is fair for the top 10% to reap most of the fruits of growth while the bottom 90% made sacrifices to make this growth happen. Can we have a viable


society without the bottom 90% doing their part? If the bottom 90% do not cooperate and participate in the free market process, there will be no democracy. A free market democratic society functions properly because of the implicit social contract that everyone will benefit by participating and contributing. Extreme economic inequality poses a systemic risk to the stability of a democratic, free-market society. So the central question is: what is a fair compensation to the bottom 90% to offset the negative effects of automation, outsourcing, and free trade on them? There is no answer to this fundamental question in neoclassical economics, but there is one in statistical teleodynamics. Recall that, at equilibrium, when income is distributed lognormally, everyone enjoys the same utility. Since everyone is rewarded equally, this is the fairest outcome possible. Therefore, if we were to take the Declaration of Independence and the U.S. Constitution seriously, and really believe in and practice the social contract of equality, then the bottom 90% ought to be compensated through taxes and transfers such that the resulting income distribution, post-taxes and post-transfers, is lognormal, thereby ensuring equality. Doing so would restore the intrinsic fairness of the ideal free market by making everyone's utility the same. This has important implications for tax policy and corporate governance. My theory does not, of course, address the emergence of consciousness and the other examples I mentioned at the beginning, but it is not meant to. Even though I have been interested in these bigger questions, my goal here is more limited: to gain an understanding of how statistical thermodynamics can be generalized to include rational agents. The unexpected result of this pursuit is that the new theory answers one of the major open questions in economics, which is a pleasant surprise. It also answers certain questions regarding the emergence of macroscopic topological features in networks. However, as a mathematical theory of emergence, statistical teleodynamics is only a small step toward a more comprehensive theory of emergence, the true Theory of Everything. I believe that such a theory is likely to be a synthesis of several fundamental concepts, principles, and mechanisms from different disciplines, as statistical teleodynamics has been. In that sense, statistical teleodynamics


could be a useful model as we continue the search for a more comprehensive theory of emergence. In addition to the fields considered in statistical teleodynamics, I believe that artificial intelligence, neurobiology, and psychology will play crucial roles in a theory of consciousness. Many believe that consciousness is largely a biological problem. I don't think so. I think it really is a problem at the intersection of artificial intelligence, systems engineering, statistical mechanics, game theory, and psychology. This is where the conceptual challenges lie. The implementation of this scheme is in biology, which can provide valuable insights, but the conceptual breakthrough is likely to come from a synthesis of these other domains. Do we need new mathematical tools to accomplish this? Probably not. I believe we already have the math we might need, as has been the case in past discoveries. The challenges are mainly conceptual, as before. I believe statistical teleodynamics is the natural generalization of statistical thermodynamics, the most successful parts-to-whole systems theory we have at present. But statistical teleodynamics is only a small step toward a more comprehensive mathematical theory of emergence, one that can also answer questions about the emergence of hierarchical morphologies, and of phenomena like self-awareness, in complex adaptive systems. It is indeed a long road ahead, and it has already been a long road for me, one less traveled by, but the journey promises to remain intellectually exciting and rewarding, as it has been so far.

Acknowledgement

I would like to express my deep gratitude to Professor Keith Gubbins for serving as my doctoral mentor and for introducing me to statistical thermodynamics. He was kind enough to let me explore my wide-ranging intellectual interests during my doctoral study, even though he knew that I was perhaps spending more time on these than on my own thesis


topic. I also thank my students and collaborators over the years who have contributed to this work in one form or another. This work is supported in part by the Center for the Management of Systemic Risk (CMSR) at Columbia University.

Notes and References

(1) Venkatasubramanian, V.; Katare, S.; Patkar, P. R.; Mu, F.-p. Spontaneous emergence of complex optimal networks through evolutionary adaptation. Computers & Chemical Engineering 2004, 28, 1789–1798.
(2) Venkatasubramanian, V.; Politis, D. N.; Patkar, P. R. Entropy maximization as a holistic design principle for complex optimal networks. AIChE Journal 2006, 52, 1004–1009.
(3) Venkatasubramanian, V. A theory of design of complex teleological systems: Unifying the Darwinian and Boltzmannian perspectives. Complexity 2007, 12, 14–21.
(4) Venkatasubramanian, V. What is fair pay for executives? An information theoretic analysis of wage distributions. Entropy 2009, 11, 766–781.
(5) Venkatasubramanian, V. Fairness is an emergent self-organized property of the free market for labor. Entropy 2010, 12, 1514–1531.
(6) Venkatasubramanian, V.; Luo, Y.; Sethuraman, J. How much inequality in income is fair? A microeconomic game theoretic perspective. Physica A: Statistical Mechanics and its Applications 2015, 435, 120–138.
(7) Venkatasubramanian, V. How Much Inequality is Fair? Mathematical Principles of a Moral, Optimal, and Stable Capitalist Society; Columbia University Press, 2017.
(8) Sandholm, W. H. Population Games and Evolutionary Dynamics; MIT Press, 2010.

(9) Rosenthal, R. W. A class of games possessing pure-strategy Nash equilibria. International Journal of Game Theory 1973, 2, 65–67.
(10) Easley, D.; Kleinberg, J. Networks, Crowds, and Markets: Reasoning About a Highly Connected World; Cambridge University Press, 2010.
(11) Ref. 8, p 60.
(12) Jaynes, E. T. Information theory and statistical mechanics. Physical Review 1957, 106, 620–630.
(13) Jaynes, E. T. Information theory and statistical mechanics. II. Physical Review 1957, 108, 171–190.
(14) Kapur, J. N. Maximum-Entropy Models in Science and Engineering; John Wiley & Sons, 1989.
(15) Foley, D. K. A statistical equilibrium theory of markets. Journal of Economic Theory 1994, 62, 321–345.
(16) Levy, M.; Solomon, S. Power laws are logarithmic Boltzmann laws. International Journal of Modern Physics C 1996, 7, 595–601.
(17) Foley, D. K. Statistical equilibrium in a simple labor market. Metroeconomica 1996, 47, 125–147.
(18) Stanley, H. E.; Amaral, L. A. N.; Canning, D.; Gopikrishnan, P.; Lee, Y.; Liu, Y. Econophysics: Can physicists contribute to the science of economics? Physica A: Statistical Mechanics and its Applications 1999, 269, 156–169.
(19) Foley, D. K. Statistical Equilibrium in Economics: Method, Interpretation, and an Example. 1999.

(20) Bouchaud, J.-P.; Mézard, M. Wealth condensation in a simple model of economy. Physica A: Statistical Mechanics and its Applications 2000, 282, 536–545.
(21) Dragulescu, A.; Yakovenko, V. M. Statistical mechanics of money. The European Physical Journal B 2000, 17, 723–729.
(22) Souma, W. Universal structure of the personal income distribution. Fractals 2001, 9, 463–470.
(23) Willis, G.; Mimkes, J. Evidence for the independence of waged and unwaged income, evidence for Boltzmann distributions in waged income, and the outlines of a coherent theory of income distribution. arXiv cond-mat preprint, 2004.
(24) Farmer, J. D.; Smith, E.; Shubik, M. Is economics the next physical science? Physics Today 2005, 58, 37–42.
(25) Richmond, P.; Hutzler, S.; Coelho, R.; Repetowicz, P. A Review of Empirical Studies and Models of Income Distributions in Society; Wiley-VCH: Berlin, Germany, 2006.
(26) Chatterjee, A.; Sinha, S.; Chakrabarti, B. K. Economic inequality: Is it natural? Current Science 2007, 92, 1383–1389.
(27) Chatterjee, A.; Yarlagadda, S.; Chakrabarti, B. K. Econophysics of Wealth Distributions: Econophys-Kolkata I; Springer Science & Business Media, 2005.
(28) Smith, E.; Foley, D. K. Classical thermodynamics and economic general equilibrium theory. Journal of Economic Dynamics and Control 2008, 32, 7–65.
(29) Yakovenko, V. M.; Rosser, J. B., Jr. Colloquium: Statistical mechanics of money, wealth, and income. Reviews of Modern Physics 2009, 81, 1703.
(30) Foley, D. K. What's wrong with the fundamental existence and welfare theorems? Journal of Economic Behavior & Organization 2010, 75, 115–131.

(31) Willis, G. Wealth, Income, Earnings and the Statistical Mechanics of Flow Systems; 2011.
(32) Yakovenko, V. M. Statistical mechanics approach to the probability distribution of money. In New Approaches to Monetary Theory: Interdisciplinary Perspectives; Ganssmann, H., Ed.; 2012; pp 104–123.
(33) Chakrabarti, B. K.; Chakraborti, A.; Chakravarty, S. R.; Chatterjee, A. Econophysics of Income and Wealth Distributions; Cambridge University Press, 2013.
(34) Cho, A. Physicists say it's simple. Science 2014, 344, 828.
(35) Axtell, R. Endogenous Dynamics of Firms and Labor with Large Numbers of Simple Agents. 2014; http://www.lem.sssup.it/WPLem/documents/axtell.pdf.
(36) Gallegati, M.; Keen, S.; Lux, T.; Ormerod, P. Worrying trends in econophysics. Physica A: Statistical Mechanics and its Applications 2006, 370, 1–6.
(37) Ormerod, P. The current crisis and the culpability of macroeconomic theory. Twenty-First Century Society 2010, 5, 5–18.
(38) Mirowski, P. More Heat than Light: Economics as Social Physics, Physics as Nature's Economics; Cambridge University Press, 1989.
(39) Sen, A.; Foster, J. E. On Economic Inequality; Oxford University Press, 1997; pp 35–36.
(40) Samuelson, P. A. Gibbs in economics. Proceedings of the Gibbs Symposium, 1990; pp 255–267.
(41) Samuelson, P. A. Maximum principles in analytical economics. The American Economic Review 1972, 249–262.
(42) Samuelson, P. In The Collected Scientific Papers; Merton, R., Ed.; MIT Press: Cambridge, MA, 1972; Vol. 3.

(43) Perline, R. Strong, weak and false inverse power laws. Statistical Science 2005, 20, 68–88.
(44) Georgescu-Roegen, N. The entropy law and the economic process in retrospect. Schriftenreihe des IÖW 1987, 5, 87.
(45) Proops, J. L.; Safonov, P. Modelling in Ecological Economics; Edward Elgar Publishing, 2004.
(46) Baumgartner, S. Thermodynamic Models: Rationale, Concepts, and Caveats. 2005.
(47) Boltzmann, L. Lectures on Gas Theory; University of California Press, 1964.
(48) von Helmholtz, H. Wissenschaftliche Abhandlungen; J. A. Barth, 1883; Vol. 2.
(49) Gibbs, J. W. The Collected Works of J. Willard Gibbs; Yale University Press, 1928; Vol. 1.
(50) Lambert, F. L. Disorder-A cracked crutch for supporting entropy discussions. Journal of Chemical Education 2002, 79, 187.
(51) Shannon, C. E. A mathematical theory of communication. Bell System Technical Journal 1948, 27, 379–423.
(52) Reif, F. Fundamentals of Statistical and Thermal Physics; McGraw-Hill, 1965.
(53) Monderer, D.; Shapley, L. S. Potential games. Games and Economic Behavior 1996, 14, 124–143.
(54) Alvaredo, F.; Atkinson, A. B.; Piketty, T.; Saez, E. The World Top Incomes Database (top 10% and top 1% income shares, P99.9 threshold, average income per tax unit, bottom 90% average income, and top 0.5% average income; Australia, Canada, Denmark, France, Germany, Japan, Netherlands, Norway, Sweden, Switzerland, UK, and US). 2015; http://topincomes.g-mond.parisschoolofeconomics.eu/.

(55) Piketty, T.; Saez, E. Income inequality in the United States, 1913–1998. Quarterly Journal of Economics 2003, 118, 1–41.
(56) Guimerà, R.; Díaz-Guilera, A.; Vega-Redondo, F.; Cabrales, A.; Arenas, A. Optimal network topologies for local search with congestion. Physical Review Letters 2002, 89, 248701.
