Posts Tagged ‘entropy’

A thermodynamic system doesn’t have to be big. Although thermodynamics was originally concerned with very large objects like steam engines for pumping out coal mines, thermodynamic thinking can equally well be applied to very small systems consisting of, say, just a few atoms.

Of course, we know that very small systems play by different rules – namely quantum rules – but that’s ok. The rules are known and can be applied. So let’s imagine that our thermodynamic system is an idealized solid consisting of three atoms, each distinguishable from the others by its unique position in space, and each able to perform simple harmonic oscillations independently of the others. At the absolute zero of temperature, the system will have no thermal energy, one microstate and zero entropy, with each atom in its vibrational ground state.

Harmonic motion is quantized, such that if the energy of the ground state is taken as zero and the energy of the first excited state as ε, then 2ε is the energy of the second excited state, 3ε is the energy of the third excited state, and so on. Suppose that from its thermal surroundings our 3-atom system absorbs one unit of energy ε, sufficient to set one of the atoms oscillating. Clearly, one unit of energy can be distributed among three atoms in three different ways – 100, 010, 001 – or in more compact notation [100|3].

Now let’s consider 2ε of absorbed energy. Our system can absorb this in two ways: either by promoting one oscillator to its second excited state, or by promoting two oscillators to their first excited states. Each of these energy distributions can be achieved in three ways, which we can write [200|3], [110|3]. For 3ε of absorbed energy, there are three distributions: [300|3], [210|6], [111|1].
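These counts are easy to check by brute force. Here is a minimal Python sketch (my own illustration, not part of the original post) that simply enumerates every way of assigning quanta to the oscillators:

```python
from itertools import product

def count_microstates(N, E):
    """Count the ways E quanta can be shared among N distinguishable
    oscillators; each microstate is a tuple of quanta per oscillator."""
    return sum(1 for state in product(range(E + 1), repeat=N)
               if sum(state) == E)

for E in range(4):
    print(E, count_microstates(3, E))  # prints 1, 3, 6, 10 for E = 0, 1, 2, 3
```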

Summarizing the above information

Energy E (in units of ε)   Total microstates W   Ratio of successive W’s
0                          1                     –
1                          3                     3
2                          6                     2
3                          10                    1⅔

The summary shows that as E increases, so does W. This is to be expected, since as W increases, the entropy S (= k log W) increases. In other words E and S increase or decrease together; the ratio ∂E/∂S is always positive. Since ∂E/∂S = T, the finding that E and S increase or decrease together is equivalent to saying that the absolute temperature of the system is always positive.

– – – –

Adding an extra particle

It is instructive to compare the distribution of energy among three oscillators (N = 3)*

E = 0: [000|1]
E = 1: [100|3]
E = 2: [200|3], [110|3]
E = 3: [300|3], [210|6], [111|1]

with the distribution among four oscillators (N = 4)*

E = 0: [0000|1]
E = 1: [1000|4]
E = 2: [2000|4], [1100|6]
E = 3: [3000|4], [2100|12], [1110|4]

*For any single distribution among N oscillators where n0, n1, n2 … represent the number of oscillators in the ground state, first excited state, second excited state etc, the number of microstates is given by

$$W = \frac{N!}{n_0!\,n_1!\,n_2!\cdots}$$

It is understood that 0! = 1. Derivation of the formula is given in Appendix I.
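In Python, the formula is a one-liner over the occupation numbers – a sketch of mine (function name arbitrary), checked against the distributions listed above:

```python
from math import factorial
from collections import Counter

def microstates(distribution):
    """N!/(n0! n1! n2! ...) for a single energy distribution, where the
    occupation numbers count how many oscillators hold 0, 1, 2, ... quanta."""
    w = factorial(len(distribution))           # N!
    for n in Counter(distribution).values():   # occupation numbers n0, n1, ...
        w //= factorial(n)
    return w

print(microstates((2, 1, 0)))      # the [210] distribution over 3 oscillators -> 6
print(microstates((2, 1, 0, 0)))   # the [2100] distribution over 4 oscillators -> 12
```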

For both the 3-oscillator and 4-oscillator systems, the populations – averaged over all microstates at a given energy – grade downward: the first excited state is never less populated than the second, and the second never less than the third, so the ratios n1/n0 > n2/n1 > n3/n2 are all less than unity.

Example calculations for N = 4, E = 3:

$$[3000]:\ W = \frac{4!}{3!\,1!} = 4$$

$$[2100]:\ W = \frac{4!}{2!\,1!\,1!} = 12$$

$$[1110]:\ W = \frac{4!}{1!\,3!} = 4$$

Comparisons can also be made of a single ratio across distributions and between systems. For example, the values of n1/n0 for E = 0, 1, 2, 3 are

(N = 4) : 0, ⅓, ½, ⅗
(N = 3) : 0, ½, ⅔, ¾
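These fractions can be reproduced by averaging the level populations over every microstate – another brute-force Python sketch of mine, not from the original post:

```python
from itertools import product
from fractions import Fraction

def n1_over_n0(N, E):
    """Average population of the first excited state divided by that of the
    ground state, over all microstates of N oscillators sharing E quanta."""
    states = [s for s in product(range(E + 1), repeat=N) if sum(s) == E]
    return Fraction(sum(s.count(1) for s in states),
                    sum(s.count(0) for s in states))

print([n1_over_n0(4, E) for E in range(4)])  # [0, 1/3, 1/2, 3/5]
print([n1_over_n0(3, E) for E in range(4)])  # [0, 1/2, 2/3, 3/4]
```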

Since for a macroscopic system

$$\frac{n_1}{n_0} = e^{-\varepsilon/kT}$$

it follows that for a given value of E the 4-oscillator system is colder than the 3-oscillator system. The same conclusion can be reached by looking at the ratio of successive W’s for the 4-oscillator system sharing 0 to 3 units of thermal energy

Energy E (in units of ε)   Total microstates W   Ratio of successive W’s
0                          1                     –
1                          4                     4
2                          10                    2½
3                          20                    2

For the 4-oscillator system the ratios of successive W’s are larger than the corresponding ratios for the 3-oscillator system. The logarithms of these ratios are inversely proportional to the absolute temperature, so the larger the ratio the lower the temperature.
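In symbols, a sketch of the reasoning: since S = k log W and successive rows of the table differ by ΔE = ε,

$$\frac{1}{T} = \frac{\partial S}{\partial E} \approx \frac{k\ln W_{E+\varepsilon} - k\ln W_{E}}{\varepsilon} = \frac{k}{\varepsilon}\,\ln\frac{W_{E+\varepsilon}}{W_{E}}$$

so the larger the ratio of successive W’s, the larger 1/T and hence the lower the temperature.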

– – – –

Finite differences

The differences between successive W’s for a 4-oscillator system are the values of W for a 3-oscillator system

W for (N = 4) : 1, 4, 10, 20
Differences : 3, 6, 10

Likewise, the differences between successive W’s for a 3-oscillator system are the values of W for a 2-oscillator system

W for (N = 3) : 1, 3, 6, 10
Differences : 2, 3, 4

and the differences between successive W’s for a 2-oscillator system are the values of W for a 1-oscillator system

W for (N = 2) : 1, 2, 3, 4
Differences : 1, 1, 1

This implies that W for the 4-particle system can be expressed as a cubic in n, and that W for the 3-particle system can be expressed as a quadratic in n etc. Evaluation of coefficients leads to the following formula progression

For N = 1

$$W = 1$$

For N = 2

$$W = n + 1$$

For N = 3

$$W = \frac{(n+1)(n+2)}{2!}$$

For N = 4

$$W = \frac{(n+1)(n+2)(n+3)}{3!}$$

It appears that in general

$$W = \frac{(n+1)(n+2)\cdots(n+N-1)}{(N-1)!} = \frac{(N-1+n)!}{n!\,(N-1)!}$$

Since n = E/ε and ε = hν, the above equation can be written

$$W = \frac{\left(N-1+\dfrac{E}{h\nu}\right)!}{\left(\dfrac{E}{h\nu}\right)!\,(N-1)!}$$

For a system of oscillators this formula describes the functional dependence of W microstates on the size of the particle ensemble (N), its energy (E), the mechanical frequency of its oscillators (ν) and Planck’s constant (h).
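The closed form is the binomial coefficient C(N−1+n, n), which a few lines of Python can check against direct enumeration (an illustrative sketch of mine):

```python
from math import comb
from itertools import product

def count_microstates(N, E):
    """Direct count of microstates for N oscillators sharing E quanta."""
    return sum(1 for s in product(range(E + 1), repeat=N) if sum(s) == E)

# W = (N - 1 + n)!/(n!(N - 1)!) is the binomial coefficient C(N - 1 + n, n)
for N in range(1, 5):
    for n in range(5):
        assert comb(N - 1 + n, n) == count_microstates(N, n)
print("closed form agrees with direct enumeration for N = 1..4, n = 0..4")
```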

– – – –

Appendix I

Formula to be derived

For any single distribution among N oscillators where n0, n1, n2 … represent the number of oscillators in the ground state, first excited state, second excited state etc, the number of microstates is given by

$$W = \frac{N!}{n_0!\,n_1!\,n_2!\cdots}$$

Derivation

In combinatorial analysis, the above comes into the category of permutations of sets with the possible occurrence of indistinguishable elements.

Consider the distribution of 3 units of energy across 4 oscillators such that one oscillator has two units, another has the remaining one unit, and the other two oscillators are in the ground state: {2100}

If each of the four numbers was distinct, there would be 4! possible ways to arrange them. But the two zeros are indistinguishable, so the number of ways is reduced by a factor of 2!. The number of ways to arrange {2100} is therefore 4!/2! = 12.

The quantum numbers 1 and 2 each occur only once in the above set, while 3 does not occur at all. This does not result in any reduction in the number of possible ways to arrange {2100}, since 1! = 1 and 0! = 1. Their presence in the denominator has no effect, but for completeness we can write

$$\frac{4!}{2!\,1!\,1!\,0!}$$

to compute the number of microstates for the single distribution E = 3, N = 4, {2100} where n0 = 2, n1 = 1, n2 = 1 and n3 = 0.

In the general case, the formula for the number of microstates for a single energy distribution of E among N oscillators is

$$W = \frac{N!}{n_0!\,n_1!\,n_2!\cdots}$$

where the terms in the denominator are as defined above.

– – – –

P Mander April 2016

[Figure: a pulley connecting a fixed weight M1 and a lifted weight M2, shown with M2 much smaller than M1 (left) and with M2 approaching M1 (right)]

Reversible change is a key concept in classical thermodynamics. It is important to understand what is meant by the term as it is closely allied to other important concepts such as equilibrium and entropy. But reversible change is not an easy idea to grasp – it helps to be able to visualize it.

Reversibility and mechanical systems

The simple mechanical system pictured above provides a useful starting point. The aim of the experiment is to see how much weight can be lifted by the fixed weight M1. Experience tells us that if a small weight M2 is attached – as shown on the left – then M1 will fall fast while M2 is pulled upwards at the same speed.

Experience also tells us that as the weight of M2 is increased, the lifting speed will decrease until a limit is reached when the weight difference between M2 and M1 becomes vanishingly small and the pulley moves infinitely slowly, as shown on the right.

We now ask the question – Under what circumstances does M1 do the maximum lifting work? Clearly the answer is visualized on the right, where the lifted weight M2 is as close as we can imagine to the weight of M1. In this situation the pulley moves infinitely slowly (like a nanometer in a zillion years!) and is indistinguishable from being at rest.

This state of being as close to equilibrium as we can possibly imagine is the condition of reversible change, where the infinitely slow lifting motion could be reversed by an infinitely small nudge in the opposite direction.

From this simple mechanical experiment we can draw an important conclusion: the work done under reversible conditions is the maximum work that the system can do.
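In symbols (with h the lift height and g the gravitational acceleration – notation of my own choosing, not in the original):

$$W = M_2\,g\,h \;\longrightarrow\; W_{\max} = M_1\,g\,h \quad\text{as } M_2 \to M_1$$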

Any other conditions i.e. when the pulley moves with finite, observable speed, are irreversible and the work done is less than the maximum work.

The irreversibility is explained by the fact that observable change inevitably involves some dissipation of energy, making it impossible to reverse the change and exactly restore the initial state of the system and surroundings.

– – – –

Reversibility and thermodynamic systems

The work-producing system so far considered has been purely mechanical – a pulley and weights. Thermodynamic systems produce work through different means such as temperature and pressure differences, but however the work is produced, the work done under reversible conditions is always the maximum work that a system can do.

In thermodynamic systems, heat q and work w are connected by the first law relationship

$$\Delta U = q - w$$

What this equation tells us is that for a given change in internal energy (ΔU), both the heat absorbed and the work done in a reversible change are the maximum possible. The corresponding irreversible process absorbs less heat and does less work.

It helps to think of this in simple numbers. U is a state function and therefore ΔU is a fixed amount regardless of the way the change is carried out. Say ΔU = 2 units and the reversible work w = 4 units. The heat q absorbed in this reversible change is therefore 6 units. These must be the maximum values of w and q, because ΔU is fixed at 2; for any other change than reversible change, w is less than 4 and so q is less than 6.
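The same numbers in symbols:

$$q_{\mathrm{rev}} = \Delta U + w_{\mathrm{rev}} = 2 + 4 = 6, \qquad q_{\mathrm{irr}} = 2 + w_{\mathrm{irr}} < 6 \ \text{ since } w_{\mathrm{irr}} < 4$$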

For an infinitesimal change, the inequality in relation to q can be written

$$dq_{\mathrm{rev}} > dq_{\mathrm{irr}}$$

and so for a change at temperature T

$$\frac{dq_{\mathrm{rev}}}{T} > \frac{dq_{\mathrm{irr}}}{T}$$

The term on the left defines the change in the state function entropy

$$dS = \frac{dq_{\mathrm{rev}}}{T}$$

Since reversible conditions equate to equilibrium and irreversible conditions equate to observable change, it follows that

$$dS = \frac{dq}{T}\ \text{(equilibrium)} \qquad dS > \frac{dq}{T}\ \text{(observable change)}$$

These criteria are fundamental. They are true for all thermodynamic processes, subject only to the restriction that the system is a closed one i.e. there is no mass transfer between system and surroundings. It is from these expressions that the conclusion can be drawn – as famously stated by Clausius – that entropy increases towards a maximum in isolated systems.

Rudolf Clausius (1822-1888)

– – – –

Die Entropie der Welt strebt einem Maximum zu [The entropy of the world strives towards a maximum]

Consider an adiabatic change in a closed system: dq = 0 so the above criteria for equilibrium and observable change become dS = 0 and dS > 0 respectively. If the volume is also kept constant during the change, it follows from the first law that dU = 0. In other words the volume and internal energy of the system are constant and so the system is isolated, with no energy or mass transfer between system and surroundings.

Under these circumstances the direction of observable change is such that entropy increases towards a maximum; when there is equilibrium, the entropy is constant. The criteria for these conditions may be expressed as follows

$$(dS)_{U,V} > 0\ \text{(observable change)} \qquad (dS)_{U,V} = 0\ \text{(equilibrium: S at a maximum)}$$

Note:
The assertion that entropy increases towards a maximum is true only under the restricted conditions of constant U and V. Such statements as “the entropy of the universe tends to a maximum” therefore depend on assumptions, such as a non-expanding universe, that are not known to be fulfilled.

– – – –

P Mander March 2015

James Clerk Maxwell and the geometrical figure with which he proved his famous thermodynamic relations

Historical background

Every student of thermodynamics sooner or later encounters the Maxwell relations – an extremely useful set of statements of equality among partial derivatives, principally involving the state variables P, V, T and S. They are general thermodynamic relations valid for all systems.

The four relations originally stated by Maxwell are easily derived from the (exact) differential relations of the thermodynamic potentials:

dU = TdS – PdV   ⇒   (∂T/∂V)S = –(∂P/∂S)V
dH = TdS + VdP   ⇒   (∂T/∂P)S = (∂V/∂S)P
dG = –SdT + VdP   ⇒   –(∂S/∂P)T = (∂V/∂T)P
dA = –SdT – PdV   ⇒   (∂S/∂V)T = (∂P/∂T)V
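Each implication follows from the equality of mixed second derivatives of the potential concerned; taking dG = –SdT + VdP as an example:

$$\left(\frac{\partial G}{\partial T}\right)_P = -S,\quad \left(\frac{\partial G}{\partial P}\right)_T = V \;\Longrightarrow\; \frac{\partial^2 G}{\partial P\,\partial T} = -\left(\frac{\partial S}{\partial P}\right)_T = \left(\frac{\partial V}{\partial T}\right)_P$$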

This is how we obtain these Maxwell relations today, but it disguises the history of their discovery. The thermodynamic state functions H, G and A were yet to be created when Maxwell published the above relations in his 1871 textbook Theory of Heat. The startling fact is that Maxwell navigated his way to these relations using nothing more than a diagram of the Carnot cycle, allied to an ingenious exercise in plane geometry.

Another historical truth that modern derivations conceal is that entropy did not feature as the conjugate variable to temperature (θ) in Maxwell’s original relations; instead Maxwell used Rankine’s thermodynamic function (Φ) which is identical with – and predates – the state function entropy (S) introduced by Clausius in 1865.

Maxwell’s use of Φ instead of S was not a matter of personal preference. It could not have been otherwise, because Maxwell misunderstood the term entropy at the time when he wrote his book (1871), believing it to represent the available energy of a system. From a dimensional perspective – and one must remember that Maxwell was one of the founders of dimensional analysis – it was impossible for entropy as he understood it to be the conjugate variable to temperature. By contrast, it was clear to Maxwell that Rankine’s Φ had the requisite dimensions of ML²T⁻²θ⁻¹.

Two years later, in an 1873 publication entitled A method of geometrical representation of the thermodynamic properties of substances by means of surfaces, the American physicist Josiah Willard Gibbs politely pointed out Maxwell’s error in regard to the units of measurement of entropy:

[Extract from Gibbs’ 1873 paper, correcting the definition of entropy given in Theory of Heat]

Maxwell responded in a subsequent edition of Theory of Heat with a contrite apology for misleading his readers:

[Maxwell’s corrective note from a later edition of Theory of Heat]

– – – –

Carnot Cycle revisited

The centrepiece of the geometrical construction with which Maxwell proves his thermodynamic relations is a quadrilateral drawn 37 years earlier by Émile Clapeyron in his 1834 paper Mémoire sur la Puissance Motrice de la Chaleur (Memoir on the motive power of heat).

[Figure: Clapeyron’s pressure-volume diagram of the Carnot cycle, the quadrilateral abcd]

When Émile Clapeyron drew this PV-plane representation of the Carnot cycle in 1834, heat was believed to be a conserved quantity. By the time Maxwell used the diagram in 1871, heat and work were understood to be interconvertible forms of energy, with energy being the conserved quantity.

This is the first analytical representation of the Carnot cycle, shown as a closed curve on a pressure-volume indicator diagram. The sides ab and cd represent isothermal lines, the sides ad and bc adiabatic lines. By assigning infinitely small values to the variations of volume and pressure during the successive operations of the cycle, Clapeyron renders this quadrilateral a parallelogram.

The area enclosed by the curve equates to the work done in a complete cycle, and Maxwell uses the following contrivance to set this area equal to unity.

Applying Carnot’s principle, Maxwell expresses the work W done as a function of the heat H supplied

W = H(T2 – T1)/T2

with T2 and T1 representing the absolute temperatures of the source and sink respectively.
Maxwell then defines

T2 – T1 = 1
H/T2 = 1

The conversion of heat into work is thus expressed as the product of a unit change in temperature T and a unit change in Rankine’s thermodynamic function Φ, equivalent to entropy S:

W = ΔT · ΔS = 1 × 1 = 1

Maxwell’s definitions also give the parallelogram the property that any line drawn from one isothermal line to the other, or from one adiabatic line to the other, is of unit length when reckoned in the respective dimensions of temperature or entropy. This is of central significance to what follows.

– – – –

Geometrical extensions

Maxwell’s geometric machinations consist in extending the isothermal (T1T2) and adiabatic lines (Φ1Φ2) of the original figure ABCD and adding vertical lines (pressure) and horizontal lines (volume) to create four further parallelograms with the aim of proving their areas also equal to unity, while at the same time enabling each of these areas to be expressed in terms of pressure and volume as a base-altitude product.

[Figure: Maxwell’s construction, extending the Carnot parallelogram ABCD with the parallelograms AKQD, ABEL, AMFD and ABHN]

As the image from Theory of Heat shown at the head of this article reveals, Maxwell did not fully trace out the perimeters of three (!) of the four added parallelograms. I have extended four lines to the arbitrarily labelled points E, F and H in order to complete the figure.

– parallelogram AKQD stands on the same base AD as ABCD and lies between the same parallels T1T2 so its area is also unity, expressible in terms of volume and pressure as the base-altitude product AK.Ak

– parallelogram ABEL stands on the same base AB as ABCD and lies between the same parallels Φ1Φ2 so its area is also unity, expressible in terms of volume and pressure as the base-altitude product AL.Al

– parallelogram AMFD stands on the same base AD as ABCD and lies between the same parallels T1T2 so its area is also unity, expressible in terms of pressure and volume as the base-altitude product AM.Am

– parallelogram ABHN stands on the same base AB as ABCD and lies between the same parallels Φ1Φ2 so its area is also unity, expressible in terms of pressure and volume as the base-altitude product AN.An

– line AD, which represents a unit rise in entropy at constant temperature, resolves into the vertical (pressure) and horizontal (volume) components Ak and Am

– line AB, which represents a unit rise in temperature at constant entropy, resolves into the vertical (pressure) and horizontal (volume) components Al and An

– in summary: ABCD = AK.Ak = AL.Al = AM.Am = AN.An = 1 [dimensions ML²T⁻²]

– – – –

Maxwell’s thermodynamic relations

Maxwell’s next step is to interpret the physical meaning of these four pairs of lines.

AK is the volume increase per unit rise in temperature at constant pressure: (∂V/∂T)P
Ak is the pressure decrease per unit rise in entropy at constant temperature: –(∂P/∂S)T

Recalling the property of partial derivatives that given the implicit function f(x,y,z) = 0

$$\left(\frac{\partial x}{\partial y}\right)_z = 1\bigg/\left(\frac{\partial y}{\partial x}\right)_z$$

Since AK = 1/Ak

(∂V/∂T)P = –(∂S/∂P)T

AL is the volume increase per unit rise in entropy at constant pressure: (∂V/∂S)P
Al is the pressure increase per unit rise in temperature at constant entropy: (∂P/∂T)S

Since AL = 1/Al

(∂V/∂S)P = (∂T/∂P)S

AM is the pressure increase per unit rise in temperature at constant volume: (∂P/∂T)V
Am is the volume increase per unit rise in entropy at constant temperature: (∂V/∂S)T

Since AM = 1/Am

(∂P/∂T)V = (∂S/∂V)T

AN is the pressure increase per unit rise in entropy at constant volume: (∂P/∂S)V
An is the volume decrease per unit rise in temperature at constant entropy: –(∂V/∂T)S

Since AN = 1/An

(∂P/∂S)V = –(∂T/∂V)S
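As a sanity check, any of these relations can be verified symbolically for a concrete system. A sketch of mine using sympy and the standard ideal-gas expressions (not part of Maxwell’s argument):

```python
import sympy as sp

T, V, R, Cv = sp.symbols('T V R C_v', positive=True)

P = R * T / V                       # ideal-gas equation of state (one mole)
S = Cv * sp.log(T) + R * sp.log(V)  # ideal-gas entropy, additive constant dropped

# Maxwell relation: (dP/dT)_V should equal (dS/dV)_T
assert sp.simplify(sp.diff(P, T) - sp.diff(S, V)) == 0
print("(dP/dT)_V = (dS/dV)_T verified for the ideal gas")
```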

– – – –

In his own words

I leave it to the man himself to conclude this post:

“We have thus obtained four relations among the physical properties of the substance. These four relations are not independent of each other, so as to rank as separate truths. Any one might be deduced from any other. The equality of the products AK, Ak &c., to the parallelogram ABCD and to each other is merely a geometrical truth, and does not depend on thermodynamic principles. What we learn from thermodynamics is that the parallelogram and the four products are each equal to unity, whatever be the nature of the substance or its condition as to pressure and temperature.”

– – – –

P Mander August 2014

Constantin Carathéodory (1873-1950)

Update

This post has been translated into Greek by Giorgos Vachtanidis, and can be seen here.

Readers may also be interested to know that my supplementary blogpost “Carathéodory revisited” contains a proof of Carathéodory’s theorem: “If a differential dQ = ΣXidxi, possesses the property that in an arbitrarily close neighborhood of a point P defined by its coordinates (x1, x2,…, xn) there are points which cannot be connected to P along curves satisfying the equation dQ = 0, then dQ is integrable.” The supplementary post can be seen here

– – – –

Back in the days when I was a college student – the era when we wore our hair long, when elbow patches were commonplace and Woodstock was still fresh in our minds – the teaching of thermodynamics took place along two main routes.

The first was the classical route focused on heat and its convertibility into work, led philosophically by Carnot, Mayer and Joule, and developed mathematically by Clausius, Thomson (later Lord Kelvin), Helmholtz and Rankine. The second was the statistical route founded on a molecular model, and associated especially with the names of Boltzmann and Maxwell.

Nobody mentioned the third route. None of us were taught anything about the axiomatic approach to thermodynamics, published in 1909 in Mathematische Annalen under the title “Untersuchungen über die Grundlagen der Thermodynamik” [Examination of the foundations of thermodynamics] by a 36-year-old Greek mathematician called Constantin Carathéodory, who at the time was living in Hannover, Germany.

The title page of Carathéodory’s 1909 paper in Mathematische Annalen

It is clear from the outset of his paper that Carathéodory had studied Gibbs’ magnum opus “On the Equilibrium of Heterogeneous Substances (1875-1878)”. And just like Gibbs, Carathéodory uses the internal energy U and the entropy S (introduced together with the absolute temperature T) as the fundamental building blocks upon which he constructs his version of thermodynamics.

But whereas Gibbs introduces entropy via the classical route taken by Clausius, Carathéodory finds it through a mathematical approach based on the geometric behavior of a certain class of partial differential equations called Pfaffians, named for the German mathematician Johann Friedrich Pfaff (1765-1825) who first studied their properties.

Carathéodory’s investigations start by revisiting the first law and reformulating the second law of thermodynamics in the form of two axioms. The first axiom applies to a multiphase system change under adiabatic conditions:

$$U_{\mathrm{final}} - U_{\mathrm{initial}} + W = 0$$

Nothing original here, since this is an axiom of classical thermodynamics due to Clausius (1850). It asserts the existence of a form of energy known as internal energy U – an intrinsic property of a system whose changes under adiabatic conditions are equal and opposite to the external work W performed (for a closed system not in motion).

In Carathéodory’s approach however, heat is regarded as a derived rather than a fundamental quantity that appears when the adiabatic restriction is removed, i.e. ΔU+W ≠ 0.

The second axiom is a different matter altogether, and constitutes the real novelty of Carathéodory’s approach:

[Carathéodory’s second axiom, quoted in the original German]

This can be rendered in English as “In the neighborhood of any equilibrium state of a system (of any number of thermodynamic coordinates), there exist states that are inaccessible by reversible adiabatic processes.”

For a single substance, this postulate is obvious enough since reversible adiabatic processes are isentropic – a known result of classical thermodynamics. For such processes, all attainable states are represented by points on a curve for which entropy S = constant. There are other points which do not lie on this curve, and which represent states which cannot be reached by adiabatic transition.

But Carathéodory’s arguments go further, making this axiom applicable to a system of multiple bodies and multiple independent variables.

He shows that if in the neighborhood of any given point, corresponding to coordinates x1, x2,…, there are points not expressible by solutions of the Pfaffian equation X1dx1 + X2dx2 +… = 0, then for the expression X1dx1 + X2dx2 +… itself there exists an integrating factor.

The significance of this discovery is that via Carathéodory’s first axiom, the equation of adiabatic condition dQ = 0 admits an integrating factor, which when multiplying dQ renders the product an exact differential of a function whose value is therefore independent of the path between sets of coordinates.

The integrating factor (denominator) in this case is the absolute temperature T, and the path-independent integral ∫dQrev/T is the entropy change ΔS. This conjugate force-displacement pair, whose product is heat, arises directly from the geometric behavior and solutions of Pfaffians.
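A concrete illustration – the textbook ideal-gas case rather than anything in Carathéodory’s paper – shows the integrating factor at work. For one mole of an ideal gas the heat Pfaffian fails the exactness test, but dividing through by T repairs it:

$$dQ = C_V\,dT + \frac{RT}{V}\,dV \quad\text{is inexact:}\quad \frac{\partial C_V}{\partial V} = 0 \;\neq\; \frac{\partial}{\partial T}\!\left(\frac{RT}{V}\right) = \frac{R}{V}$$

$$\frac{dQ}{T} = \frac{C_V}{T}\,dT + \frac{R}{V}\,dV = dS \quad\text{is exact:}\quad \frac{\partial}{\partial V}\!\left(\frac{C_V}{T}\right) = 0 = \frac{\partial}{\partial T}\!\left(\frac{R}{V}\right)$$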

Using these partial differential expressions, Carathéodory obtains a formal thermodynamics without recourse to peculiar notions such as the flow of heat, or cumbrous conceptions such as imaginary heat engines and cycles of operation. In short, Carathéodory reduces the argument to a clean-cut consideration of lines and surfaces, together with a pair of axioms regarding the possibility of reaching certain states by adiabatic means.

It sounds very neat and tidy, as well as highly original, so how come it didn’t figure on our college curriculum? The answer to that question can be found in the reception afforded to Carathéodory’s masterwork by the scientific establishment of the day.

– – – –

Carathéodory’s thermodynamic theory got off to a rather inauspicious start in that it was ignored for the first 12 years of its existence. World War I came and went. Then in 1921, the German mathematician and physicist Max Born took note of Carathéodory’s work and published a set of articles on it entitled Kritische Betrachtungen zur traditionellen Darstellung der Thermodynamik [Critical considerations on the traditional representation of thermodynamics] in Physikalische Zeitschrift.

That got the ball rolling, but only as far as Max Planck, who besides being the towering authority in thermodynamics at the time, also turned out to be a severe critic of the new axiomatic method:

“nobody has up to now ever tried to reach, through adiabatic steps only, every neighborhood of any equilibrium state and to check if they are inaccessible,” Planck wrote, adding “this axiom gives us no hint which would allow us to differentiate the inaccessible from the accessible states.”

Others, impressed by the elegance of Carathéodory’s method, tried to render its formal austerity palatable to a wider audience. But these efforts met with no success, and when Lewis and Randall’s hugely influential and curriculum-setting textbook Thermodynamics and the Free Energy of Chemical Substances appeared in 1923, there was not a mention of Carathéodory or his theory.

Although there have been some notable attempts down the decades to champion Carathéodory’s cause, the axiomatic theory of 1909 has failed to achieve inclusion in mainstream academic teaching, and has been consigned to the catalogue of interesting curiosities. Planck’s enduring criticism of the theory’s failure to provide a compelling physical concept of entropy, together with the equally enduring difficulty of the math, seem to have played the deciding role.

Constantin Carathéodory (left) looking dapper in the company of his father, brother-in-law and sister. Carlsbad, Czechoslovakia 1898. Photo credit Wikimedia Commons

Suggested further reading:

1. The Structure of Physical Chemistry, C.N. Hinshelwood, Oxford University Press (1951)
Chapter III, Thermodynamic Principles, contains a concise introduction to Carathéodory’s theory, together with a discussion comparing its strengths and weaknesses with the classical approach. This book has been reissued as part of the Oxford Classic Texts series.

2. Constantin Carathéodory and the axiomatic thermodynamics, L. Pogliani and M. Berberan-Santos, Journal of Mathematical Chemistry Vol. 28, Nos. 1–3, 2000
This paper reviews the development of Carathéodory’s theory and explores some aspects of Pfaffians, the mathematical tools of axiomatic thermodynamics. A brief biography is also included.
PDF downloadable from http://web.ist.utl.pt/~berberan/data/68.pdf

and if you’re feeling brave…

3. Examination of the foundations of thermodynamics – English translation of Carathéodory’s 1909 paper
PDF downloadable from http://neo-classical-physics.info/uploads/3/0/6/5/3065888/caratheodory_-_thermodynamics.pdf

– – – –

P Mander January 2014

Tucked away at the back of Volume One of The Scientific Papers of J. Willard Gibbs is a brief chapter headed ‘Unpublished Fragments’. It contains a list of nine subject headings for a supplement that Professor Gibbs was planning to write to his famous paper “On the Equilibrium of Heterogeneous Substances”. Sadly, he completed his notes for only two subjects before his death in April 1903, so we will never know what he had in mind to write about the sixth subject in the list: On entropy as mixed-up-ness.

Mixed-up-ness. It’s a catchy phrase, with an easy-to-grasp quality that brings entropy within the compass of minds less given to abstraction. That’s no bad thing, but without Gibbs’ guidance as to exactly what he meant by it, mixed-up-ness has taken on a life of its own and has led to entropy acquiring the derivative associations of chaos and disorder – again, easy-to-grasp ideas since they are fairly familiar occurrences in the lives of just about all of us.

Freed from connexion with more esoteric notions such as spontaneity, entropy has become very easy to recognise in the world around us as a purportedly scientific explanation of all sorts of mixed-up-ness, from unmade beds and untidy piles of paperwork to dysfunctional personal relationships, horse meat in the food chain and the ultimate breakdown of civilization as we know it.

This freely-associated understanding of entropy is now well-entrenched in popular culture and is unlikely to be modified. But in the parallel universe occupied by students of classical thermodynamics, chaotic bed linen and disordered documentation are not seen as entropy-driven manifestations. Sure, how these things come about may defy rational explanation, but they do not happen by themselves. Some external agency, human or otherwise, is always involved.

To physical chemists of the old school like myself, entropy has always been seen as the driver of spontaneously occurring thermodynamic processes, in which the combined entropy of system and surroundings increases to a maximum at equilibrium. This view of entropy partly explains why many of us had difficulty in absorbing the notion of entropy as chaos, since equilibrium always seemed to us a very calm and peaceful thing, quite the opposite of chaos.

Furthermore, we were quite sure that entropy was an extensive property, i.e. one that is dependent on the amount of substance or substances present in a system. But disorder didn’t at all have the feeling of an extensive property. If one (theoretically) divided a thermodynamically disordered system into two equal parts, would each part be half as disordered as the whole? We didn’t think so. To us, there were serious conceptual obstacles to accepting the notion of entropy as disorder.

But while our fundamental understanding of entropy was grounded in the thermal theories of Rankine and Clausius, we did give a statistical nod in the direction of Boltzmann when seeking to explain spontaneous isothermal phenomena. We accepted the notion of aggregation and dispersal as arbiters of entropy change, which we viewed (rightly or wrongly) as separate and distinct from changes in thermal entropy. We even had a name for it – configurational entropy.

Having not one but two different kinds of entropy to play with turned out to be quite useful at times. For example, it helped to explain counter-intuitive spontaneous happenings such as the following:

[Figure: a sealed Dewar flask containing a supercooled, saturated sodium thiosulphate solution, with a seed crystal about to be dropped through a hole in the lid]

This is an experiment I remember well from my college days. The diagram shows a sealed Dewar flask containing a supercooled, saturated solution of sodium thiosulphate (aka thiosulfate). A tiny seeding crystal is dropped through a hole in the lid. Crystallization immediately occurs, with an apparent increase in organisation as piles of highly regular crystals form in the solution. It’s an awesome sight to behold.

The experiment provides an unequivocal demonstration that visually-assessed disorganisation and entropy cannot be regarded as synonymous, for while the former unquestionably decreases, the latter must surely increase because the process is spontaneous.

And in overall terms, indeed it does. Although the configurational entropy of the system decreases due to the aggregation of Na⁺ and S₂O₃²⁻ ions into crystals, the other kind of entropy – thermal entropy – more than compensates as the heat of crystallization causes the temperature of the system to rise. For the whole process ΔSsystem > 0, and therefore ΔSuniverse > 0 since the system is isolated from its surroundings.

As I said, having two kinds of entropy to play with can be useful in explaining things that are otherwise counter-intuitive. The above experiment also serves to show that the fashion in popular culture to interpret entropy simply as mixed-up-ness can end up being more than mildly misleading.