Posts Tagged ‘entropy’


Reversible change is a key concept in classical thermodynamics. It is important to understand what is meant by the term as it is closely allied to other important concepts such as equilibrium and entropy. But reversible change is not an easy idea to grasp – it helps to be able to visualize it.

Reversibility and mechanical systems

The simple mechanical system pictured above provides a useful starting point. The aim of the experiment is to see how much weight can be lifted by the fixed weight M1. Experience tells us that if a small weight M2 is attached – as shown on the left – then M1 will fall fast while M2 is pulled upwards at the same speed.

Experience also tells us that as the weight of M2 is increased, the lifting speed will decrease until a limit is reached when the weight difference between M2 and M1 becomes vanishingly small and the pulley moves infinitely slowly, as shown on the right.

We now ask the question – Under what circumstances does M1 do the maximum lifting work? Clearly the answer is visualized on the right, where the lifted weight M2 is as close as we can imagine to the weight of M1. In this situation the pulley moves infinitely slowly (like a nanometer in a zillion years!) and is indistinguishable from being at rest.

This state of being as close to equilibrium as we can possibly imagine is the condition of reversible change, where the infinitely slow lifting motion could be reversed by an infinitely small nudge in the opposite direction.

From this simple mechanical experiment we can draw an important conclusion: the work done under reversible conditions is the maximum work that the system can do.

Any other conditions – i.e. when the pulley moves with finite, observable speed – are irreversible, and the work done is less than the maximum work.

The irreversibility is explained by the fact that observable change inevitably involves some dissipation of energy, making it impossible to reverse the change and exactly restore the initial state of the system and surroundings.

– – – –

Reversibility and thermodynamic systems

The work-producing system so far considered has been purely mechanical – a pulley and weights. Thermodynamic systems produce work through different means such as temperature and pressure differences, but however the work is produced, the work done under reversible conditions is always the maximum work that a system can do.

In thermodynamic systems, heat q and work w are connected by the first law relationship

ΔU = q – w

where q is the heat absorbed by the system and w is the work done by the system.
What this equation tells us is that for a given change in internal energy (ΔU), both the heat absorbed and the work done in a reversible change are the maximum possible. The corresponding irreversible process absorbs less heat and does less work.

It helps to think of this in simple numbers. U is a state function and therefore ΔU is a fixed amount regardless of the way the change is carried out. Say ΔU = 2 units and the reversible work w = 4 units. The heat q absorbed in this reversible change is therefore 6 units. These must be the maximum values of w and q, because ΔU is fixed at 2; for any other change than reversible change, w is less than 4 and so q is less than 6.
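The arithmetic can be sketched in a few lines of Python (the unit values are the illustrative numbers used above, not measured data):

```python
# First law bookkeeping, ΔU = q - w, with ΔU fixed by the end states.
# Illustrative units from the text: ΔU = 2, reversible work w = 4.
delta_U = 2

w_rev = 4
q_rev = delta_U + w_rev    # heat absorbed reversibly: 6 units (the maximum)

# Any irreversible path between the same states does less work,
# and must therefore absorb less heat, since ΔU cannot change.
w_irrev = 3                # any value less than w_rev
q_irrev = delta_U + w_irrev

assert w_irrev < w_rev and q_irrev < q_rev
print(q_rev, q_irrev)      # 6 5
```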

For an infinitesimal change, the inequality in relation to q can be written

dqrev ≥ dq
and so for a change at temperature T

dqrev/T ≥ dq/T
The term on the left defines the change in the state function entropy

dS = dqrev/T, so that dS ≥ dq/T
Since reversible conditions equate to equilibrium and irreversible conditions equate to observable change, it follows that

dS = dq/T (equilibrium)
dS > dq/T (observable change)
These criteria are fundamental. They are true for all thermodynamic processes, subject only to the restriction that the system is a closed one i.e. there is no mass transfer between system and surroundings. It is from these expressions that the conclusion can be drawn – as famously stated by Clausius – that entropy increases towards a maximum in isolated systems.

Rudolf Clausius (1822-1888)


– – – –

Die Entropie der Welt strebt einem Maximum zu [The entropy of the world tends to a maximum]

Consider an adiabatic change in a closed system: dq = 0 so the above criteria for equilibrium and observable change become dS = 0 and dS > 0 respectively. If the volume is also kept constant during the change, it follows from the first law that dU = 0. In other words the volume and internal energy of the system are constant and so the system is isolated, with no energy or mass transfer between system and surroundings.

Under these circumstances the direction of observable change is such that entropy increases towards a maximum; when there is equilibrium, the entropy is constant. The criteria for these conditions may be expressed as follows

(dS)U,V > 0 (observable change)
(dS)U,V = 0 (equilibrium)
The assertion that entropy increases towards a maximum is true only under the restricted conditions of constant U and V. Such statements as “the entropy of the universe tends to a maximum” therefore depend on assumptions, such as a non-expanding universe, that are not known to be fulfilled.

– – – –

P Mander March 2015


James Clerk Maxwell and the geometrical figure with which he proved his famous thermodynamic relations

Historical background

Every student of thermodynamics sooner or later encounters the Maxwell relations – an extremely useful set of statements of equality among partial derivatives, principally involving the state variables P, V, T and S. They are general thermodynamic relations valid for all systems.

The four relations originally stated by Maxwell are easily derived from the (exact) differential relations of the thermodynamic potentials:

dU = TdS – PdV   ⇒   (∂T/∂V)S = –(∂P/∂S)V
dH = TdS + VdP   ⇒   (∂T/∂P)S = (∂V/∂S)P
dG = –SdT + VdP   ⇒   –(∂S/∂P)T = (∂V/∂T)P
dA = –SdT – PdV   ⇒   (∂S/∂V)T = (∂P/∂T)V

This is how we obtain these Maxwell relations today, but it disguises the history of their discovery. The thermodynamic state functions H, G and A were yet to be created when Maxwell published the above relations in his 1871 textbook Theory of Heat. The startling fact is that Maxwell navigated his way to these relations using nothing more than a diagram of the Carnot cycle, allied to an ingenious exercise in plane geometry.

Another historical truth that modern derivations conceal is that entropy did not feature as the conjugate variable to temperature (θ) in Maxwell’s original relations; instead Maxwell used Rankine’s thermodynamic function (Φ) which is identical with – and predates – the state function entropy (S) introduced by Clausius in 1865.

Maxwell’s use of Φ instead of S was not a matter of personal preference. It could not have been otherwise, because Maxwell misunderstood the term entropy at the time when he wrote his book (1871), believing it to represent the available energy of a system. From a dimensional perspective – and one must remember that Maxwell was one of the founders of dimensional analysis – it was impossible for entropy as he understood it to be the conjugate variable to temperature. By contrast, it was clear to Maxwell that Rankine’s Φ had the requisite dimensions of ML²T⁻²θ⁻¹.

Two years later, in an 1873 publication entitled A method of geometrical representation of the thermodynamic properties of substances by means of surfaces, the American physicist Josiah Willard Gibbs politely pointed out Maxwell’s error in regard to the units of measurement of entropy:


Maxwell responded in a subsequent edition of Theory of Heat with a contrite apology for misleading his readers:


– – – –

Carnot Cycle revisited

The centrepiece of the geometrical construction with which Maxwell proves his thermodynamic relations is a quadrilateral drawn 37 years earlier by Émile Clapeyron in his 1834 paper Mémoire sur la Puissance Motrice de la Chaleur (Memoir on the motive power of heat).


When Émile Clapeyron drew this PV-plane representation of the Carnot cycle in 1834, heat was believed to be a conserved quantity. By the time Maxwell used the diagram in 1871, heat and work were understood to be interconvertible forms of energy, with energy being the conserved quantity.

This is the first analytical representation of the Carnot cycle, shown as a closed curve on a pressure-volume indicator diagram. The sides ab and cd represent isothermal lines, the sides ad and bc adiabatic lines. By assigning infinitely small values to the variations of volume and pressure during the successive operations of the cycle, Clapeyron renders this quadrilateral a parallelogram.

The area enclosed by the curve equates to the work done in a complete cycle, and Maxwell uses the following contrivance to set this area equal to unity.

Applying Carnot’s principle, Maxwell expresses the work W done as a function of the heat H supplied

W = H(T2 – T1)/T2

with T2 and T1 representing the absolute temperatures of the source and sink respectively.
Maxwell then defines

T2 – T1 = 1
H/T2 = 1

The conversion of heat into work is thus expressed as the product of a unit change in temperature T and a unit change in Rankine’s thermodynamic function Φ, equivalent to entropy S:

W = Δ1T · Δ1S = 1

Maxwell’s definitions also give the parallelogram the property that any line drawn from one isothermal line to the other, or from one adiabatic line to the other, is of unit length when reckoned in the respective dimensions of temperature or entropy. This is of central significance to what follows.
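Maxwell's contrivance is easy to check in numbers (T2 = 300 below is an arbitrary illustrative source temperature, not a value from the text):

```python
# Maxwell's contrivance in numbers: with the Carnot result W = H*(T2 - T1)/T2,
# choosing T2 - T1 = 1 and H/T2 = 1 makes the work of the cycle unity.
# T2 = 300 is an arbitrary illustrative source temperature.
T2 = 300.0
T1 = T2 - 1.0            # Maxwell's definition: T2 - T1 = 1
H = T2                   # Maxwell's definition: H/T2 = 1

W = H * (T2 - T1) / T2   # Carnot: work = heat x (temperature drop)/T2
dS = H / T2              # entropy received from the source

assert W == 1.0 and dS == 1.0   # unit ΔT times unit ΔS: the unit parallelogram
print(W)
```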

– – – –

Geometrical extensions

Maxwell’s geometric machinations consist in extending the isothermal (T1T2) and adiabatic lines (Φ1Φ2) of the original figure ABCD and adding vertical lines (pressure) and horizontal lines (volume) to create four further parallelograms with the aim of proving their areas also equal to unity, while at the same time enabling each of these areas to be expressed in terms of pressure and volume as a base-altitude product.


As the image from Theory of Heat shown at the head of this article reveals, Maxwell did not fully trace out the perimeters of three (!) of the four added parallelograms. I have extended four lines to the arbitrarily labelled points E, F and H in order to complete the figure.

– parallelogram AKQD stands on the same base AD as ABCD and lies between the same parallels T1T2 so its area is also unity, expressible in terms of volume and pressure as the base-altitude product AK.Ak

– parallelogram ABEL stands on the same base AB as ABCD and lies between the same parallels Φ1Φ2 so its area is also unity, expressible in terms of volume and pressure as the base-altitude product AL.Al

– parallelogram AMFD stands on the same base AD as ABCD and lies between the same parallels T1T2 so its area is also unity, expressible in terms of pressure and volume as the base-altitude product AM.Am

– parallelogram ABHN stands on the same base AB as ABCD and lies between the same parallels Φ1Φ2 so its area is also unity, expressible in terms of pressure and volume as the base-altitude product AN.An

– line AD, which represents a unit rise in entropy at constant temperature, resolves into the vertical (pressure) and horizontal (volume) components Ak and Am

– line AB, which represents a unit rise in temperature at constant entropy, resolves into the vertical (pressure) and horizontal (volume) components Al and An

– in summary: ABCD = AK.Ak = AL.Al = AM.Am = AN.An = 1 [dimensions ML²T⁻²]

– – – –

Maxwell’s thermodynamic relations

Maxwell’s next step is to interpret the physical meaning of these four pairs of lines.

AK is the volume increase per unit rise in temperature at constant pressure: (∂V/∂T)P
Ak is the pressure decrease per unit rise in entropy at constant temperature: –(∂P/∂S)T

Recalling the property of partial derivatives that given the implicit function f(x,y,z) = 0

(∂x/∂y)z = 1/(∂y/∂x)z
Since AK = 1/Ak

(∂V/∂T)P = –(∂S/∂P)T

AL is the volume increase per unit rise in entropy at constant pressure: (∂V/∂S)P
Al is the pressure increase per unit rise in temperature at constant entropy: (∂P/∂T)S

Since AL = 1/Al

(∂V/∂S)P = (∂T/∂P)S

AM is the pressure increase per unit rise in temperature at constant volume: (∂P/∂T)V
Am is the volume increase per unit rise in entropy at constant temperature: (∂V/∂S)T

Since AM = 1/Am

(∂P/∂T)V = (∂S/∂V)T

AN is the pressure increase per unit rise in entropy at constant volume: (∂P/∂S)V
An is the volume decrease per unit rise in temperature at constant entropy: –(∂V/∂T)S

Since AN = 1/An

(∂P/∂S)V = –(∂T/∂V)S
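The four relations obtained above can be spot-checked symbolically for the simplest working substance. Below is a sketch using sympy for one mole of ideal gas, verifying the two relations whose partial derivatives are directly computable from the standard ideal-gas entropy expressions (R, Cv and Cp are free symbols; the entropy forms are correct up to additive constants):

```python
import sympy as sp

# Symbolic spot-check of two Maxwell relations for one mole of ideal gas.
T, V, P, R, Cv, Cp = sp.symbols('T V P R C_v C_p', positive=True)

# S as a function of (T, V):   (∂S/∂V)T = (∂P/∂T)V
S_TV = Cv * sp.log(T) + R * sp.log(V)   # ideal-gas entropy (+ const)
P_TV = R * T / V                        # equation of state
assert sp.diff(S_TV, V) == sp.diff(P_TV, T)     # both equal R/V

# S as a function of (T, P):   (∂V/∂T)P = -(∂S/∂P)T
S_TP = Cp * sp.log(T) - R * sp.log(P)
V_TP = R * T / P
assert sp.diff(V_TP, T) == -sp.diff(S_TP, P)    # both equal R/P

print("Maxwell relations verified for the ideal gas")
```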

– – – –

In his own words

I leave it to the man himself to conclude this post:

“We have thus obtained four relations among the physical properties of the substance. These four relations are not independent of each other, so as to rank as separate truths. Any one might be deduced from any other. The equality of the products AK, Ak &c., to the parallelogram ABCD and to each other is merely a geometrical truth, and does not depend on thermodynamic principles. What we learn from thermodynamics is that the parallelogram and the four products are each equal to unity, whatever be the nature of the substance or its condition as to pressure and temperature.”


– – – –

P Mander August 2014

Constantin Carathéodory (1873-1950)



This post has been translated into Greek by Giorgos Vachtanidis, and can be seen here.

Readers may also be interested to know that my supplementary blogpost “Carathéodory revisited” contains a proof of Carathéodory’s theorem: “If a differential dQ = ΣXidxi, possesses the property that in an arbitrarily close neighborhood of a point P defined by its coordinates (x1, x2,…, xn) there are points which cannot be connected to P along curves satisfying the equation dQ = 0, then dQ is integrable.” The supplementary post can be seen here

– – – –

Back in the days when I was a college student – the era when we wore our hair long, elbow patches were commonplace and Woodstock was still fresh in our minds – the teaching of thermodynamics took place along two main routes.

The first was the classical route focused on heat and its convertibility into work, led philosophically by Carnot, Mayer and Joule, and developed mathematically by Clausius, Thomson (later Lord Kelvin), Helmholtz and Rankine. The second was the statistical route founded on a molecular model, and associated especially with the names of Boltzmann and Maxwell.

Nobody mentioned the third route. None of us were taught anything about the axiomatic approach to thermodynamics, published in 1909 in Mathematische Annalen under the title “Untersuchungen über die Grundlagen der Thermodynamik” [Examination of the foundations of thermodynamics] by a 36-year-old Greek mathematician called Constantin Carathéodory, who at the time was living in Hannover, Germany.


The title page of Carathéodory’s 1909 paper in Mathematische Annalen

It is clear from the outset of his paper that Carathéodory had studied Gibbs’ magnum opus “On the Equilibrium of Heterogeneous Substances (1875-1878)”. And just like Gibbs, Carathéodory uses the internal energy U and the entropy S (introduced together with the absolute temperature T) as the fundamental building blocks upon which he constructs his version of thermodynamics.

But whereas Gibbs introduces entropy via the classical route taken by Clausius, Carathéodory finds it through a mathematical approach based on the geometric behavior of a certain class of partial differential equations called Pfaffians, named for the German mathematician Johann Friedrich Pfaff (1765-1825) who first studied their properties.

Carathéodory’s investigations start by revisiting the first law and reformulating the second law of thermodynamics in the form of two axioms. The first axiom applies to a multiphase system change under adiabatic conditions:

Ufinal – Uinitial + W = 0

Nothing original here, since this is an axiom of classical thermodynamics due to Clausius (1850). It asserts the existence of a form of energy known as internal energy U – an intrinsic property of a system whose changes under adiabatic conditions are equal and opposite to the external work W performed (for a closed system not in motion).

In Carathéodory’s approach however, heat is regarded as a derived rather than a fundamental quantity that appears when the adiabatic restriction is removed, i.e. ΔU+W ≠ 0.

The second axiom is a different matter altogether, and constitutes the real novelty of Carathéodory’s approach:


This can be rendered in English as “In the neighborhood of any equilibrium state of a system (of any number of thermodynamic coordinates), there exist states that are inaccessible by reversible adiabatic processes.”

For a single substance, this postulate is obvious enough since reversible adiabatic processes are isentropic – a known result of classical thermodynamics. For such processes, all attainable states are represented by points on a curve for which entropy S = constant. There are other points which do not lie on this curve, and which represent states which cannot be reached by reversible adiabatic transition.

But Carathéodory’s arguments go further, making this axiom applicable to a system of multiple bodies and multiple independent variables.

He shows that if in the neighborhood of any given point, corresponding to coordinates x1, x2,…, there are points not expressible by solutions of the Pfaffian equation X1dx1 + X2dx2 +… = 0, then for the expression X1dx1 + X2dx2 +… itself there exists an integrating factor.

The significance of this discovery is that via Carathéodory’s second axiom, the equation of adiabatic condition dQ = 0 admits an integrating factor, which when multiplying dQ renders the product an exact differential of a function whose value is therefore independent of the path between sets of coordinates.

The integrating factor (denominator) in this case is the absolute temperature T, and the path-independent integral ∫dQrev/T is the entropy change ΔS. This conjugate force-displacement pair, whose product is heat, arises directly from the geometric behavior and solutions of Pfaffians.
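The mechanism can be illustrated with sympy for the simplest case (one mole of ideal gas; a sketch, with R and Cv as free symbols):

```python
import sympy as sp

# Sketch of the integrating factor at work: for one mole of ideal gas,
# dQ = Cv dT + (RT/V) dV is not an exact differential, but dQ/T is, and
# its potential is the entropy S = Cv ln T + R ln V (up to a constant).
T, V, R, Cv = sp.symbols('T V R C_v', positive=True)

M, N = Cv, R * T / V                     # dQ = M dT + N dV

assert sp.diff(M, V) != sp.diff(N, T)    # exactness test fails: dQ is inexact

M2, N2 = M / T, N / T                    # multiply by integrating factor 1/T
assert sp.diff(M2, V) == sp.diff(N2, T)  # dQ/T is exact

S = sp.integrate(M2, T) + sp.integrate(N2, V)
print(S)                                 # C_v*log(T) + R*log(V)
```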

Using these partial differential expressions, Carathéodory obtains a formal thermodynamics without recourse to peculiar notions such as the flow of heat, or cumbrous conceptions such as imaginary heat engines and cycles of operation. In short, Carathéodory reduces the argument to a clean-cut consideration of lines and surfaces, together with a pair of axioms regarding the possibility of reaching certain states by adiabatic means.

It sounds very neat and tidy, as well as highly original, so how come it didn’t figure on our college curriculum? The answer to that question can be found in the reception afforded to Carathéodory’s masterwork by the scientific establishment of the day.

– – – –

Carathéodory’s thermodynamic theory got off to a rather inauspicious start in that it was ignored for the first 12 years of its existence. World War I came and went. Then in 1921, the German mathematician and physicist Max Born took note of Carathéodory’s work and published a set of articles on it entitled Kritische Betrachtungen zur traditionellen Darstellung der Thermodynamik [Critical considerations on the traditional representation of thermodynamics] in Physikalische Zeitschrift.

That got the ball rolling, but only as far as Max Planck, who besides being the towering authority in thermodynamics at the time, also turned out to be a severe critic of the new axiomatic method:

“nobody has up to now ever tried to reach, through adiabatic steps only, every neighborhood of any equilibrium state and to check if they are inaccessible,” Planck wrote, adding “this axiom gives us no hint which would allow us to differentiate the inaccessible from the accessible states.”

Others, impressed by the elegance of Carathéodory’s method, tried to render its formal austerity palatable to a wider audience. But these efforts met with no success, and when Lewis and Randall’s hugely influential and curriculum-setting textbook Thermodynamics and the Free Energy of Chemical Substances appeared in 1923, there was not a mention of Carathéodory or his theory.

Although there have been some notable attempts down the decades to champion Carathéodory’s cause, the axiomatic theory of 1909 has failed to achieve inclusion in mainstream academic teaching, and has been consigned to the catalogue of interesting curiosities. Planck’s enduring criticism of the theory’s failure to provide a compelling physical concept of entropy, together with the equally enduring difficulty of the math, seem to have played the deciding role.

Constantin Carathéodory (left) looking dapper in the company of his father, brother-in-law and sister. Carlsbad, Czechoslovakia 1898. Photo credit Wikimedia Commons

Suggested further reading:

1. The Structure of Physical Chemistry, C.N. Hinshelwood, Oxford University Press (1951)
Chapter III, Thermodynamic Principles, contains a concise introduction to Carathéodory’s theory, together with a discussion comparing its strengths and weaknesses with the classical approach. This book has been reissued as part of the Oxford Classic Texts series.

2. Constantin Carathéodory and the axiomatic thermodynamics, L. Pogliani and M. Berberan-Santos, Journal of Mathematical Chemistry Vol. 28, Nos. 1–3, 2000
This paper reviews the development of Carathéodory’s theory and explores some aspects of Pfaffians, the mathematical tools of axiomatic thermodynamics. A brief biography is also included.
PDF downloadable from

and if you’re feeling brave…

3. Examination of the foundations of thermodynamics – English translation of Carathéodory’s 1909 paper
PDF downloadable from

– – – –

P Mander January 2014


Tucked away at the back of Volume One of The Scientific Papers of J. Willard Gibbs, is a brief chapter headed ‘Unpublished Fragments’. It contains a list of nine subject headings for a supplement that Professor Gibbs was planning to write to his famous paper “On the Equilibrium of Heterogeneous Substances”. Sadly, he completed his notes for only two subjects before his death in April 1903, so we will never know what he had in mind to write about the sixth subject in the list: On entropy as mixed-up-ness.

Mixed-up-ness. It’s a catchy phrase, with an easy-to-grasp quality that brings entropy within the compass of minds less given to abstraction. That’s no bad thing, but without Gibbs’ guidance as to exactly what he meant by it, mixed-up-ness has taken on a life of its own and has led to entropy acquiring the derivative associations of chaos and disorder – again, easy-to-grasp ideas since they are fairly familiar occurrences in the lives of just about all of us.

Freed from connexion with more esoteric notions such as spontaneity, entropy has become very easy to recognise in the world around us as a purportedly scientific explanation of all sorts of mixed-up-ness, from unmade beds and untidy piles of paperwork to dysfunctional personal relationships, horse meat in the food chain and the ultimate breakdown of civilization as we know it.

This freely-associated understanding of entropy is now well-entrenched in popular culture and is unlikely to be modified. But in the parallel universe occupied by students of classical thermodynamics, chaotic bed linen and disordered documentation are not seen as entropy-driven manifestations. Sure, how these things come about may defy rational explanation, but they do not happen by themselves. Some external agency, human or otherwise, is always involved.

To physical chemists of the old school like myself, entropy has always been seen as the driver of spontaneously occurring thermodynamic processes, in which the combined entropy of system and surroundings increases to a maximum at equilibrium. This view of entropy partly explains why many of us had difficulty in absorbing the notion of entropy as chaos, since equilibrium always seemed to us a very calm and peaceful thing, quite the opposite of chaos.

Furthermore, we were quite sure that entropy was an extensive property, i.e. one that is dependent on the amount of substance or substances present in a system. But disorder didn’t at all have the feeling of an extensive property. If one (theoretically) divided a thermodynamically disordered system into two equal parts, would each part be half as disordered as the whole? We didn’t think so. To us, there were serious conceptual obstacles to accepting the notion of entropy as disorder.

But while our fundamental understanding of entropy was grounded in the thermal theories of Rankine and Clausius, we did give a statistical nod in the direction of Boltzmann when seeking to explain spontaneous isothermal phenomena. We accepted the notion of aggregation and dispersal as arbiters of entropy change, which we viewed (rightly or wrongly) as separate and distinct from changes in thermal entropy. We even had a name for it – configurational entropy.

Having not one but two different kinds of entropy to play with turned out to be quite useful at times. For example, it helped to explain counter-intuitive spontaneous happenings such as the following:


This is an experiment I remember well from my college days. The diagram shows a sealed Dewar flask containing a supercooled, saturated solution of sodium thiosulphate (aka thiosulfate). A tiny seeding crystal is dropped through a hole in the lid. Crystallization immediately occurs, with an apparent increase in organisation as piles of highly regular crystals form in the solution. It’s an awesome sight to behold.

The experiment provides an unequivocal demonstration that visually-assessed disorganisation and entropy cannot be regarded as synonymous, for while the former unquestionably decreases, the latter must surely increase because the process is spontaneous.

And in overall terms, indeed it does. Although the configurational entropy of the system decreases due to the aggregation of Na⁺ and S₂O₃²⁻ ions into crystals, the other kind of entropy – thermal entropy – more than compensates as the heat of crystallization causes the temperature of the system to rise. For the whole process ΔSsystem > 0, and therefore ΔSuniverse > 0 since the system is isolated from its surroundings.
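The balance can be sketched numerically (every figure below is invented purely to illustrate the bookkeeping; none is thermodynamic data for sodium thiosulphate):

```python
# Toy entropy balance for the crystallization. All numbers are illustrative.
dS_config = -12.0   # J/K: configurational entropy falls as ions aggregate
dS_thermal = +20.0  # J/K: heat of crystallization warms the flask contents

dS_system = dS_config + dS_thermal
dS_surroundings = 0.0      # Dewar flask: system is (nearly) isolated

dS_universe = dS_system + dS_surroundings
assert dS_config < 0 and dS_universe > 0   # spontaneous despite 'ordering'
print(dS_universe)
```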

As I said, having two kinds of entropy to play with can be useful in explaining things that are otherwise counter-intuitive. The above experiment also serves to show that the fashion in popular culture to interpret entropy simply as mixed-up-ness can end up being more than mildly misleading.

William John Macquorn Rankine (1820-1872) Photo credit: Wikipedia


On Thursday 19 January 1854, a 33-year-old Scottish engineer and physicist named William Rankine read a paper before the Royal Society in London, of which he had just recently been elected a fellow. 

Rankine’s paper, entitled “On the geometrical representation of the expansive action of heat, and the theory of thermodynamic engines”, came at crucial time in the development of the new science of heat and work. James Joule and William Thomson had begun publishing the results of their important experiments in Manchester (see my previous post for more on this), and Rudolf Clausius in Germany had recently given a mathematical statement of the first law, equating the change in the internal energy of a thermodynamic system to the heat received and the mechanical work done:

ΔU = Q – W. 

Rankine was in the right place at the right time to build on the theoretical foundations that Thomson and Clausius were already fashioning, but he did no such thing. He was not one for following anyone else’s lead. Like Kipling’s Cat, He Walked by Himself. He had his own way of looking at things; he did things his way.

The engineer from Edinburgh certainly had the necessary attributes for doing so. He had plenty of practical experience as a railway and waterway engineer, he had a thorough grounding in higher mathematics, dynamics and physics, and he was the possessor of a remarkable scientific imagination – a characteristic that was to prove not altogether advantageous, as we shall see.

– – – –

Indications of Rankine’s different way of looking at things were evident right from the start of his paper. Apart from using the pressure volume diagram as his geometrical framework, there is little if anything in common with the ‘Carnot Cycle’ analytical approach previously taken by Clapeyron and Clausius.


For a start, Rankine dispenses completely with isothermal curves in Fig.3, where OX is the line of no pressure and OY is the line of no volume. His ‘cycle’ consists of an upper and lower curve of no transmission of heat [i.e. adiabatic curves] which are infinitely extended, a line of constant volume, and a curvilinear line ACB representing an arbitrary succession of volumes and pressures through which the working substance is supposed to pass in changing from state A to state B.

He draws ordinates AVA and BVB from points A and B down to the line of no pressure, and calls the area VAACBVB the ‘expansive power’ developed during the operation ACB. He shows by an astute piece of reasoning that the indefinitely-prolonged area MACBN is exactly equal to the ‘heat received’ (HA,B) by the working substance during the operation ACB. He then advances the theorem that the difference between the ‘heat received’ and the ‘expansive power’:


HA,B – ∫PdV

depends simply on the initial and final points A and B, and not on the form of the curve ACB. The above expression is regarded by Rankine as the energy stored up (his italics) in the working substance during the operation ACB. He identifies this stored energy as the sum of two quantities – the increase of what he calls the ‘actual energy of heat’ (Q) of the substance in passing from state A to state B:

ΔQ = QB – QA

and the change in what he calls the ‘potential energy of molecular action’ (S) [not to be confused with the modern symbol for entropy] in passing from state A to state B:

ΔS = SB – SA



HA,B – ∫PdV = (QB – QA) + (SB – SA)

This is a form of what he calls the general equation of the expansive action of heat, which was the object of his geometrical reasoning. Rankine’s approach is thus wholly different to that of Clausius, who gained his insights into path-independent thermodynamic functions by considering the operations of a complete cycle. Rankine instead arrives at a path-independent result by considering an arbitrary curve on a diagram of energy.
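Rankine's path-independence theorem can be seen in miniature with a modern example (a sketch, not taken from the paper: one mole of monatomic ideal gas carried from state A to state B along two different paths):

```python
import math

# For an ideal gas taken from A to B along two paths, the heat received and
# the expansive work each depend on the path, but their difference does not.
R = 8.314
Cv = 1.5 * R                   # monatomic ideal gas
T_A, V_A = 300.0, 0.010        # state A (K, m^3)
T_B, V_B = 400.0, 0.020        # state B

dU = Cv * (T_B - T_A)          # fixed by the end states alone

# Path 1: heat at constant volume (Q = Cv*dT, W = 0),
# then expand isothermally at T_B (Q = W = R*T_B*ln(V_B/V_A))
W1 = R * T_B * math.log(V_B / V_A)
Q1 = Cv * (T_B - T_A) + W1

# Path 2: expand isothermally at T_A first, then heat at constant volume
W2 = R * T_A * math.log(V_B / V_A)
Q2 = Cv * (T_B - T_A) + W2

assert abs((Q1 - W1) - dU) < 1e-9 and abs((Q2 - W2) - dU) < 1e-9
assert W1 != W2                # work (and heat) are path-dependent
print(dU)
```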

– – – –

Rankine’s ‘potential energy of molecular action’ needed to be expressed in terms of measurable quantities in order to give a definite form to equation 2, and in the next section of the paper he does this, at the same time introducing the symbol Ψ to denote the sum of the actual energy of heat and the potential energy of molecular action present in the working substance in any state:

Ψ = Q + S
Rankine assigns no name to Ψ, and although it is clear from the theorem already advanced that a change in this quantity between initial and final states is independent of path, he draws no attention to the fact. Perhaps Rankine simply didn’t recognise at the time that he had defined the thermodynamic state function known as internal energy, albeit on his own terms.

– – – –

As he continued with the reading of his paper before the Royal Society on that January day in 1854, neither Rankine nor his audience were aware that a defining moment in the history of thermodynamics was shortly to take place.

It began at the third corollary to the general equation of the expansive action of heat, where Rankine observes that for two adiabatic curves infinitely close together, “the ratio of the heat consumed in passing from one of those curves to the other, to the actual heat present, will be the same”. He expresses this ratio as:


and proceeds to show that F, which he simply labels “a thermodynamic function”, has a constant value for any adiabatic curve. This was the second thermodynamic function that Rankine had introduced in as many pages, although again he seems not to have attached particular significance to this.

In the fourth section of his paper, having introduced the relation


between the ‘actual energy of heat’ of a substance, its specific heat, and the absolute temperature, Rankine converts F into a more convenient thermodynamic function Φ by defining:


The fortunate effect of this conversion is to remove the functional relationship with the dubiously-defined Q and give Φ the dimensions of energy (as heat received) per degree of absolute temperature. To students of classical thermodynamics, those units will no doubt sound familiar.

In a theorem under Proposition XII, Rankine then introduces both Ψ and Φ into his general equation of the expansive action of heat, with the following result:


I have no way of knowing what reaction this equation elicited from those assembled at the Royal Society when Rankine wrote it on the blackboard, or recited it, or whatever. What I do know is that when I first saw it in a volume of Rankine’s miscellaneous papers held at the University of Wisconsin and publicly available online, I nearly fell off my chair.

Exchanging Rankine’s symbols for their modern equivalents, the equation reads:

\Delta U=\int TdS-\int PdV

This is the fundamental relation of thermodynamics.

Eleven years before Clausius, Rankine gave mathematical expression to the thermodynamic state function we now call entropy, from a consideration of infinitely close adiabatic curves (incidentally it was Rankine who introduced the term ‘adiabatic’). And by putting both internal energy (Ψ) and entropy (Φ) together in equation 40A, Rankine made the first mathematical statement that combined both the first and second laws of thermodynamics.
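In modern notation, the content of equation 40A is more often written in differential form, with each term carrying the meaning Rankine’s derivation gives it:

dU=TdS-PdV

where TdS is the heat reversibly received by the working substance and PdV the work it does in expanding. This single line combines the first law (conservation of energy) with the second (absolute temperature as the factor that converts an entropy change into heat).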

– – – –

Although Φ appeared in print more than a decade before Clausius arrived at another symbolic expression for the same thing and called it entropy, Rankine’s thermodynamic function gained little from its antecedence. The reason derives principally from the fact that Rankine was a model-based thinker with a fanciful imagination. He formed intricate mental pictures of molecular motion to explain the phenomenon of heat, and when the math he applied appeared to produce sensible results, he assumed that his model – which he called the hypothesis of molecular vortices – was correct.

Up to this point in the article I have deliberately refrained from mentioning this aspect, so as not to obscure the (surprisingly) hypothesis-independent sequence of deduction by which Rankine reaches equation 40A. But there is no escaping the fact that much of the paper read by Rankine before the Royal Society in January 1854 is laden with hypothetical apparatus, accompanied by a lexicon of abstruse terminology such as ‘actual energy of heat’, to which it is difficult to attach any distinct meaning.

It proved altogether too much for Rankine’s contemporaries to swallow. When Clausius presented his conception of entropy in 1865, they found it much more palatable, and Rankine’s Φ was forgotten.
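For readers who know only the later formulation: Clausius defined the entropy change between two states, evaluated along a reversible path, as

\Delta S=\int \frac{\delta Q_{rev}}{T}

which, apart from notation, is essentially the quantity Rankine’s Φ measures: heat received per degree of absolute temperature, summed along the path.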

– – – –

Rankine may have failed to convince colleagues of his molecular vortices, but in no way did this dampen his academic enthusiasm. In 1855 he wrote the very first formal treatise on thermodynamics for Nichol’s Cyclopedia, and in 1857 he penned an important treatise on shipbuilding as well as his famous Manual of Applied Mechanics. He followed this up with his Manual of Civil Engineering in 1861, and by 1865 had become a consulting engineer. In 1869, another great engineering treatise, Machinery and Millwork, appeared. During his time as a fellow of the Royal Society, Rankine published no fewer than 150 papers on mathematical, thermodynamic and engineering subjects, yet still found time to study botany, learn to play the cello and piano, and develop other creative aspects of his intellect, including the writing of humorous songs. One of his early efforts was “The mathematician in love”, in the following stanzas of which Rankine wittily propounds his theory of love and marriage:

William Rankine never found the time to test this theory in practice. He died a bachelor on Christmas Eve 1872, at the age of 52, of overwork.

– – – –