Archive for the ‘history of science’ Category

[Photograph of Svante Arrhenius]

k = Ae^(–E/RT)

The Arrhenius equation explains why chemical reactions generally go much faster when you heat them up. The equation was actually first given by the Dutch physical chemist JH van ‘t Hoff in 1884, but it was the Swedish physical chemist Svante Arrhenius (pictured above) who in 1889 interpreted the equation in terms of activation energy, thereby opening up an important new dimension to the study of reaction rates.

– – – –

Temperature and reaction rate

The systematic study of chemical kinetics can be said to have begun in 1850 with Ludwig Wilhelmy’s pioneering work on the kinetics of sucrose inversion. Right from the start, it was realized that reaction rates showed an appreciable dependence on temperature, but it took four decades before real progress was made towards quantitative understanding of the phenomenon.

In 1889, Arrhenius penned a classic paper in which he considered eight sets of published data on the effect of temperature on reaction rates. In each case he showed that the rate constant could be represented as an explicit function of the absolute temperature:

k = Ae^(–C/T)

where both A and C are constants for the particular reaction taking place at temperature T. In his paper, Arrhenius listed the eight sets of published data together with the equations put forward by their respective authors to express the temperature dependence of the rate constant. In one case, the equation – stated in logarithmic form – was identical to that proposed by Arrhenius

ln k = –a/T + b

where T is the absolute temperature and a and b are constants. This equation was published five years before Arrhenius’ paper in a book entitled Études de Dynamique Chimique. The author was J. H. van ‘t Hoff.

– – – –

Dynamic equilibrium

In the Études of 1884, van ‘t Hoff compiled a contemporary encyclopædia of chemical kinetics. It is an extraordinary work, containing all that was previously known as well as a great deal that was entirely new. At the start of the section on chemical equilibrium he states (without proof) the thermodynamic equation, sometimes called the van ‘t Hoff isochore, which quantifies the displacement of equilibrium with temperature. In modern notation it reads:

d ln Kc/dT = ΔH/RT²

where Kc is the equilibrium constant expressed in terms of concentrations, ΔH is the heat of reaction and T is the absolute temperature. In a footnote to this famous and thermodynamically exact equation, van ‘t Hoff builds a bridge from thermodynamics to kinetics by advancing the idea that a chemical reaction can take place in both directions, and that the thermodynamic equilibrium constant Kc is in fact the quotient of the kinetic velocity constants for the forward (k1) and reverse (k-1) reactions

Kc = k1/k-1

Substituting this quotient in the original equation leads immediately to

d ln k1/dT – d ln k-1/dT = ΔH/RT²

van ‘t Hoff then argues that the rate constants will be influenced by two different energy terms E1 and E-1, and splits the above into two equations

d ln k1/dT = E1/RT²
d ln k-1/dT = E-1/RT²

where the two energies are such that E1 – E-1 = ΔH

In the Études, van ‘t Hoff recognized that ΔH might or might not be temperature independent, and considered both possibilities. In the former case, he could integrate the equation to give the solution

ln k = –E/RT + constant, or equivalently k = Ae^(–E/RT)

From a starting point in thermodynamics, van ‘t Hoff engineered this kinetic equation through a characteristically self-assured thought process. And it was this equation that the equally self-assured Svante Arrhenius seized upon for his own purposes, expanding its application to explain the results of other researchers, and enriching it with his own idea for how the equation should be interpreted.
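The integrated equation lends itself to exactly the kind of use Arrhenius made of it: given rate constants at two temperatures, the activation energy follows directly. A minimal Python sketch with illustrative values (the rate constants and temperatures here are assumptions for demonstration, not data from the 1889 paper):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def activation_energy(k1, T1, k2, T2):
    """Estimate E from rate constants at two temperatures using the
    integrated form ln(k2/k1) = (E/R)(1/T1 - 1/T2)."""
    return R * math.log(k2 / k1) / (1 / T1 - 1 / T2)

# Illustrative values: a rate that doubles between 298 K and 308 K
E = activation_energy(1.0, 298.0, 2.0, 308.0)
print(f"E = {E / 1000:.1f} kJ/mol")  # about 53 kJ/mol
```

This recovers the familiar rule of thumb that a reaction whose rate doubles for a 10-degree rise near room temperature has an activation energy of roughly 53 kJ/mol.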

– – – –

Activation energy

It is a well-known result of the kinetic theory of gases that the average kinetic energy per mole of gas (EK) is given by

EK = (3/2)RT

Since the only variable on the RHS is the absolute temperature T, we can conclude that doubling the temperature will double the average kinetic energy of the molecules. This set Arrhenius thinking, because the eight sets of published data in his 1889 paper showed that the effect of temperature on the rates of chemical processes was generally much too large to be explained on the basis of how temperature affects the average kinetic energy of the molecules.

The clue to solving this mystery was provided by James Clerk Maxwell, who in 1860 had worked out the distribution of molecular velocities from the laws of probability. Maxwell’s distribution law enables the fraction of molecules possessing a kinetic energy exceeding some arbitrary value E to be calculated.

It is convenient to consider the distribution of molecular velocities in two dimensions instead of three, since the distribution law so obtained gives very similar results and is much simpler to apply. At absolute temperature T, the proportion of molecules for which the kinetic energy exceeds E is given by

n/N = e^(–E/RT)

where n is the number of molecules with kinetic energy greater than E, and N is the total number of molecules. This is exactly the exponential expression which occurs in the velocity constant equation derived by van ‘t Hoff from thermodynamic principles, which Arrhenius showed could be fitted to temperature dependence data from several published sources.

Compared with the average kinetic energy calculation, this exponential expression yields very different results. At 1000K, the fraction of molecules having an energy greater than, say, 80 kJ mol^-1 is 0.0000662, while at 2000K the fraction is 0.00814. So the temperature change which doubles the average kinetic energy of the molecules will increase the number of molecules with E > 80 kJ mol^-1 by a factor of more than a hundred.
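These fractions are easy to verify. A quick check of the figures quoted above, using only the standard library:

```python
import math

R = 8.314   # gas constant, J mol^-1 K^-1
E = 80_000  # the 80 kJ/mol threshold from the text, in J/mol

# Fraction of molecules with kinetic energy exceeding E (2-D Maxwell result)
frac_1000 = math.exp(-E / (R * 1000))
frac_2000 = math.exp(-E / (R * 2000))

print(f"{frac_1000:.3g}")              # ~6.62e-05
print(f"{frac_2000:.3g}")              # ~0.00814
print(f"{frac_2000 / frac_1000:.0f}")  # a factor of ~123
```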

Here was the clue Arrhenius was seeking to explain why increased temperature had such a marked effect on reaction rate. He reasoned it was because molecules needed sufficiently more energy than the average – the activation energy E – to undergo reaction, and that the fraction of these molecules in the reaction mix was an exponential function of temperature.

– – – –

The meaning of A

But back to the Arrhenius equation

k = Ae^(–E/RT)

I have always thought that calling the constant A the ‘pre-exponential factor’ is a singularly pointless label. One could equally write the equation as

k = e^(–E/RT)A

and call it the ‘post-exponential factor’. The position of A in relation to the exponential factor has no relevance.

A clue to the proper meaning of A is to note that e^(–E/RT) is dimensionless. The units of A are therefore the same as the units of k. But what are the units of k?

The answer depends on whether one’s interest area is kinetics or thermodynamics. In kinetics, the concentrations of chemical species are generally expressed as molar concentrations, giving rise to a range of possibilities for the units of the velocity constant k, depending on the order of the reaction.

In thermodynamics however, the dimensions of k are uniform. This is because the chemical potential of reactants and products in any arbitrarily chosen state is expressed in terms of activity a, which is defined as a ratio in relation to a standard state and is therefore dimensionless.

When the arbitrarily chosen conditions represent those for equilibrium, the equilibrium constant K is expressed in terms of reactant (aA + bB + …) and product (mM + nN + …) activities

K = [(aM)^m (aN)^n … / (aA)^a (aB)^b …]e

where the subscript e indicates that the activities are those for the system at equilibrium.

As students we often substitute molar concentrations for activities, since in many situations the activity of a chemical species is approximately proportional to its concentration. But if an equation is arrived at from consideration of the thermodynamic equilibrium constant K – as the Arrhenius equation was – it is important to remember that the associated concentration terms are strictly dimensionless. The reaction rate, and therefore the velocity constant k, and therefore A, must accordingly have the units of frequency (t^-1).

OK, so back again to the Arrhenius equation

k = Ae^(–E/RT)

We have determined the dimensions of A; now let us turn our attention to the role of the dimensionless exponential factor. The values this term may take range between 0 and 1, and specifically when E = 0, e^(–E/RT) = 1. This allows us to assign a physical meaning to A since when E = 0, A = k. We can think of A as the velocity constant when the activation energy is zero – in other words when each collision between reactant molecules results in a reaction taking place.

Since there are zillions of molecular collisions taking place every second just at room temperature, any reaction in these circumstances would be uber-explosive. So the exponential term can be seen as a modifier of A whose value reflects the range of reaction velocity from extremely slow at one end of the scale (high E/low T) to extremely fast at the other (low E/high T).
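The role of the exponential term as a modifier of A can be sketched numerically. The value of A below is an arbitrary illustrative assumption; only the enormous span of the results matters:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1
A = 1e13   # s^-1, an arbitrary illustrative value for the constant A

def k(E, T):
    """Arrhenius velocity constant: A scaled down by the exponential factor."""
    return A * math.exp(-E / (R * T))

print(k(150_000, 300))   # high E, low T: immeasurably slow
print(k(20_000, 1000))   # low E, high T: within a couple of orders of A
print(k(0, 300))         # E = 0: every collision reacts, so k = A
```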

– – – –

© P Mander September 2016


CarnotCycle is a thermodynamics blog but occasionally its enthusiasm spills over into other subjects, as is the case here.
– – – –

When one considers the great achievements in radioactivity research made at the start of the 20th century by Ernest Rutherford and his team at the Victoria University, Manchester, it seems surprising how little progress they made in finding an answer to the question posed above.

They knew that radioactivity was unaffected by any agency applied to it (even temperatures as low as 20K), and since the radioactive decay law discovered in 1902 by Rutherford and Soddy was an exponential function associated with probabilistic behavior, it was reasonable to think that radioactivity might be a random process. Egon von Schweidler’s work pointed firmly in this direction, and the Geiger-Nuttall relation, formulated by Hans Geiger and John Nuttall at the Manchester laboratory in 1911 and reformulated in 1912 by Richard Swinne in Germany, laid a mathematical foundation on which to construct ideas. Yet despite these pointers, Rutherford wrote in 1912 that “it is difficult to offer any explanation of the causes operating which lead to the ultimate disintegration of the atom”.

The phrase “causes operating which lead to” indicates that Rutherford saw the solution in terms of cause and effect. Understandably so, since he came from an age where probability was regarded as a measure of uncertainty about exact cause, rather than something reflecting a naturally indeterministic process. C.P. Snow once said of Rutherford, “He thought of atoms as though they were tennis balls”. And therein lay the essence of his problem: he didn’t have the right kind of mind to answer this kind of question.

But someone else did, namely the pioneer who introduced the term radioactivity and gave it a quantifiable meaning – Maria Sklodowska, better known under her married name Marie Curie.

– – – –

Mme. Curie’s idea

The 2nd Solvay Conference (1913): La structure de la matière (The structure of matter)

When all the great men of science (and one woman) convened for the second Solvay Conference in 1913, the hot topic of the day was the structure of the atom. Hans Geiger and Ernest Marsden at Rutherford’s Manchester lab had recently conducted their famous alpha particle scattering experiment, enabling Rutherford to construct a model of the atom with a central nucleus where its positive charge and most of its mass were concentrated. Rutherford and his student Thomas Royds had earlier conducted their celebrated experiment which identified the alpha particle as a helium nucleus, so the attention now focused on trying to explain the process of alpha decay.

It was Marie Curie who produced the most fruitful idea, foreshadowing the quantum mechanical interpretation developed in the 1920s. Curie suggested that alpha decay could be likened to a particle bouncing around inside a box with a small hole through which the particle could escape. This would constitute a random event; with a large number of boxes these events would follow the laws of probability, even though the model was conceptually based on simple kinetics.

Now it just so happened that a probability distribution based on exactly this kind of random event had already been described in an academic paper, published in 1837 and rather curiously entitled Recherches sur la probabilité des jugements en matière criminelle et matière civile (Research on the probability of judgments in criminal and civil cases). The author was the French mathematician Siméon Denis Poisson (1781-1840).

– – – –

The Poisson distribution

At the age of 57, just three years before his death, Poisson turned his attention to the subject of court judgements, and in particular to miscarriages of justice. In probabilistic terms, Poisson was considering a large number of trials (excuse the pun) involving just two outcomes – a correct or an incorrect judgement. And with many years of court history on the public record, Poisson had the means to compute a time-averaged figure for the thankfully rare judicial failures.

In his 1837 paper Poisson constructed a model which regarded an incorrect judgement as a random event which did not influence any other subsequent judgement – in other words it was an independent random event. He was thus dealing with a random variable in the context of a binomial experiment with a large number of trials (n) and a small probability (p), whose product (pn) he asserted was finite and equal to µ, the mean number of events occurring in a given number of dimensional units (in this case, time).

In summary, Poisson started with the binomial probability distribution

(p + q)^n where q = 1 – p

where p is the probability of success and q is the probability of failure, in which successive terms of the binomial expansion give the probability of the event occurring exactly r times in n trials

P(r) = [n!/r!(n–r)!] p^r q^(n–r)

Asserting µ = pn, he evaluated P(r) as n goes to infinity and found that

P(r) = µ^r e^(–µ)/r!

This is the general representation of each term in the Poisson probability distribution

e^(–µ), µe^(–µ), µ²e^(–µ)/2!, µ³e^(–µ)/3!, …

which can be seen from setting r = 0, 1, 2, 3 … in turn; the terms sum to unity since Σ µ^r/r! = e^µ.

As indicated above, the mean µ is the product of the mean per unit dimension and the number of dimensional units. In the case of radioactivity, µ = λt where λ is the decay constant and t is the number of time units

If we set t equal to the half-life t½, the mean µ will be λt½ = ln 2. Mapping probabilities for the first few terms of the distribution yields

P(0) = 0.500, P(1) = 0.347, P(2) = 0.120, P(3) = 0.028, …

Unlike the binomial distribution, the Poisson distribution is not symmetric; the maximum does not correspond to the mean. In the case of µ = ln 2 the probability of no decays (r = 0) is exactly a half, as can be seen from

P(0) = µ^0 e^(–µ)/0! = e^(–ln 2) = ½
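These probabilities are easily checked numerically. A minimal Python sketch of the Poisson terms for µ = ln 2, using only the standard library:

```python
import math

def poisson(r, mu):
    """Poisson probability P(r) = mu^r * e^(-mu) / r!"""
    return mu**r * math.exp(-mu) / math.factorial(r)

mu = math.log(2)  # mean number of decays per atom over one half-life
for r in range(4):
    print(r, round(poisson(r, mu), 4))
# P(0) = e^(-ln 2) = 0.5 exactly; the terms fall away rapidly thereafter
```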

At this point we turn to another concept introduced by Poisson in his paper which was taken further by the Russian mathematician P.L. Chebyshev – namely the law of large numbers. In essence, this law says that if the probability of an event is p, the average number of occurrences of the event approaches p as the number of independent trials increases.

In the case of radioactive decay, the number of independent trials (atoms) is extremely large: a µg sample of Cesium 137 for example will contain around 4 × 10^15 nuclei. In the case of µ = ln2 the law of large numbers means that the average number of atoms remaining intact after the half-life period will be half the number of atoms originally present in the sample.
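The size of this number is quickly confirmed from the approximate molar mass of Cs-137:

```python
AVOGADRO = 6.022e23  # mol^-1
MOLAR_MASS = 137.0   # g/mol, approximate molar mass of Cs-137

mass_g = 1e-6        # a one-microgram sample
n_nuclei = mass_g / MOLAR_MASS * AVOGADRO
print(f"{n_nuclei:.1e} nuclei")  # a few times 10^15
```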

The Poisson distribution correctly accounts for half-life behavior, and has been successfully applied to counting rate experiments and particle scattering. There is thus a body of evidence to support the notion that radioactive decay is a random event to which the law of large numbers applies, and is therefore not a phenomenon that requires explanation in terms of cause and effect.

– – – –

Geiger and Nuttall

Despite Ernest Rutherford’s protestations that atomic disintegration defied explanation, it was in fact Rutherford who took the first step along the path that would eventually lead to a quantum mechanical explanation of α-decay. In 1911 and again in 1912, Rutherford communicated papers by two of his Manchester co-workers, physics lecturer Hans Geiger (of Geiger counter fame) and John Nuttall, a graduate student.

Rutherford’s team at the Physical Laboratories was well advanced with identifying radioactive decay products, several of which were α-emitters. It had been noticed that α-emitters with more rapid decay rates had greater α-particle ranges. Geiger and Nuttall investigated this phenomenon, and when they plotted the logarithms of the decay constants (they called them transformation constants) against the logarithms of the corresponding α-particle ranges for decay products in the uranium and actinium series they got this result (taken from their 1911 paper):

[Graph: log λ plotted against log R, giving a separate straight line for each of the two series]

This implies the existence of a relationship log λ = A + B log R, where A has a characteristic value for each series and B has the same value for both series. Curiously, Geiger and Nuttall did not express the straight lines in mathematical terms in either of their papers; they were more interested in using the lines to calculate the immeasurably short periods of long-range α-emitters. But they made reference in their 1912 paper to somebody who had “recently shown that the relation between range and transformation constant can be expressed in another form”.

That somebody was the German physicist Richard Swinne (1885-1939) who sent a paper entitled Über einige zwischen den radioaktiven Elementen bestehende Beziehungen (On some relationships between the radioactive elements) to Physikalische Zeitschrift, which the journal received on Tuesday 5th December 1911 and published in volume XIII, 1912.

The other form that Swinne had found, which he claimed to represent the experimental data at least as well as the (unstated) formula of Geiger and Nuttall, was log λ = a + bv^n, where a and b are constants and v is the particle velocity.

When it came to n, Swinne was rangefinding: he tried various values of n and found that “n kann am besten gleich 1 gesetzt werden” (n is best set equal to 1); he was thus edging towards what we now call the Geiger-Nuttall law, namely that the logarithm of the α-emitter’s half-life is inversely proportional to the square root of the α-particle’s kinetic energy.

– – – –

Gurney and Condon, and Gamow

The potential well diagram in Gurney and Condon’s article

In 1924, the British mathematician Harold Jeffreys developed a general method of approximating solutions to linear, second-order differential equations. This method, rediscovered as the WKB approximation in 1926, was applied to the Schrödinger equation first published in that year and resulted in the discovery of the phenomenon known as quantum tunneling.

It was this strange effect, by which a particle with insufficient energy to surmount a potential barrier can effectively tunnel through it (the dotted line DB) that was seized upon in 1928 by Ronald Gurney and Edward Condon at Princeton – and independently by George Gamow at Göttingen – as a way of explaining alpha decay. Gurney and Condon’s explanation of alpha emission was published in Nature in an article entitled Wave Mechanics and Radioactive Disintegration, while Gamow’s considerably more academic (and mathematical) paper Zur Quantentheorie des Atomkernes was published in Zeitschrift für Physik.

In the quantum mechanical treatment, the overall rate of emission (i.e. the decay constant λ) is the product of a frequency factor f – the rate at which an alpha particle appears at the inside wall of the nucleus – multiplied by a transmission coefficient T, which is the (independent) probability that the alpha particle tunnels through the barrier. Thus

λ = fT

At this point it is instructive to recall Marie Curie’s particle-in-a-box idea, a concept which involves the product of two quantities: a large number of escape attempts and a small probability of escape.

The frequency factor f – or escape attempt rate – is estimated as the particle velocity v divided by the distance across the nucleus (2R), where R is the radius

f = v/2R where v = √(2(Qα + V0)/µ)

Here, V0 is the potential well depth, Qα is the alpha particle kinetic energy and µ is the reduced mass. The escape attempt rate is quite large, usually of the order of 10^21 per second. By contrast the probability of alpha particle escape is extremely small. In calculating a value for T, Gamow introduced the Gamow factor 2G, where

T = e^(–2G) with 2G = (2/ħ) ∫ √(2µ(V(r) – Qα)) dr

the integral being taken across the region of the barrier in which V(r) exceeds Qα. Typically the Gamow factor is very large (2G = 60–120), which makes T very small (T = 10^-55 to 10^-27).

Combining the equations

λ = (v/2R) e^(–2G)

or

ln λ = ln(v/2R) – 2G

Since 2G for a Coulomb barrier is approximately proportional to 1/√Qα, this is the Geiger-Nuttall law.
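The extreme sensitivity of the decay constant to the Gamow factor is easy to illustrate. Taking the escape attempt rate f at the order-of-magnitude value quoted above (an illustrative assumption), the half-lives spanned by the quoted range of 2G can be computed:

```python
import math

f = 1e21  # escape attempt rate, s^-1 (order of magnitude from the text)

def half_life(two_G):
    """t_half = ln 2 / lambda, with lambda = f * exp(-2G)."""
    lam = f * math.exp(-two_G)
    return math.log(2) / lam

print(half_life(60))    # roughly a day, in seconds
print(half_life(120))   # ~1e31 s, dwarfing the age of the universe
```

A change of a factor of two in the Gamow factor shifts the half-life by some 26 orders of magnitude, which is why α-emitter half-lives range from microseconds to billions of years.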

The work of Gurney, Condon and Gamow provided a convincing theoretical explanation of the Geiger-Nuttall law on the basis of quantum mechanics and Marie Curie’s hunch, and put an end to the classical notions of Rutherford’s generation that radioactive decay required explanation in terms of cause and effect.

So to return to the question posed at the head of this post – What determines the moment at which a radioactive atom decays? – the answer is chance. And the law of large numbers.

– – – –

An important consequence

George Gamow and John Cockcroft

The successful application of quantum tunneling to alpha particle emission had an important consequence, since it suggested to Gamow that the same idea could be applied in reverse i.e. that projectile particles with lower energy might be able to penetrate the nucleus through quantum tunneling. This led Gamow to suggest to John Cockcroft, who was conducting atom-smashing experiments with protons, that protons with more moderate speeds could be used. Gamow’s suggestion proved correct, and the success of these trials ushered in a new era of intensive development in nuclear physics.

– – – –

Links to original papers mentioned in this post

G. Gamow (1928) Zur Quantentheorie des Atomkernes, Zeitschrift für Physik; 51: 204-212
https://link.springer.com/article/10.1007/BF01343196

H. Geiger and J.M. Nuttall (1911) The ranges of the α particles from various radioactive substances and a relation between range and period of transformation, Phil Mag; 22: 613-621
https://archive.org/stream/londonedinburg6221911lond#page/612/mode/2up

H. Geiger and J.M. Nuttall (1912) The ranges of α particles from uranium, Phil Mag; 23: 439-445
https://archive.org/stream/londonedinburg6231912lond#page/438/mode/2up

R.W. Gurney and E.U. Condon (1928) Wave Mechanics and Radioactive Disintegration, Nature; 122 (Sept. 22): 439
http://www.nature.com/physics/looking-back/gurney/index.html

R. Swinne (1912) Über einige zwischen den radioaktiven Elementen bestehende Beziehungen, Physikalische Zeitschrift; XIII: 14-21
https://babel.hathitrust.org/cgi/pt?id=mdp.39015023176806;view=1up;seq=52;size=125

– – – –

© P Mander August 2017

From the perspective of classical thermodynamics, osmosis has a rather unclassical history. Part of the reason for this, I suspect, is that osmosis was originally categorised under the heading of biology. I can remember witnessing the first practical demonstration of osmosis in a biology class, the phenomenon being explained in terms of pores (think invisible holes) in the membrane that were big enough to let water molecules through, but not big enough to let sucrose molecules through. It was just like a kitchen sieve, we were told. It lets the fine flour pass through but not clumps. This was very much the method of biology in my day, explaining things in terms of imagined mechanism and analogy.

And it wasn’t just in my day. In 1883, JH van ‘t Hoff, an able theoretician and one of the founders of the new discipline of physical chemistry, became suddenly convinced that solutions and gases obeyed the same fundamental law, pv = RT. Imagined mechanism swiftly followed. In van ‘t Hoff’s interpretation, osmotic pressure depended on the impact of solute molecules against the semipermeable membrane because solvent molecules, being present on both sides of the membrane through which they could freely pass, did not enter into consideration.

It all seemed very plausible, especially when van ‘t Hoff used the osmotic pressure measurements of the German botanist Wilhelm Pfeffer to compute the value of R in what became known as the van ‘t Hoff equation

ΠV = RT

where Π is the osmotic pressure, and found that the calculated value for R was almost identical with the familiar gas constant. There really did seem to be a parallelism between the properties of solutions and gases.
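The pressures predicted by the van ‘t Hoff equation, written per unit volume as Π = cRT, are of convincing magnitude even for dilute solutions. A quick sketch with illustrative numbers (the concentration chosen here is an assumption for demonstration):

```python
R = 8.314  # gas constant, J mol^-1 K^-1
T = 298.0  # K
c = 100.0  # mol m^-3, i.e. a 0.1 mol/L solution (illustrative)

Pi = c * R * T  # osmotic pressure in pascals, from Pi = cRT
print(f"{Pi:.0f} Pa, about {Pi / 101325:.2f} atm")
```

A solution as dilute as 0.1 mol/L thus exerts an osmotic pressure of well over two atmospheres, comparable to Pfeffer’s measurements.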


JH van ‘t Hoff (1852-1911)

The first sign that there was anything amiss with the so-called gaseous theory of solutions came in 1891 when van ‘t Hoff’s close colleague Wilhelm Ostwald produced unassailable proof that osmotic pressure is independent of the nature of the membrane. This meant that hypothetical arguments as to the cause of osmotic pressure, such as van ‘t Hoff had used as the basis of his theory, were inadmissible.

A year later, in 1892, van ‘t Hoff changed his stance by declaring that the mechanism of osmosis was unimportant. But this did not affect the validity of his osmotic pressure equation ΠV = RT. After all, it had been shown to be in close agreement with experimental data for very dilute solutions.

It would be decades – the 1930s in fact – before the van ‘t Hoff equation’s formal identity with the ideal gas equation was shown to be coincidental, and that the proper thermodynamic explanation of osmotic pressure lay elsewhere.

But long before the 1930s, even before Wilhelm Pfeffer began his osmotic pressure experiments upon which van ‘t Hoff subsequently based his ideas, someone had already published a thermodynamically exact rationale for osmosis that did not rely on any hypothesis as to cause.

That someone was the American physicist Josiah Willard Gibbs. The year was 1875.


J. Willard Gibbs (1839-1903)

– – – –

Osmosis without mechanism

It is a remarkable feature of Gibbs’ On the Equilibrium of Heterogeneous Substances that having introduced the concept of chemical potential, he first considers osmotic forces before moving on to the fundamental equations for which the work is chiefly known. The reason is Gibbs’ insistence on logical order of presentation. The discussion of chemical potential immediately involves equations of condition, among whose different causes are what Gibbs calls a diaphragm, i.e. a semipermeable membrane. Hence the early appearance of the following section

[Excerpt from On the Equilibrium of Heterogeneous Substances: Gibbs’ treatment of equilibrium across a diaphragm, containing equation 77]

In equation 77, Gibbs presents a new way of understanding osmotic pressure. He makes no hypotheses about how a semipermeable membrane might work, but simply states the equations of condition which follow from the presence of such a membrane in the kind of system he describes.

This frees osmosis from considerations of mechanism, and explains it solely in terms of differences in chemical potential in components which can pass the diaphragm while other components cannot.

In order to achieve equilibrium between say a solution and its solvent, where only the solvent can pass the diaphragm, the chemical potential of the solvent in the fluid on both sides of the membrane must be the same. This necessitates applying additional pressure to the solution to increase the chemical potential of the solvent in the solution so it equals that of the pure solvent, temperature remaining constant. At equilibrium, the resulting difference in pressure across the membrane is the osmotic pressure.

Note that increasing the pressure always increases the chemical potential since

(∂µ1/∂P)T = V1

is always positive (V1 is the partial molar volume of the solvent in the solution).

– – – –

Europe fails to notice (almost)

Gibbs published On the Equilibrium of Heterogeneous Substances in Transactions of the Connecticut Academy. Publishing in such an obscure journal (seen from a European perspective) was clearly not going to attract much attention across the pond, but Gibbs had a secret weapon: a mailing list of the world’s greatest scientists to which he sent reprints of his papers.

One of the names on that list was James Clerk Maxwell, who instantly appreciated Gibbs’ work and began to promote it in Europe. On Wednesday 24 May 1876, the year that ‘Equilibrium’ was first published, Maxwell gave an address at the South Kensington Conferences in London on the subject of Gibbs’ development of the doctrine of available energy on the basis of his new concept of the chemical potentials of the constituent substances. But the audience did not share Maxwell’s enthusiasm, or in all likelihood share his grasp of Gibbs’ ideas. When Maxwell tragically died three years later, Gibbs’ powerful ideas lost their only real champion in Europe.

It was not until 1891 that interest in Gibbs’ masterwork would resurface through the agency of Wilhelm Ostwald, who together with van ‘t Hoff and Arrhenius was one of the founders of the modern school of physical chemistry.


Wilhelm Ostwald (1853-1932). He not only translated Gibbs’ masterwork into German, but also produced a profound proof – worthy of Sadi Carnot himself – that osmotic pressure must be independent of the nature of the semipermeable membrane.

Although perhaps overshadowed by his colleagues, Ostwald had a talent for sensing the direction that the future would take and was also a shrewd judge of intellect – he instinctively felt that there were hidden treasures in Gibbs’ magnum opus. After spending an entire year translating ‘Equilibrium’ into German, Ostwald wrote to Gibbs:

“The translation of your main work is nearly complete and I cannot resist repeating here my amazement. If you had published this work over a longer period of time in separate essays in an accessible journal, you would now be regarded as by far the greatest thermodynamicist since Clausius – not only in the small circle of those conversant with your work, but universally—and as one who frequently goes far beyond him in the certainty and scope of your physical judgment. The German translation, hopefully, will more secure for it the general recognition it deserves.”

The following year – 1892 – another respected scientist sent a letter to Gibbs regarding ‘Equilibrium’. This time it was the British physicist, Lord Rayleigh, who asked Gibbs:

“Have you ever thought of bringing out a new edition of, or a treatise founded upon, your “Equilibrium of Het. Substances.” The original version though now attracting the attention it deserves, is too condensed and too difficult for most, I might say all, readers. The result is that as has happened to myself, the idea is not grasped until the subject has come up in one’s own mind more or less independently.”

Rayleigh was probably just being diplomatic when he remarked that Gibbs’ treatise was ‘now attracting the attention it deserves’. The plain fact is that nobody gave it any attention at all. Gibbs and his explanation of osmosis in terms of chemical potential was passed over, while European and especially British theoretical work centered on the more familiar and more easily understood concept of vapor pressure.

– – – –

Gibbs tries again

Although van ‘t Hoff’s osmotic pressure equation ΠV = RT soon gained the status of a law, the gaseous theory that lay behind it remained clouded in controversy. In particular, van ‘t Hoff’s deduction of the proportionality between osmotic pressure and concentration was an analogy rather than a proof, since it made use of hypothetical considerations as to the cause of osmotic pressure. Following Ostwald’s proof that these were inadmissible, the gaseous theory began to look hollow. A better theory was needed.


Lord Kelvin (1824-1907) and Lord Rayleigh (1842-1919)

This was provided in 1896 by the British physicist, Lord Rayleigh, whose proof was free of hypothesis but did make use of Avogadro’s law, thereby continuing to assert a parallelism between the properties of solutions and gases. Heavyweight opposition to this soon materialized from the redoubtable Lord Kelvin. In a letter to Nature (21 January 1897) he charged that the application of Avogadro’s law to solutions had “manifestly no theoretical foundation at present” and further contended that

“No molecular theory can, for sugar or common salt or alcohol, dissolved in water, tell us what is the true osmotic pressure against a membrane permeable to water only, without taking into account laws quite unknown to us at present regarding the three sets of mutual attractions or repulsions: (1) between the molecules of the dissolved substance; (2) between the molecules of water; (3) between the molecules of the dissolved substance and the molecules of water.”

Lord Kelvin’s letter in Nature elicited a prompt response from none other than Josiah Willard Gibbs in America. Twenty-one years had now passed since James Clerk Maxwell first tried to interest Europe in the concept of chemical potentials. In Kelvin’s letter, with its feisty attack on the gaseous theory, Gibbs saw the opportunity to try again.

In his letter to Nature (18 March 1897), Gibbs opined that “Lord Kelvin’s very interesting problem concerning molecules which differ only in their power of passing a diaphragm, seems only to require for its solution the relation between density and pressure”, and highlighted the advantage of using his potentials to express van ‘t Hoff’s law:

“It will be convenient to use certain quantities which may be called the potentials of the solvent and of the solutum, the term being thus defined: – In any sensibly homogeneous mass, the potential of any independently variable component substance is the differential coefficient of the thermodynamic energy of the mass taken with respect to that component, the entropy and volume of the mass and the quantities of its other components remaining constant. The advantage of using such potentials in the theory of semi-permeable diaphragms consists partly in the convenient form of the condition of equilibrium, the potential for any substance to which a diaphragm is freely permeable having the same value on both sides of the diaphragm, and partly in our ability to express van’t Hoff law as a relation between the quantities characterizing the state of the solution, without reference to any experimental arrangement.”

But once again, Gibbs and his chemical potentials failed to garner interest in Europe. His timing was also unfortunate, since British experimental research into osmosis was soon to be stimulated by the aristocrat-turned-scientist Lord Berkeley, and this in turn would stimulate a new band of British theoreticians, including AW Porter and HL Callendar, who would base their theoretical efforts firmly on vapor pressure.

– – – –

Things Come Full Circle

As the new century dawned, van ‘t Hoff cemented his reputation with the award of the very first Nobel Prize for Chemistry “in recognition of the extraordinary services he has rendered by the discovery of the laws of chemical dynamics and osmotic pressure in solutions”.

The osmotic pressure law was held in high esteem, and despite Lord Kelvin’s protestations, Britain was well disposed towards the Gaseous Theory of Solutions. The idea circulating at the time was that the refinements of the ideal gas law that had been shown to apply to real gases could equally well be applied to more concentrated solutions. As Lord Berkeley put it in the introduction to a paper communicated to the Royal Society in London in May 1904:

“The following work was undertaken with a view to obtaining data for the tentative application of van der Waals’ equation to concentrated solutions. It is evidently probable that if the ordinary gas equation be applicable to dilute solutions, then that of van der Waals, or one of analogous form, should apply to concentrated solutions – that is, to solutions having large osmotic pressures.”

Lord Berkeley’s landmark experimental studies on the osmotic pressure of concentrated solutions called renewed attention to the subject among theorists, who now had some fresh and very accurate data to work with. Alfred Porter at University College London attempted to make a more complete theory by considering the compressibility of a solution to which osmotic pressure was applied, while Hugh Callendar at Imperial College London combined the vapor pressure interpretation of osmosis with the hypothesis that osmosis could be described as vapor passing through a large number of fine capillaries in the semipermeable membrane. This was in 1908.

to05

H L Callendar (1863-1930)

So seventeen years after Wilhelm Ostwald conclusively proved that hypothetical arguments as to the cause of osmotic pressure were inadmissible, things came full circle with hypothetical arguments once more being advanced as to the cause of osmotic pressure.

And as for Gibbs, his ideas were as far away as ever from British and European science. The osmosis papers of both Porter (1907) and Callendar (1908) are substantial in referenced content, but nowhere does either of them make any mention of Gibbs or his explanation of osmosis on the basis of chemical potentials.

There is a special irony in this, since in Callendar’s case at least, the scientific papers of J Willard Gibbs were presumably close at hand, perhaps even on his office bookshelf. The copy of Gibbs’ works shown in the header photo of this post – a 1906 first edition – was Hugh Callendar’s personal copy, which he signed on the front endpaper.

to06

Hugh Callendar’s signature on the endpaper of his personal copy of Gibbs’ Scientific Papers, Volume 1, Thermodynamics.

– – – –

Epilogue

Throughout this post, I have made repeated references to that inspired piece of thinking by Wilhelm Ostwald which conclusively demonstrated that osmotic pressure must be independent of the nature of the membrane.

Ostwald’s reasoning is so lucid and compelling that one wonders why it didn’t put an end to speculation on osmotic mechanisms. But it didn’t, and hasn’t, and probably won’t.

Here is how Ostwald presented the argument in his own Lehrbuch der allgemeinen Chemie (1891). Enjoy.

ts08

“… it may be stated with certainty that the amount of pressure is independent of the nature of the membrane, provided that the membrane is not permeable by the dissolved substance. To understand this, let it be supposed that two separating partitions, A and B, formed of different membranes, are placed in a cylinder (fig. 17). Let the space between the membranes contain a solution and let there be pure water in the space at the ends of the cylinder. Let the membrane A show a higher pressure, P, and the membrane B show a smaller pressure, p. At the outset, water will pass through both membranes into the inner space until the pressure p is attained, when the passage of water through B will cease, but the passage through A will continue. As soon as the pressure in the inner space has been thus increased above p, water will be pressed out through B. The pressure can never reach the value P; water must enter continuously through A, while a finite difference of pressures is maintained. If this were realised we should have a machine capable of performing infinite work, which is impossible. A similar demonstration holds good if p>P ; it is, therefore, necessary that P=p; in other words, it follows necessarily that osmotic pressure is independent of the nature of the membrane.”

(English translation by Matthew Pattison Muir)

– – – –

P Mander July 2015

William Nicholson and Anthony Carlisle

May 1800: Carlisle (left) and Nicholson discover electrolysis

The two previous posts on this blog concerning the leaking of details about the newly-invented Voltaic pile to Anthony Carlisle and William Nicholson, and their subsequent discovery of electrolysis, are more about the path of temptation and birth of electrochemistry than about classical thermodynamics. In fact there was no thermodynamic content at all.

So by way of steering this set of posts back on track, I thought I would apply contemporary thermodynamic knowledge to Carlisle and Nicholson’s 18th century activities, in order to give another perspective to their famous experiments.

– – – –

The Voltaic pile

cn02

Z = zinc, A = silver

In thermodynamic terms, Alessandro Volta’s fabulous invention – an early form of battery – is a system capable of performing additional work other than pressure-volume work. The extra capability can be incorporated into the fundamental equation of thermodynamics by adding a further generalised force-displacement term: the intensive variable is the electrical potential E, whose conjugate extensive variable is the charge Q moved across that potential

tcn01

hence

tcn02

At constant temperature and pressure, the left-hand side identifies with dG. For a finite change, therefore

tcn03

where E is the electromotive force of the cell, Q is the charge moved across the potential, and ΔGrxn is the free energy change of the reaction taking place in the battery.

For one mole of reaction, Q = nF where n is the number of moles of electrons transferred per mole of reaction, and F is the total charge on a mole of electrons, otherwise known as the Faraday. For a reaction to occur spontaneously at constant temperature and pressure, ΔGrxn must be negative and so the EMF must be positive. Under standard conditions therefore

tcn04
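For readers who cannot view the equation images above, the chain of reasoning just described can be written out in standard notation. This is my reconstruction from the surrounding text, using the convention that EdQ is electrical work done by the cell:

```latex
% Fundamental equation with the added electrical work term:
dU = T\,dS - P\,dV - E\,dQ
% rearranging,
dU - T\,dS + P\,dV = -E\,dQ
% At constant T and P the left-hand side is d(U - TS + PV) = dG,
% so for a finite change
\Delta G_{rxn} = -EQ
% and for one mole of reaction under standard conditions, with Q = nF:
\Delta G^{\circ}_{rxn} = -nFE^{\circ}
```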

The redox reaction which took place in the Voltaic pile constructed by Carlisle and Nicholson was

tcn05

ΔG°rxn for this reaction is –146.7 kJ/mol, and n = 2, giving an EMF of 0.762 volts.

We know from Nicholson’s published paper that their first Voltaic pile consisted of “17 half crowns, with a like number of pieces of zinc”. We also know that Volta’s method of constructing the pile – which Carlisle and Nicholson followed – resulted in the uppermost and lowest discs acting merely as conductors for the adjoining discs. Thus there were not 17, but 16 cells in Carlisle and Nicholson’s first Voltaic pile, giving a total EMF of 12.192 volts.
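As a numerical check, here is a short Python sketch (my addition, not part of the original post) that recovers the single-cell EMF from the quoted ΔG°rxn via E° = –ΔG°/(nF) and scales it to the 16-cell pile. Because the quoted –146.7 kJ/mol is itself rounded, the computed value comes out at 0.760 V rather than the 0.762 V obtained from tabulated electrode potentials:

```python
# Single-cell EMF from the standard free energy change: E = -ΔG / (nF)
FARADAY = 96485.0  # charge on one mole of electrons, C/mol

def cell_emf(delta_g_joules: float, n_electrons: int) -> float:
    """EMF in volts of a cell whose reaction has free energy delta_g_joules (J/mol)."""
    return -delta_g_joules / (n_electrons * FARADAY)

e_cell = cell_emf(-146.7e3, 2)   # pile reaction quoted above, n = 2
pile_emf = 16 * e_cell           # 16 cells in series in the first pile

print(f"single cell: {e_cell:.3f} V")    # 0.760 V (0.762 V from tabulated potentials)
print(f"16-cell pile: {pile_emf:.2f} V") # 12.16 V
```

The small discrepancy against the 12.192 volts quoted above comes entirely from rounding in the free energy figure.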

– – – –

External work

On May 1st, 1800, Carlisle and Nicholson set up their Voltaic pile, gave themselves an obligatory electric shock, and then began experiments with an electrometer which showed “that the action of the instrument was freely transmitted through the usual conductors of electricity, but stopped by glass and other non-conductors.”

Electrical contact with the pile was assisted by placing a drop of water on the uppermost disc, and it was this action which opened the path to discovery. Nicholson records in his paper that at an early stage in these experiments, “Mr. Carlisle observed a disengagement of gas round the touching wire. This gas, though very minute in quantity, evidently seemed to me to have the smell afforded by hydrogen”.

The fact that gas was formed “round the touching wire” indicates that the contact was intermittent: when the wire was in contact with the water drop but not the zinc disc, a miniature electrolytic cell was formed and hydrogen gas was evolved at the wire cathode, while at the anode the zinc conductor was immediately oxidised as soon as the oxygen gas was formed.

In thermodynamic terms, the electrochemical cells in the pile were being used to do external work on the electrolytic cell in which the decomposition of water took place

tcn06

ΔG°rxn for this reaction is +237.2 kJ/mol. So it can be seen that the external work done by the pile consists of driving what is in effect the combustion of hydrogen in a backwards direction to recover the reactants.
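The same ΔG = –nFE relation, run the other way, gives the minimum voltage the pile had to supply to drive this decomposition. Here is a minimal Python sketch (my illustration, assuming n = 2 electrons per mole of water):

```python
# Minimum EMF needed to drive a non-spontaneous reaction: E_min = ΔG / (nF)
FARADAY = 96485.0  # charge on one mole of electrons, C/mol

def minimum_emf(delta_g_joules: float, n_electrons: int) -> float:
    """Smallest applied voltage that can drive a reaction with positive ΔG."""
    return delta_g_joules / (n_electrons * FARADAY)

# H2O(l) -> H2(g) + 1/2 O2(g): ΔG° = +237.2 kJ/mol, two electrons per mole
print(f"{minimum_emf(237.2e3, 2):.3f} V")  # 1.229 V
```

Note that this 1.229 volts is exactly the EMF of the hydrogen fuel cell discussed at the end of this post: the decomposition of water and the fuel-cell reaction are one and the same process run in opposite directions.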

– – – –

Intuitive

Carlisle and Nicholson were intuitive physical chemists. They knew that water was composed of two gases, hydrogen and oxygen, so when bubbles which smelled of hydrogen were observed in their first experiment, it immediately set them thinking. Nicholson wrote of being “led by our reasoning on the first appearance of hydrogen to expect a decomposition of water.”

cn06

William Nicholson (1753-1815)

Nicholson used the term decomposition, so it seems safe to assume they formed the notion that just as water is composed from its constituent gases, it can be decomposed to recover them. That is a powerful conception, the idea that the combustion of hydrogen is a reversible process.

Whether Carlisle and Nicholson extended this thought to other chemical reactions, or even to chemical reactions in general, we do not know. But their demonstration of reversibility, beneath which the principle of chemical equilibrium lies, was an achievement of perhaps even greater moment than the discovery of electrolysis by which they achieved it.

vol04

Anthony Carlisle (1768-1840)

– – – –

Redox reactions

Carlisle and Nicholson’s discovery of electrolysis was made possible by the fact that the decomposition of water into hydrogen and oxygen is a redox reaction. In fact every reaction that takes place in an electrolytic cell is a redox reaction, with oxidation taking place at the anode and reduction taking place at the cathode. The overall electrolytic reaction is thus divided into two half-reactions. In the case of the electrolysis of water, we have

tcn07

These combined half-reactions are not spontaneous. Driving this redox process requires EQ work, which in Carlisle and Nicholson’s case was supplied by the Voltaic pile.

Redox reactions also take place in every voltaic cell, with oxidation at the anode and reduction at the cathode. The difference is that the combined half-reactions are spontaneous, thereby making the cell capable of performing EQ work.

The spontaneous redox reactions in voltaic cells, and the non-spontaneous redox reactions in electrolytic cells, can best be understood by looking at a table of standard oxidation potentials arranged in descending order, such as the one shown below. Using such a list, the EMF of the cell is calculated by subtracting the cathode potential from the anode potential.

[Note that if you use a table of standard reduction potentials, the signs are reversed and the EMF of the cell is calculated by subtracting the anode potential from the cathode potential.]

For voltaic cells, the half-reaction taking place left-to-right at the anode (oxidation) appears higher in the list than the half-reaction taking place right-to-left at the cathode (reduction). The EMF of the cell is positive, and so ΔG will be negative, meaning that the cell reaction is spontaneous and thus capable of performing EQ work.

The situation is reversed for electrolytic cells. The half-reaction taking place left-to-right at the anode (oxidation) appears lower in the list than the half-reaction taking place right-to-left at the cathode (reduction). The EMF of the cell is negative, and so ΔG will be positive, meaning that the cell reaction is non-spontaneous and that EQ work must be performed on the cell to facilitate electrolysis.
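The bookkeeping described in the last three paragraphs can be sketched in a few lines of Python (my own illustration; the potential values are standard textbook figures, not taken from the original table image). With oxidation potentials, E_cell = E_anode – E_cathode, and the sign of the result tells you whether the cell is voltaic or electrolytic:

```python
# EMF from a table of standard OXIDATION potentials (volts):
# E_cell = E_ox(anode half-reaction) - E_ox(cathode half-reaction)
OXIDATION_POTENTIALS = {
    "Zn -> Zn2+ + 2e-":        +0.762,
    "H2 -> 2H+ + 2e-":          0.000,
    "2H2O -> O2 + 4H+ + 4e-":  -1.229,
}

def cell_emf(anode: str, cathode: str) -> float:
    """Subtract the cathode oxidation potential from the anode oxidation potential."""
    return OXIDATION_POTENTIALS[anode] - OXIDATION_POTENTIALS[cathode]

# Voltaic pile: zinc oxidised at the anode, hydrogen evolved at the cathode
e_pile = cell_emf("Zn -> Zn2+ + 2e-", "H2 -> 2H+ + 2e-")
print(f"pile: {e_pile:+.3f} V")          # +0.762 V, spontaneous (voltaic)

# Water electrolysis: oxygen evolved at the anode, hydrogen at the cathode
e_water = cell_emf("2H2O -> O2 + 4H+ + 4e-", "H2 -> 2H+ + 2e-")
print(f"electrolysis: {e_water:+.3f} V") # -1.229 V, needs external EQ work
```

A positive EMF means negative ΔG and a spontaneous cell reaction; a negative EMF means the reaction must be driven, exactly as the text explains.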

The half-reactions of Carlisle and Nicholson’s Voltaic pile, and their platinum-electrode electrolytic cell, are indicated in the table below.

tcn08

Table of standard oxidation potentials

– – – –

The advent of the fuel cell

Anthony Carlisle and William Nicholson

If Carlisle and Nicholson had disconnected their platinum-wire electrolytic cell after bubbles of hydrogen and oxygen had formed on the respective electrodes, and then connected an electrometer across the wires, they would have added yet another momentous discovery to that of electrolysis. They would have discovered the fuel cell.

From a thermodynamic perspective, it is a fairly straightforward matter to comprehend. Under ordinary temperature and pressure conditions, the decomposition of water is a non-spontaneous process; work is required to drive the reaction shown below in the non-spontaneous direction. This work was provided by the Voltaic pile, the effect of which was to increase the Gibbs free energy of the reaction system.

tcn10

Upon disconnection of the Voltaic pile, and the substitution of a circuit wire, the reaction would spontaneously proceed in the reverse direction, decreasing the Gibbs free energy of the reaction system. This system would be capable of performing EQ work.

The reversal of reaction direction transforms the electrolytic cell into a voltaic cell, whose arrangement can be written

H2(g)/Pt | electrolyte | Pt/O2(g)

As can be seen from the above table, the EMF of this voltaic cell is 1.229 volts. We know it today as the hydrogen fuel cell.

Carlisle and Nicholson most surely created the first fuel cell in May 1800. They just didn’t apprehend it, nor did they operate it as a voltaic cell – at least we have no record that they did. So we must classify Carlisle and Nicholson’s fuel cell as an overlooked actuality; an unnoticed birth.

It would take another 42 years before a barrister from the city of Swansea in Wales, William Robert Grove QC, developed the first operational fuel cell, whose essential design features can clearly be traced back to Carlisle and Nicholson’s original.

– – – –

Mouse-over link to the original papers mentioned in this post

Nicholson’s paper (begins on page 179)

– – – –

P Mander September 2015


CarnotCycle would like to say thank you to everyone who has visited this blog since its inception in August 2012.

Thermodynamics may be a niche topic on WordPress, but it’s a powerful subject with global appeal. CarnotCycle’s country statistics show that thermodynamics interests many, many people. They come to this blog from all over the world, and they keep coming.

It’s wonderful to see all this activity, but perhaps not so surprising. After all, thermodynamics has played – and continues to play – a major role in shaping our world. It can be a difficult subject, but time spent learning about thermodynamics is never wasted. It enriches knowledge and empowers the mind.

– – – –