In the study of chemical reactions, thermodynamics enables us to calculate changes in state functions such as enthalpy, entropy and free energy, and determine the direction in which a reaction is spontaneous. But it tells us nothing about the speed of reaction; that is the province of chemical kinetics. Thermodynamics and chemical kinetics can be viewed as complementary disciplines, which together provide the means by which the course of a reaction can be elucidated.

A classic case which exemplifies the dual application of thermodynamics and chemical kinetics is the Tottenham Court Road gas explosion which occurred in July 1880.

**– – – –**

**The incident**

It was a time of great expansion of the network for gas pipeline transport in London. Gas lighting of streets and buildings was well-established, but now the gas stove was about to become a commercial success, and new gas mains were being laid to supply the anticipated demand.

The Gas Light and Coke Company, which supplied coal gas from a number of gasworks in London, had laid a new 1.2 kilometer (0.75 mile) section of main from Bedford Square to Fitzroy Square, the pipeline crossing Tottenham Court Road at the junction with Bayley Street and running along Percy Street before turning north along the entire length of Charlotte Street.

On the evening of Monday 5th July 1880, workmen were preparing to connect the new main to the existing network at Bayley Street. Unknown to them however, a faulty valve at the other end of the new main was leaking coal gas, which had mingled with the air in the pipe to form an explosive mixture. In a presumed act of carelessness by one of the workmen at Bayley Street, a flame or other ignition source came in close proximity to the pipe.

The gas mixture detonated and the explosion ripped through the entire length of the new 1.2 kilometer main. A number of people were killed and injured in the blast, and 400 houses were damaged by flying debris. The entire incident lasted about 12 seconds.

**– – – –**

**The investigation**

A singularly worrying feature of the Tottenham Court Road gas explosion was that it had ripped through over a kilometer of pipeline in a matter of seconds. How could this happen? And how easily could this happen again? For the safety of millions of Londoners, answers had to be found.

The authorities turned to one of the country’s leading chemists, Augustus Vernon Harcourt (1834-1919), who was conducting a program of research in chemical kinetics at Oxford University. Together with his student Harold Baily Dixon (1852-1930), Harcourt began to investigate the rates of propagation of gaseous explosions.

In what sounds like a rather risky experiment, they set up long metal pipes under the Dining Hall of Balliol College Oxford to measure the speed with which explosion waves travel when a mixture of air and coal gas detonates.

Twenty-three years earlier, the German chemist Robert Bunsen (of Bunsen burner fame) had investigated the rate of propagation for the ignition of coal gas and oxygen and concluded that the flame front velocity was less than 1 meter per second. From the experiments at Balliol however, Harcourt and Dixon arrived at a very different answer. In a report to the Board of Trade on the Tottenham Court Road blast, Harcourt concluded that the velocity of a coal gas/air explosion wave exceeded 100 yards per second (91 meters per second).

From the safety point of view, Harcourt and Dixon had shown how absolutely essential it was to prevent air becoming mixed with coal gas in the gas pipeline network. But it would take decades before sufficient theoretical progress was made to allow a detailed understanding of what exactly happened in the great gas explosion of 1880.

**– – – –**

**Branching chains**

The development of chemical kinetics involved many different contributors in the decades after Harcourt and Dixon’s pioneering work at Oxford. Theories were advanced on several different aspects of the subject, but one piece of theoretical work had particular relevance to the study of explosions.

In 1921, a Danish physical chemist by the name of Jens Anton Christiansen (1888-1969) completed his PhD studies in reaction kinetics at Copenhagen University. In his thesis he incorporated an idea first suggested by Bodenstein in 1913 and introduced the term “kædereaktion”. This term, and the conceptual idea behind it, attracted considerable attention and the equivalent English expression “chain reaction” came into use. Two years later, Christiansen and the Dutch physicist Hendrik Anthony Kramers (1894-1952) published a paper in which they suggested the possibility of branching chains. Their idea was that a chain reaction could involve steps in which one chain carrier (an atom or radical) might not only regenerate itself but also produce an additional chain carrier. If such chain branching occurred, the number of chain carriers could increase extremely rapidly and result in an explosion.

The idea proved to be well-founded, and was further developed by Nikolai Semyonov (1896-1986) and Cyril Norman Hinshelwood (1897-1967). Their work also showed that chain carriers were removed at the walls of the reaction vessel. If the rate of removal of the chain carriers was fast enough to counteract the effect of chain branching, a steady reaction ensued. But if the removal rate could not keep pace with the chain branching rate, an explosion would result.

On the basis of their thinking, the reaction rate expression assumed the form

reaction rate = F / (f_{a} – f_{b})

where F is a function of the concentrations characteristic of the chain branching step, f_{a} is a function determining the removal of chain carriers, and f_{b} is a function expressing the branching nature of the chain reaction.

In steady reaction conditions, f_{a} is sufficiently greater than f_{b}. But if conditions change so that f_{a} and f_{b} converge, a point will be reached where the difference between them becomes vanishingly small. The reaction rate will soar towards infinity however small F may be, and the evolution of heat in the system will be so great as to cause an explosion.
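
The behavior of this expression near the explosion limit can be illustrated with a short numerical sketch (the function values below are arbitrary illustrative numbers, not measured quantities):

```python
def reaction_rate(F, f_a, f_b):
    """Branching-chain rate expression: rate = F / (f_a - f_b).

    F   -- factor characteristic of the chain branching step
    f_a -- function determining removal of chain carriers at the walls
    f_b -- function expressing the branching of the chain reaction
    """
    return F / (f_a - f_b)

# Even with a very small F, the rate soars as f_a approaches f_b
F = 1e-6
for f_a in (2.0, 1.5, 1.1, 1.01, 1.001):
    print(f"f_a = {f_a:6.3f}   rate = {reaction_rate(F, f_a, 1.0):.3e}")
```

As f_a converges on f_b = 1.0 the computed rate grows without bound, which is the mathematical signature of the transition from steady reaction to explosion.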

**– – – –**

**Piecing the facts together**

From the information contained in newspaper reports, and the application of kinetic theory and thermodynamics, it is possible to arrive at a likely explanation of why the great gas explosion of 1880 happened in the way it did.

It is known that coal gas leaked into the newly laid main at its northern end, and that detonation occurred at the other end in Bayley Street. From this it can be inferred that the entire pipeline between these two points contained coal gas admixed with the air that the pipe originally contained. On the assumption that the leaking valve was introducing coal gas at a modest and steady rate, it is likely that the partial pressures of the gases in the pipe were being brought into equilibrium as the coal gas seeped along the pipe.

Newspaper reports stated that the new main between Bayley Street and Fitzroy Square was a metal pipe of fixed (3 ft/0.91 m) diameter. The ratio of the surface area to the enclosed volume or, which is the same thing, the ratio of the circumference to the cross-sectional area

2πr / πr^{2} = 2/r

where r is the pipe radius, was therefore constant along its length*.

*assuming the geometry of the bend had no effect on f_{a}. This point is examined later.

At the moment of detonation at Bayley Street, it is a reasonable hypothesis that the function F in the Semyonov-Hinshelwood rate expression was not subject to large variations along the length of the new main. The same can be said of f_{b}, and since the ratio of the circumference to the cross-sectional area of the pipeline was constant, the function f_{a} determining the removal of chain carriers at the walls of the pipe was also constant. In short, the reaction rate expression applying at the end of the pipe – where detonation is known to have occurred – applied at every other point along its length.

At this juncture, it is convenient to recall the combustion reactions of the principal components of coal gas, namely hydrogen, methane and carbon monoxide:

2H_{2}(g) + O_{2}(g) → 2H_{2}O(g)
CH_{4}(g) + 2O_{2}(g) → CO_{2}(g) + 2H_{2}O(g)
2CO(g) + O_{2}(g) → 2CO_{2}(g)

We observe that from a stoichiometric perspective, none of the reactions involves an increase in volume; in fact two of them result in a decrease. The overall entropy of reaction is negative, and this tells us that the conversion of reactants into products, however rapidly it took place, could not in itself have resulted in any pressure increase under the constant volume conditions of the pipe.
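
The volume bookkeeping behind this observation can be checked by counting gaseous moles on each side of the three reactions (a minimal sketch; water is counted as vapor at the temperatures involved):

```python
# (gaseous moles of reactants, gaseous moles of products) for each combustion
reactions = {
    "hydrogen":        (3, 2),  # 2H2 + O2 -> 2H2O
    "methane":         (3, 3),  # CH4 + 2O2 -> CO2 + 2H2O
    "carbon monoxide": (3, 2),  # 2CO + O2 -> 2CO2
}

for name, (reactants, products) in reactions.items():
    print(f"{name:15s}  mole change: {products - reactants:+d}")

# No reaction increases the number of gaseous moles; two decrease it
assert all(p <= r for r, p in reactions.values())
```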

From an enthalpy of reaction perspective however, the situation is very different. The above reactions are all significantly exothermic processes – the calorific value of coal gas is typically around 20 megajoules per cubic meter. In the circumstances of detonation, the virtually instantaneous release of a large amount of heat would result in a similarly rapid rise in temperature, causing sudden compression of the adjacent volume element in the pipe and heating it to the point of detonation. This sequence would be repeated from one volume element to the next, with a wave of adiabatic compression intensifying the pressure as it traversed the pipe. A continuously propagating explosion would then follow the pressure wave along the course of the main as the pipe ruptured.

**– – – –**

**The bend in the pipe**

The junction of Percy Street with Charlotte Street was the only point along the entire length of the new main which deviated from a straight line. Here the pipeline executed a 90 degree turn, and it raises the question of how a detonation wave can go round corners. The exact construction of the bend is not recorded, but it is likely that an elbow joint was used.

Geometrically, the bend itself is a quadrant of a torus, whose geometry is such that regardless of whether the elbow has a long or short major radius R, the ratio of the surface area to the enclosed volume is constant

S/V = 4π^{2}Rr / 2π^{2}Rr^{2} = 2/r

where r is the pipe radius (the factor of one quarter for the quadrant cancels in the ratio, as does the major radius R). This is the same ratio as that of the straight pipe. The bend at the junction of Percy Street with Charlotte Street introduced no changes to the f_{a} term in the Semyonov-Hinshelwood rate expression, and thus the conditions for detonation were met at every point of the bend.

So the 90 degree elbow made no difference to the detonation wave. It simply turned sharp right and carried on up to Fitzroy Square, at a velocity of almost 100 meters per second.

**– – – –**

**Estimating the power of the explosion**

It is known from the analysis of coal gas that one volume of coal gas requires approximately 10 volumes of air for its complete combustion. This means that an explosive mixture with air cannot be formed at coal gas concentrations much above 9%, since there would be insufficient oxygen to support the necessary rate of reaction. Below about 7% coal gas concentration the mixture is also non-explosive, since too little combustible gas is present for the heat released to sustain flame propagation.

An average coal gas concentration of 8% throughout the pipeline is therefore a fair estimate, and seems plausible given that the new main contained air when laid and that coal gas was introduced at a modest rate from a leaking valve. We know that the new 1.2 kilometer main had a radius of 0.455 meters, giving a total volume of 780 cubic meters. At the moment of detonation, coal gas is estimated to have filled 8% of this volume i.e. 62 cubic meters. The calorific value of coal gas is typically 20 megajoules per cubic meter, so we can conclude that the Tottenham Court Road gas explosion released around 1,240 MJ in the 12 seconds it took to traverse the pipeline. The power of the explosion was therefore 1240/12 = 103 MW.
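
The arithmetic of this estimate is easily verified (all figures are taken from the text; the 8% concentration is the estimate made above):

```python
import math

length = 1200        # pipeline length, m
radius = 0.455       # pipe radius, m
gas_fraction = 0.08  # estimated average coal gas concentration
calorific = 20       # calorific value of coal gas, MJ per cubic meter
duration = 12        # duration of the explosion, s

volume = math.pi * radius ** 2 * length   # ~780 m^3
gas_volume = gas_fraction * volume        # ~62 m^3
energy = gas_volume * calorific           # ~1,250 MJ
power = energy / duration                 # ~104 MW

print(f"pipe volume : {volume:6.0f} m^3")
print(f"coal gas    : {gas_volume:6.0f} m^3")
print(f"energy      : {energy:6.0f} MJ")
print(f"power       : {power:6.0f} MW")
```

The small differences from the figures in the text arise from rounding 62.4 m^3 down to 62 before multiplying.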

**– – – –**

**Contemporary accounts**

Newspaper accounts remarked on the rapid progression of the explosion, with one commenting:

*“[The main pipe at Bayley Street] burst with a terrific report, and sheets of flame issued suddenly from the earth. Instantly the report seemed to run along Percy Street, which was torn up for sixty or seventy yards (ca. 60 meters), the paving stones flying on each side against the houses.”*

*“At the corner of Charlotte Street the basements of two houses were shattered. The paving stones were here also sent into the air, falling on and through the roofs of the houses opposite. Further on, the pipe burst again, near the corner of Bennett Street, where there is a large gap in the roadway. Another burst-up occurred near the corner of Howland Street, and at the corner of London Street (now Maple Street) still further on…”*

One eye-witness was in Percy Street when the explosion occurred. He experienced the effect of not only the pressure wave from the bursting pipe, but also the decompression wave which followed in its wake:

*“I was walking down Percy Street, when I felt the ground shaking under my feet. I immediately saw the centre of the street rising in the air. A tremendous report followed, and then there was a shower of bricks and stones. I felt myself lifted from the ground, and the next moment I was lying among the debris at the bottom of a deep hole in the roadway.”*

**– – – –**

P Mander December 2015

I can’t think of a better introduction to this post than Ludwig Boltzmann gave in his *Vorlesungen über Gastheorie* (Lectures on Gas Theory, 1896):

*“General thermodynamics proceeds from the fact that, as far as we can tell from our experiences up to now, all natural processes are irreversible. Hence according to the principles of phenomenology, the general thermodynamics of the second law is formulated in such a way that the unconditional irreversibility of all natural processes is asserted as a so-called axiom … [However] general thermodynamics (without prejudice to its unshakable importance) also requires the cultivation of mechanical models representing it, in order to deepen our knowledge of nature—not in spite of, but rather precisely because these models do not always cover the same ground as general thermodynamics, but instead offer a glimpse of a new viewpoint.”*

Today, the work of Ludwig Boltzmann (1844-1906) is considered among the finest in physics. But in his own lifetime he faced considerable hostility from those of his contemporaries who did not believe in the atomic hypothesis. As late as 1900, the kinetic-molecular theory of heat developed by Maxwell and Boltzmann was being vigorously attacked by a school of scientists including Wilhelm Ostwald, who argued that since mechanical processes are reversible and heat conduction is not, thermal phenomena cannot be explained in terms of hidden, internal mechanical variables.

Boltzmann refuted this argument. Mechanical processes, he pointed out, become effectively irreversible when the number of particles is sufficiently large. The spontaneous mixing of two gases is a case in point; it is known from experience that the process cannot spontaneously reverse – mixed gases don’t unmix. Today we regard this as self-evident, but in Boltzmann’s time his opponents did not believe in atoms or molecules; they considered matter to be continuous. So the attacks on Boltzmann’s theories continued.

Fortunately, this did not deter Boltzmann from pursuing his ideas, at least not to begin with. He saw that spontaneous processes could be explained in terms of probability, and that a system of many particles undergoing spontaneous change would assume – other things being equal – the most probable state, namely the one with the maximum number of arrangements. And this gave him a new way of viewing the equilibrium state.

One can see Boltzmann’s mind at work, thinking about particle systems in terms of permutations, in this quote from his Lectures on Gas Theory:

*“From an urn, in which many black and an equal number of white but otherwise identical spheres are placed, let 20 purely random drawings be made. The case that only black spheres are drawn is not a hair less probable than the case that on the first draw one gets a black sphere, on the second a white, on the third a black, etc. The fact that one is more likely to get 10 black spheres and 10 white spheres in 20 drawings than one is to get 20 black spheres is due to the fact that the former event can come about in many more ways than the latter. The relative probability of the former event as compared to the latter is the number 20!/10!10!, which indicates how many permutations one can make of the terms in the series of 10 white and 10 black spheres, treating the different white spheres as identical, and the different black spheres as identical. Each one of these permutations represents an event that has the same probability as the event of all black spheres.”*
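
The permutation count in the quote is easy to verify:

```python
from math import comb, factorial

# Number of distinct orderings of 10 white and 10 black spheres in 20 draws
ways = factorial(20) // (factorial(10) * factorial(10))
print(ways)  # 184756

# The same count expressed as a binomial coefficient
assert ways == comb(20, 10)

# Probability of "10 black, 10 white in some order" versus "all 20 black"
p_sequence = (1 / 2) ** 20   # probability of any one specific sequence
print(ways * p_sequence)     # ~0.176, against ~9.5e-7 for all black
```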

**– – – –**

By analyzing the ways in which systems of particles distribute themselves, and the various constraints to which particle assemblies are subject, important links came to be established between the statistical properties of assemblies and their bulk thermodynamic properties.

Boltzmann’s contribution in this regard is famously commemorated in the formula inscribed on his tombstone: S = k log W. There is powerful new thinking in this equation. While the classical thermodynamic definition of entropy by Rankine and Clausius was expressed in terms of temperature and heat exchange, Boltzmann gave entropy – and its tendency to increase in natural processes – a new explanation in terms of probability. If a particle system is not in its most probable state then it will change until it is, and an equilibrium state is reached.

**– – – –**

P Mander April 2016

The scientific study of the atmosphere can be said to have begun in 1643 with the invention of the mercury barometer by Evangelista Torricelli (1608-1647). Although the phenomenon had been observed and discussed by others – including Galileo – in the preceding decade, it was Torricelli who provided the breakthrough in understanding.

The prevailing view at the time was that air was weightless and did not exert any pressure on the mercury in the bowl. Instead, it was thought that the vacuum above the liquid in the barometer tube exerted a force of attraction that held the liquid suspended in the tube.

Torricelli challenged this view by proposing the converse argument. He asserted that air did have weight, and that the atmosphere exerted pressure on the mercury in the bowl which balanced the pressure exerted by the column of mercury. The vacuum above the mercury in the closed tube, in Torricelli’s opinion, exerted no attractive force and had no role in supporting the column of mercury in the tube*.

The assertion that air had weight, Torricelli realized, could be tested. In elevated places like mountains the reduced weight of the overlying atmosphere would exert less pressure, so the corresponding height of the mercury column in the barometer tube should be lower. It seems that Torricelli did not have the opportunity in his short life to do this experiment, but in the year following his death the experiment was carried out in France at the behest of the scientific philosopher Blaise Pascal (1623-1662).

**CarnotCycle wonders if Torricelli tilted the barometer tube and observed the disappearance of the space above the mercury – see diagram below. This would have shown that something other than a vacuum held the liquid suspended in the tube.*

**– – – –**

**The Torricelli experiment**

In 1644 the French salon theorist Marin Mersenne (1588-1648) travelled to Italy where he learned of Torricelli’s barometer experiment. He brought news of the experiment back with him to Paris, where the young Blaise Pascal was a regular attendant at Mersenne’s salon meetings.

Pascal had moved to Paris from his childhood home of Clermont-Ferrand. The 1,465 meter high Puy de Dôme was a familiar feature in the landscape he knew as a youngster, and it provided an ideal means of testing Torricelli’s thesis. Pascal’s brother-in-law Florin Périer lived in Clermont-Ferrand, and after some friendly persuasion, Périer ascended Puy de Dôme with a Torricellian barometer, taking measurements as he climbed.

At the base of the mountain, Périer recorded a mercury column height of 26 inches and 3½ lines. He then asked a colleague to observe this barometer throughout the day to see if any change occurred, while he set off with another barometer to climb the mountain. At the summit he recorded a mercury column height of 23 inches and 2 lines, substantially less than the measurement taken 1,465 meters below, where the barometer had remained steady.

The Puy de Dôme experiment provided convincing evidence that it was the weight of air, and thus atmospheric pressure, that balanced the weight of the mercury column.

**– – – –**

**Measuring pressure**

When Florin Périer conducted the Torricelli experiment on Puy de Dôme in 1648, the measurements he recorded were the heights of mercury columns in barometer tubes. From these measurements, Blaise Pascal inferred a comparison of atmospheric pressures at the top and bottom of the mountain.

This experiment took place, we should remind ourselves, when Isaac Newton was only 5 years old and had not yet formulated his famous laws which gave concepts like mass, weight, force and pressure a systematic, mathematical foundation. In the pre-Newtonian world of Torricelli and Pascal, their thinking was based on the balancing of weights in the familiar sense of a shopkeeper’s scales. The weight of the mercury column in the barometer tube, which acted on the mercury in the bowl, was balanced by the weight of the air acting on the mercury in the bowl. Since the height of the mercury column was directly proportional to its weight, it was valid to use a length scale marked on the barometer tube to compute the weight of the air acting on the mercury in the bowl.

It is instructive to compare the language of Robert Boyle (1627-1691) and Isaac Newton (1643-1727) when discussing the barometer in the decades which followed. In the second edition of Boyle’s *New Experiments Physico-Mechanicall* of 1662 – which contains the first statement of Boyle’s Law – the word pressure appears frequently and has a meaning synonymous with weight. In Isaac Newton’s *Principia* of 1687, pressure is regarded as a manifestation of force. Boyle and Newton are thus speaking in essentially the same terms since according to Newtonian principles, weight is a force.

**– – – –**

**Newtonian principles applied**

The crucial advance in atmospheric science that Newton supplied in his *Principia* was the second law, which gave mathematical expression to force, and thus to weight and pressure, through the famous formula

F = ma

The weight of a mercury column of cross-sectional area A and height h is

W = mg = ρAhg

where ρ is the mass density of mercury and g is the acceleration due to gravity. The pressure exerted by the mercury column, which balances the atmospheric pressure, is

P = W/A = ρgh

Thus P is directly proportional to h.

For a column of mercury 1 mm in height in a standard gravitational field (g = 9.80665 ms^{-2}) at 273 K, P is equal to 133.322 pascals. This is a unit of pressure called the torr. Pascal and Torricelli are thus both commemorated in units of pressure.
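
The figure of 133.322 pascals follows directly from P = ρgh, taking the density of mercury at 0 °C as 13595.1 kg m^{-3}:

```python
rho_hg = 13595.1   # density of mercury at 0 degrees C, kg/m^3
g = 9.80665        # standard gravitational acceleration, m/s^2
h = 0.001          # column height of 1 mm, expressed in meters

p = rho_hg * g * h             # pressure exerted by the column, Pa
print(f"1 mmHg = {p:.3f} Pa")  # 133.322 Pa, i.e. one torr

# The torr is also definable as 1/760 of a standard atmosphere
print(f"1 atm / 760 = {101325 / 760:.3f} Pa")
```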

**– – – –**

**A question of balance**

Torricelli, Pascal and Boyle were in agreement with the proposition that air has weight. According to Newton’s interpretation the atmosphere possesses mass which is subject to gravitational acceleration, resulting in a downward force. This raises the question – *Why doesn’t the sky fall down?*

Since the sky is observed to remain aloft, there must exist a counteracting upward force. The vital clue as to the nature of this force was obtained on Pascal’s behalf by Florin Périer on Puy de Dôme in 1648 – namely that pressure decreases with height in the atmosphere.

A difference in pressure produces a force. In this way a parcel of air in a vertical column of cross-sectional area A exerts a force in the opposite direction to the gravitational force, as shown in the diagram.

At equilibrium, the forces are equal. Thus

dp = – ρgdh

where ρ is the density of the air and h is the altitude.

**– – – –**

**The decrease of temperature with altitude**

The appearance of snow above a certain height in elevated places provides plain evidence that temperature decreases with altitude, at least in that part of the atmosphere into which our earthly landscape protrudes. No doubt Torricelli, Pascal and other scientific philosophers of their time noticed this phenomenon and pondered upon it. But the explanation had to wait for another two centuries until the industrial revolution began, ushering in the age of steam and the associated science of thermodynamics.

The air in the troposphere, the lowest layer of the atmosphere where almost all weather phenomena occur, exhibits convection currents which continually transport air from lower regions to higher ones, and from higher regions to lower ones. When air rises it expands as the pressure decreases and so does work on the air around it. Since air is a poor conductor and very little heat is transferred from the surroundings, thermodynamic principles dictate that this work is done at the expense of the air’s internal energy. As a result, rising air cools.

**– – – –**

Atmospheric convection processes fall within the province of the first law of thermodynamics, which can be expressed mathematically (see Appendix I) as

dQ = C_{V}dT + pdV

This equation states an energy conservation principle that applies to processes involving heat, work and internal energy. The atmospheric convection process is adiabatic, meaning that no heat flows into or out of the system i.e. dQ = 0. Applying this constraint and using the combined gas equation to eliminate pressure p, the above equation becomes

C_{V}(dT/T) + R(dV/V) = 0

Integration yields

C_{V}lnT + RlnV = constant

Converting from logarithms to numbers gives

T^{C_{V}}V^{R} = constant

Since by Mayer’s relation R = C_{P} – C_{V}

T^{C_{V}}V^{C_{P}–C_{V}} = constant, or equivalently TV^{γ–1} = constant

where γ = C_{P} / C_{V}. Using the combined gas equation to substitute V, the above equation can be rendered (with the help of ^{γ}√) as

Tp^{(1–γ)/γ} = constant

Applying logarithmic differentiation gives

dT/T + ((1–γ)/γ)(dp/p) = 0

Assuming hydrostatic equilibrium, dp can be substituted (dp = –ρgdh) giving

dT/T = ((1–γ)/γ)(ρg/p)dh

Since ρ = m/(RT/p) the above becomes

dT/dh = –((γ–1)/γ)(mg/R)

This adiabatic convection equation gives the rate at which the temperature of dry air falls with increasing altitude. Taking the following values: γ = 1.4 (dimensionless) ; R = 8.314 kg m^{2}s^{-2}K^{-1}mol^{-1} ; m = 0.0288 kg mol^{-1} ; g = 9.80665 ms^{-2} gives

dT/dh = –0.0097 K m^{-1}

i.e. a fall of roughly 9.7°C per kilometer of ascent.

At the top of Puy de Dôme (1465 meters), dry air will be 14°C cooler than at the base of the mountain. This explains why snow can appear on the summit while the grass is still growing on the lower slopes.
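
The closing figure can be reproduced with the values given above:

```python
gamma = 1.4      # heat capacity ratio Cp/Cv for dry air
R = 8.314        # gas constant, J K^-1 mol^-1
m = 0.0288       # molar mass of dry air, kg mol^-1
g = 9.80665      # gravitational acceleration, m s^-2

# Dry adiabatic lapse rate from dT/dh = -((gamma - 1)/gamma) * m * g / R
lapse = (gamma - 1) / gamma * m * g / R
print(f"lapse rate: {lapse * 1000:.1f} K per km")        # ~9.7 K per km

height = 1465    # height of Puy de Dome, m
print(f"cooling at the summit: {lapse * height:.1f} K")  # ~14.2 K
```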

**– – – –**

**Appendix I**

In 1834, more than a century after Newton’s death, the French physicist and engineer Émile Clapeyron wrote a monograph entitled *Mémoire sur la Puissance Motrice de la Chaleur* (Memoir on the Motive Power of Heat). It contains the first appearance in print of the ideal gas equation, which combines the gas law of Boyle-Mariotte (PV)_{T} = k with that of Gay-Lussac (V/T)_{P} = k. Clapeyron wrote it in the form

pv = R(267 + t)

where R is a constant and the sum of the terms in parentheses can be regarded as the thermodynamic temperature.

Sixteen years later in 1850, the German physicist Rudolf Clausius wrote a monograph on the same subject entitled *Ueber die bewegende Kraft der Wärme und die Gesetze, welche sich daraus für die Wärmelehre selbst ableiten lassen* (On the Motive Power of Heat, and on the Laws which can be deduced from it for the Theory of Heat). Seeking an analytical expression of the principle that a certain amount of work necessitates the expenditure of a proportional quantity of heat, he arrived at the following differential equation in the case of an ideal gas

dQ = dU + AR(a + t)(dv/v)

where Q is the heat expended, U is an arbitrary function of temperature and volume, and A is the mechanical equivalent of heat. Earlier in his paper Clausius had represented Clapeyron’s combined statement of the laws of Boyle-Mariotte and Gay-Lussac as pv = R(a + t), so he recognized the right-hand term as corresponding to pdv, the external work done during the change

dQ = dU + pdv

(in modern units, where heat and work are measured alike, A = 1). We know this equation today as an expression of the first law of thermodynamics, where U is the internal energy of the system under consideration.

U is a function of T and V so we may write the partial differential equation

dU = (∂U/∂T)_{V}dT + (∂U/∂V)_{T}dV

Since U for an ideal gas is independent of volume and dU/dT is the heat capacity at constant volume C_{V}, the first law for an ideal gas takes on the form

dQ = C_{V}dT + pdV
**– – – –**

P Mander January 2018

In 17th century France, dice games were a popular and fashionable habit. All kinds of people played at dice – soldiers, sailors, socialites, aristocrats, celebrities … and professional gamblers. Every era in history has featured this latter group, often colourful characters with sharp minds living off their wits.

Such an individual was Antoine Gombaud (1607-1684), who had adopted the high-flown title of *Chevalier de Méré* and was known as a flamboyant big spender in gambling circles. He also had good connections with Renaissance intellectuals, and was himself a notable Salon theorist.

Professional gamblers have one overriding aim in life, which is to win. This is easier said than done however as gambling necessarily involves games of chance, and chance is a fickle creature. So the professional gambler has to proceed with caution until he finds a wager whose odds are in his favor. Then he brings big money to the table, plays long and hard, and walks away a richer man.

**Calculating success**

This was how Gombaud operated. But unlike most other gamblers, he did not rely solely on experience to show him which bets were favorable. He had an analytical turn of mind, and had started to work out on the basis of mathematical principle whether a certain game had favorable odds.

He first applied his thinking to a popular dice game in which players wagered on a six appearing in four throws of a die. He correctly reasoned that since each number on a six-sided die was equally likely to occur, the chance of getting a six on a single throw must be 1/6. He then considered the chance of getting a six if a die were thrown four times instead of once. He reasoned that the chance of success would be four times greater since each throw represented a separate opportunity for a six to occur, and he calculated that chance as 4 x 1/6 = 2/3. In other words, the odds of winning were favorable (i.e. >1/2).

Long before the law of large numbers was formulated, Gombaud seems to have intuitively understood that these favorable odds meant that although the outcome of an individual game could not be predicted, success would be assured if enough games were played.

Gombaud made piles of money out of this game, thereby cementing belief in his method of mathematical analysis as a means of identifying a winning bet. Buoyed by this success he extended the same reasoning to another game where he calculated that the odds of winning were favorable. But an unpleasant surprise was in store.

**Unexpected losses**

Gombaud’s new focus of attention was a dice game in which players wagered on getting a double six in twenty four throws of two dice. He correctly reasoned that the chance of getting a double six in a single throw of two dice was 1/36. Then applying his formula he calculated the chance of success as 24 x 1/36 = 2/3, the same favorable odds as in the previous game.

Emboldened by this analysis, Gombaud brought a stack of money to the dice table to wager on getting double sixes. But his expectations did not materialise; in fact the more he played this game, the more his losses mounted. Gombaud simply could not understand it. Both games had exactly the same favorable odds. So why did he win at one and lose at the other?

Desperate for an explanation of his losses, he wrote in 1654 to one of the foremost thinkers of his time, Blaise Pascal (1623-1662), who in turn shared the news of *“De Méré’s paradox”* with the profoundly talented amateur mathematician Pierre Fermat (1607-1665). The two began a legendary correspondence, out of which the theory of probability was born and the paradox was solved.

**Fallacious reasoning**

The work of Pascal and Fermat revealed Gombaud’s mistake in thinking that the chances of success with n throws could be calculated by multiplying the chance of success for a single throw by n.

Taking the first game as an example, Gombaud thought that after two throws of the die the chance of success doubled from 1/6 to 1/3. But a chart of all 36 possible outcomes shows that the total number of favorable outcomes (shown in gold) is not 12, but 11.

Pascal and Fermat no doubt saw that the ratio of favorable outcomes to total outcomes after two throws could be written as

11/36 = 1 – (5/6)^2

and after three throws as

91/216 = 1 – (5/6)^3

and so on. Since 5/6 was the chance of failure in a single throw, and the exponent was the number of throws, the formula for the chance of getting *at least one success in n throws* could be generalised as

P = 1 – q^n

where q is the chance of failure in a single throw. This was the correct formula for computing whether the odds were favorable or not. Contrast this with the formula Gombaud used

P = n x p

where p is the chance of success in a single throw.

It is instructive to compare Gombaud’s incorrect formula with the correct one for the first game

Gombaud: n x p = 4 x 1/6 ≈ 0.667
Correct: 1 – q^n = 1 – (5/6)^4 ≈ 0.518

and the second game

Gombaud: n x p = 24 x 1/36 ≈ 0.667
Correct: 1 – q^n = 1 – (35/36)^24 ≈ 0.491

The correct figures make it clear why Gombaud won at the first game and lost at the second. They also show what a knife-edge situation it was. If the second game had been played with just one more throw (n=25, 1-q^n = 0.506) Gombaud would have won both games. There would have been no paradox to explain, and the genius minds of Pascal and Fermat might never have been applied to founding probability theory!
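For readers who want to check the arithmetic, the corrected formula is easy to evaluate with a few lines of Python (mine, not from 1654):

```python
# Chance of at least one success in n throws: P = 1 - q**n,
# where q is the chance of failure on a single throw.
def p_at_least_one(q, n):
    return 1 - q**n

# Game 1: a six in four throws of one die (q = 5/6)
game1 = p_at_least_one(5/6, 4)        # ≈ 0.518, favorable

# Game 2: a double six in 24 throws of two dice (q = 35/36)
game2 = p_at_least_one(35/36, 24)     # ≈ 0.491, unfavorable

# One extra throw would have tipped game 2 in Gombaud's favor
game2_25 = p_at_least_one(35/36, 25)  # ≈ 0.506

print(round(game1, 3), round(game2, 3), round(game2_25, 3))
```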

**– – – –**

**Probability profiles**

**Game 1** *(probability 1 – (5/6)^n of at least one six in n throws; the curve crosses ½ at n = 4)*

**Game 2** *(probability 1 – (35/36)^n of at least one double six in n throws; the curve crosses ½ only at n = 25)*

**– – – –**

P Mander November 2017

As shown in previous posts on the CarnotCycle blog, it is possible to compute dew point temperature and absolute humidity (defined as water vapor density in g/m^3) from ambient temperature and relative humidity. This adds value to the output of RH&T sensors like the DHT22 pictured above, and extends the range of useful parameters that can be displayed or toggled on temperature-humidity gauges employing these sensors.

Meteorological opinion* suggests that dew point temperature is a more dependable parameter than relative humidity for assessing climate comfort especially during summer, while absolute humidity quantifies water vapor in terms of mass per unit volume. In effect this added parameter turns an ordinary temperature-humidity gauge into a gas analyzer.

*https://www.weather.gov/arx/why_dewpoint_vs_humidity
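The conversion formulas themselves are not reproduced in this post, but a common way to obtain both derived quantities is the Magnus approximation for saturation vapor pressure together with the ideal gas law. The sketch below is illustrative; its constants are one standard parameterisation, not necessarily the ones used elsewhere on the blog:

```python
import math

# Magnus approximation constants (one common parameterisation)
A, B = 17.625, 243.04  # B in degrees Celsius

def dew_point_c(t_c, rh):
    """Dew point (deg C) from ambient temperature (deg C) and RH (%)."""
    gamma = math.log(rh / 100) + A * t_c / (B + t_c)
    return B * gamma / (A - gamma)

def absolute_humidity_g_m3(t_c, rh):
    """Water vapor density (g/m^3) via the ideal gas law."""
    svp = 610.94 * math.exp(A * t_c / (B + t_c))  # saturation vapor pressure, Pa
    vp = svp * rh / 100                           # actual vapor pressure, Pa
    # rho = p * M / (R * T), with M(H2O) = 18.015 g/mol
    return vp * 18.015 / (8.314 * (t_c + 273.15))
```

As a sanity check, 20°C at 60% RH gives a dew point near 12°C and roughly 10 g/m^3 of water vapor.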

**– – – –**

**Hardware**

I used an Arduino Uno microcontroller board and a wired DHT22 sensor with data output to a 16×2 liquid crystal display. Circuit components are uncomplicated: a 10 kΩ potentiometer, a 220 Ω resistor and a few jumper and breadboard wires are all that is needed, with power supplied by a 9V battery* after programming via USB.

*http://www.instructables.com/id/Powering-Arduino-with-a-Battery/

**– – – –**

**Circuitry**

I wired the LCD as per guidance on the Arduino website. The pot controls contrast on the LCD. The DHT22 was wired to take 5V from the breadboard power rail with sensor data routed to digital pin 7. The sensor version that I used (Adafruit AM2302) has a built-in 5.1 kΩ pull-up resistor.

**– – – –**

**Code**

The DHT22 has a sampling rate of 0.5 Hz, which some regard as a weakness, but in the context of a temperature-humidity gauge the criticism is rather academic since it would serve no purpose to output data to the LCD at such a rapid rate. I set the display refresh to 30 seconds. Note the built-in option to display ambient temperature and dew point temperature in Celsius or Fahrenheit.

**– – – –**

**Experiment**

I used the unit to investigate the change in temperature and humidity parameters in a bathroom (enclosed volume 11.6 m^3) before and after operating the shower at a temperature of 40°C for about 5 minutes. The sensor was placed 60 cm above floor level at the midpoint of the room.

Here is the data display before the shower

and after the shower

The displayed data shows that bathroom temperature stayed constant during the experiment while the relative humidity increased markedly. This result could have been obtained with an ordinary temperature-humidity gauge, but the smart gauge gives additional information.

In contrast to the steady ambient temperature, the dew point temperature shows a sharp rise from a comfortable 11.8°C (53°F) to a humid 18.3°C (65°F). The absolute humidity data shows an even greater increase – a 50% hike in water vapor concentration from 10 to 15 grams per m^3 in a matter of minutes.
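Assuming the vapor was well mixed through the room (an idealisation), the displayed figures let one estimate how much water the shower added to the air:

```python
# Water vapor added to the bathroom air during the shower,
# assuming the air is well mixed (an idealisation)
room_volume = 11.6   # m^3, enclosed bathroom volume from the experiment
ah_before = 10.0     # g/m^3, absolute humidity before the shower
ah_after = 15.0      # g/m^3, absolute humidity after the shower

vapor_added = (ah_after - ah_before) * room_volume
print(f"{vapor_added:.0f} g of water vapor added")
```

That is, about 58 g of water entered the air as vapor during roughly five minutes of showering.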

**– – – –**

© P Mander, June 2018

A couple of blocks down from the Metro station *Jussieu* in Paris’s 5th arrondissement lies Rue Cuvier, which runs along the north-western edge of the botanic gardens that house the Natural History Museum. The other side of the road is bordered by various institutes of the Sorbonne, notably UPMC (formerly Pierre and Marie Curie University).

The Curies have historical associations with a number of streets in the Latin Quarter, and Rue Cuvier in particular. Pierre Curie was born at No.16 and it was in a science faculty building in this street that the Curies conducted their fundamental research on radium between 1903 and 1914. The building still exists, shielded from public curiosity by a set of prison-style metal gates, and it was in this laboratory that the first pioneering research into what would later be recognized as nuclear energy was conducted in 1903.

Yet it was not the renowned husband-and-wife team which carried out this experiment. It was in fact Pierre Curie and his young graduate assistant Albert Laborde who did the work and reported it in Comptes Rendus in a note entitled *Sur la chaleur dégagée spontanément par les sels de radium* (On the spontaneous production of heat by radium salts). The note, which barely covers two pages, was published in March 1903.

The laboratory in Rue Cuvier where the Curies and Laborde worked was at No.12. Just across the street is No.57, which once housed the Applied Physics laboratory of the Natural History Museum. It was here in 1896 that Henri Becquerel serendipitously discovered the strange phenomenon of radioactivity.

Between that moment of discovery on one side of Rue Cuvier and Curie and Laborde’s remarkable experiment on the other, lay the years of backbreaking work in a shed in nearby Rue Vauquelin where the Curies, together with chemist Gustave Bémont, processed tons of waste from an Austrian uranium mine in order to extract a fraction of a gram* of the mysterious new element radium.

*the maximum amount of radium coexisting with uranium is in the ratio of their half-lives. This means that uranium ores can contain no more than 1 atom of radium for every 2.8 million atoms of uranium.

**– – – –**

**The Curie – Laborde experiment**

Pierre Curie and Albert Laborde were the first to make an experimental determination of the heat produced by radium because they were the first to have enough radium-enriched material to make the experiment practicable. It was a close-run thing though. Ernest Rutherford and Frederick Soddy had been busy working on radioactivity at McGill University in Canada since 1900, but they were hampered by lack of access to radium and were using much weaker thorium preparations. This situation would quickly change however when concentrated radium samples became available from Friedrich Giesel in Germany. By the summer of 1903, Soddy (now at University College London) and Rutherford would have their hands on Giesel’s supply. But Curie and Laborde had a head start, and they turned their narrow time advantage to good account.

**Methodology**

To determine the heat produced by their radium preparation, they used two different approaches – a thermoelectric method, and an ice calorimeter method.

This diagram of their thermoelectric device, taken from Mme Curie’s *Traité de Radioactivité (1910), Tome II, p272*, unfortunately lacks an explanation of the key, but the set-up essentially comprises a test ampoule containing the chloride salt of radium-enriched barium and a control ampoule of pure barium chloride. These are marked A and A’. The ampoules are placed in the cavities of brass blocks enclosed in inverted Dewar flasks D, D’ with some unstated packing material to keep the ampoules from falling down. The flasks are enclosed in containers immersed in a further medium-filled container E supported in a space enclosed by a medium F, all of which was presumably designed to ensure a constant temperature environment. The key feature is C and C’ which are iron-constantan thermocouples, embedded in the brass cavities, with their associated circuitry.

The current produced by the Seebeck effect resulting from the temperature difference between C and C’ was measured by a galvanometer. The radium ampoule was then replaced by an ampoule containing a platinum filament through which was passed a current whose heating effect was sufficient to obtain the same temperature difference. The equivalent rate of heat production by the radium ampoule could then be calculated using Joule’s law.
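The substitution step rests on Joule’s law: once the filament reproduces the same temperature difference, its electrical power equals the ampoule’s rate of heat production. The filament values below are purely illustrative (Curie and Laborde’s note does not give them); they are chosen only to land near the ~14 cal/h their 1 gram preparation liberated:

```python
# Joule's law: electrical power dissipated in the filament, P = I^2 * R.
# When the filament reproduces the radium ampoule's temperature
# difference, this power equals the ampoule's heat output.
I = 0.050       # filament current in amperes (hypothetical value)
R = 6.5         # filament resistance in ohms (hypothetical value)

P = I**2 * R                      # heat output in watts
cal_per_hour = P * 3600 / 4.184   # convert watts to calories per hour

print(f"{P*1000:.2f} mW = {cal_per_hour:.1f} cal/h")
```

With these made-up values the filament dissipates about 14 cal/h, the order of magnitude Curie and Laborde measured for their radium-enriched preparation.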

The second method used was a Bunsen ice calorimeter, which was known to be capable of very exact measurements using only a small quantity of the test substance. For details of the operational principle of this calorimeter, the reader is referred to this link:

http://thewaythetruthandthelife.net/index/2_background/2-1_cosmological/physics/j9.htm

The above diagram of the Bunsen calorimeter is taken from Mme Curie’s *Traité de Radioactivité (1910), Tome II, p273*.

**Results**

For most of their experiments, Curie and Laborde used 1 gram of a radium-enriched barium chloride preparation, which liberated approximately 14 calories (59 joules) of heat per hour. It was estimated from radioactivity measurements – no doubt using the quartz electrometer instrumentation invented by Curie – that the gram of test substance contained about one sixth of a gram of radium.

Measurements were also made on a 0.08 gram sample of pure radium chloride. These yielded results of the same order of magnitude without being absolutely in agreement. Curie and Laborde made it clear in their note that these were pathfinding experiments and that their aim was solely to demonstrate the fact of continuous, spontaneous emission of heat by radium and to give an approximate magnitude for the phenomenon. They stated:

*» 1 g of radium emits a quantity of heat of approximately 100 calories (420 joules) per hour.*

In other words, a gram of radium emitted enough heat in an hour to raise the temperature of an equal weight of water from freezing point to boiling point. And it was continuous emission, hour after hour for year after year, without any detectable change in the source material.
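The claim is simple arithmetic, worth making explicit: 100 calories delivered to 1 gram of water raises its temperature by 100 degrees:

```python
# 100 cal/h from 1 g of radium versus the heat needed to take an equal
# mass of water from freezing point to boiling point
heat_joules = 100 * 4.184   # 100 cal expressed in joules (~420 J)
m_water = 1.0               # g, equal to the mass of radium
c_water = 4.184             # J/(g*K), specific heat of water

delta_T = heat_joules / (m_water * c_water)
print(f"temperature rise: {delta_T:.0f} K")
```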

Curie and Laborde had quantified the capacity of radium to generate heat on a scale which was far beyond that known for any chemical reaction. And this heat was continuously produced at a constant rate, unaffected by temperature, pressure, light, magnetism, electricity or any other agency under human control.

The scientific world was astonished. This phenomenon seemed to defy the laws of thermodynamics and the question was immediately raised: Where was all this energy coming from?

**Speculation and insight**

In 1903, little was known about the radiation emitted by radioactive substances and even less about the atoms emitting it. The air-ionizing emissions had been grouped into three categories according to their penetrating abilities and deflection by a magnetic field, but the nature of the atom – with its nucleus and orbiting electrons – was a mystery yet to be unveiled.

Radioactivity had been discovered by Henri Becquerel as an accidental by-product of his main area of interest, optical luminescence – which is the emission of light of certain wavelengths following the absorption of light of other wavelengths. By association luminescence was seen as a possible explanation of radioactivity, that radioactive substances might be absorbing invisible cosmic energy and re-emitting it as ionizing radiation. But no progress was made on identifying a cosmic source.

Meanwhile, from her detailed analytical work that she began in 1898, Marie Curie had discovered that uranium’s radioactivity was independent of its physical state or its chemical combinations. She reasoned that radioactivity must be an atomic property. This was a crucial insight, which directed thinking towards the idea of conversion of mass into energy as an explanation of the continuous and prodigious production of heat by radium that Pierre Curie and Albert Laborde had observed.

One of the major theories in physics at this time was electromagnetic theory. Maxwell’s equations predicted that mass and energy should be mathematically related to each other, and it was by following this line of thought that Frederick Soddy, previously Ernest Rutherford’s collaborator in Canada, came to the conclusion that radium’s energy was obtained at the expense of its mass.

Writing in the very first Annual Report on the Progress of Chemistry, published by the Royal Society of Chemistry in 1904, Soddy said this:

*” … the products of the disintegration of radium must possess a total mass less than that originally possessed by the radium, and a part of the energy evolved must be considered as being derived from the change of a part of the mass into energy.”*

**– – – –**

**A different starting point**

While Pierre Curie and Albert Laborde were conducting their radium experiment in Rue Cuvier, Paris, Albert Einstein – a naturalized Swiss citizen who had recently completed his studies at the Federal Polytechnic in Zurich – was working as a clerk at the Patent Office in Bern. Much of his work related to questions about signal transmission and time synchronization, and this may have influenced his own thoughts, since both of these issues feature prominently in the conceptual thinking that led Einstein to his theory of special relativity, submitted in a paper entitled *Zur Elektrodynamik bewegter Körper* (On the electrodynamics of moving bodies) to *Annalen der Physik* on Friday 30th June 1905.

On the basis of electromagnetic theory, supplemented by the principle of relativity (in the restricted sense) and the principle of the constancy of the velocity of light contained in Maxwell’s equations, Einstein proves Doppler’s principle by demonstrating the following:

*Ist ein Beobachter relativ zu einer unendlich fernen Lichtquelle von der Frequenz ν mit der Geschwindigkeit v derart bewegt, daß die Verbindungslinie “Lichtquelle-Beobachter” mit der auf ein relativ zur Lichtquelle ruhendes Koordinatensystem bezogenen Geschwindigkeit des Beobachters den Winkel φ bildet, so ist die von dem Beobachter wahrgenommene Frequenz ν’ des Lichtes durch die Gleichung gegeben:*

*If an observer is moving with velocity v relatively to an infinitely distant light source of frequency ν, in such a way that the connecting “source-observer” line makes the angle φ with the velocity of the observer referred to a system of co-ordinates which is at rest relatively to the source of light, the frequency ν’ of the light perceived by the observer is given by:*

ν′ = ν (1 – cos φ · v/V) / √(1 – v^{2}/V^{2})

where Einstein uses V (not c) to represent the velocity of light. He then finds that both the frequency and energy (E) of a light packet (cf. E=hν) vary with the velocity of the observer in accordance with the same law:

E′/E = (1 – cos φ · v/V) / √(1 – v^{2}/V^{2})

It was to this equation Einstein returned in a paper entitled *Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?* (Does the inertia of a Body depend on its Energy Content?) submitted to *Annalen der Physik* on Wednesday 27th September 1905.

**– – – –**

**Mass-energy equivalence**

Einstein’s paper of September 1905 – the last of the famous set published in *Annalen der Physik* in that memorable year – is less than three pages long and constitutes little more than a footnote to the preceding 30-page relativity paper. Yet despite its brevity, it is a difficult and troublesome work over which Einstein brooded for some years.

The paper describes a thought experiment in which a body sends out a light packet in one direction, and simultaneously another light packet of equal energy in the opposite direction. The energy of the body before and after the light emission is determined in relation to two systems of co-ordinates, one at rest relative to the body (where the before-and-after energies are E_{0} and E_{1}) and one in uniform parallel translation at velocity v (where the before-and-after energies are H_{0} and H_{1}).

Einstein applies the law of conservation of energy, the principle of relativity and the above-mentioned energy equation to arrive at the following result for the rest frame and the frame in motion relative to the body, the light energy being represented by a capital L:

E_{0} = E_{1} + ½L + ½L = E_{1} + L

H_{0} = H_{1} + ½L(1 – cos φ · v/V)/√(1 – v^{2}/V^{2}) + ½L(1 + cos φ · v/V)/√(1 – v^{2}/V^{2}) = H_{1} + L/√(1 – v^{2}/V^{2})

At this point, things start getting a little tricky. Einstein subtracts the rest frame energies from the moving frame energies for both the before-emission and after-emission cases, and then subtracts these differences:

(H_{0} – E_{0}) – (H_{1} – E_{1}) = L{1/√(1 – v^{2}/V^{2}) – 1}

These differences represent the before-emission kinetic energy (K_{0}) and after-emission kinetic energy (K_{1}) with respect to the moving frame

K_{0} – K_{1} = L{1/√(1 – v^{2}/V^{2}) – 1}

Since the right hand side is a positive quantity, the kinetic energy of the body diminishes as a result of the emission of light, even though its velocity v remains constant. To elucidate, Einstein performs a binomial expansion on the first term in the braces, although he makes no mention of the procedure; nor does he show the math. So this next bit is my own contribution:

Let (v/V)^{2} = x

The appropriate form of the binomial expansion is

(1 – x)^{–n} = 1 + nx + n(n+1)x^{2}/2! + …

Setting x = v^{2}/V^{2} and n = ½

(1 – v^{2}/V^{2})^{–½} = 1 + ½(v^{2}/V^{2}) + ⅜(v^{4}/V^{4}) + …

The contents of the braces in the kinetic energy expression thus become

1/√(1 – v^{2}/V^{2}) – 1 = ½(v^{2}/V^{2}) + ⅜(v^{4}/V^{4}) + …

Now back to Einstein. At this point he introduces a new condition into the scheme of things, namely that the velocity v of the system of co-ordinates moving with respect to the body is much less than the velocity of light V. We are in the classical world of v<<V, and so Einstein allows himself to neglect magnitudes of fourth and higher orders in the above expansion. Hence he arrives at

K_{0} – K_{1} = ½(L/V^{2})v^{2}

This equation gives the amount of kinetic energy lost by the body after emitting a quantity L of light energy. In the classical world of v<<V the kinetic energy of the body is also given by ½mv^{2}, and since the velocity v is the same before and after the light emission, Einstein is led to identify the loss of kinetic energy in his thought experiment with a loss of mass:

*Gibt ein Körper die Energie L in Form von Strahlung ab, so verkleinert sich seine Masse um L/V ^{2}. Hierbei ist es offenbar unwesentlich, daß die dem Körper entzogene Energie gerade in Energie der Strahlung übergeht, so daß wir zu der allgemeineren Folgerung geführt werden: Die Masse eines Körpers ist ein Maß für dessen Energie-inhalt.*

*If a body gives off the energy L in the form of radiation, its mass diminishes by L/V ^{2}. The fact that the energy withdrawn from the body becomes energy of radiation evidently makes no difference, so that we are led to the more general conclusion that: The mass of a body is a measure of its energy content.*
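Einstein’s neglect of fourth and higher orders in the expansion is numerically harmless at any sub-relativistic speed. A quick check (again my own contribution, not in the paper) comparing the exact braces term with its leading-order approximation, at a speed of 1% of the velocity of light:

```python
import math

# Exact term {1/sqrt(1 - v^2/V^2) - 1} versus the leading term v^2/(2V^2)
V = 2.998e8   # velocity of light in m/s (Einstein's V)
v = 3.0e6     # m/s, 1% of V, far faster than any macroscopic body

exact = 1 / math.sqrt(1 - (v / V) ** 2) - 1
approx = 0.5 * (v / V) ** 2

relative_error = abs(exact - approx) / exact
print(exact, approx, relative_error)
```

Even at this speed the neglected terms alter the result by less than one part in ten thousand.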

**– – – –**

**Testing the theory**

When Einstein wrote *Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?* in 1905, he was certainly aware of the phenomenon of continuous heat emission by radium salts as measured by Curie and Laborde, and confirmed by several others in 1903 and 1904. In fact he saw in this a possible means of putting relativity theory to the test:

*Es ist nicht ausgeschlossen, daß bei Körpern, deren Energieinhalt in hohem Maße veränderlich ist (z. B. bei den Radiumsalzen), eine Prüfung der Theorie gelingen wird.*

*It is not impossible that with bodies whose energy content is variable to a high degree (e.g. with radium salts) the theory may be successfully put to the test.*

In hindsight, it was unlikely that Einstein could have made this test work and he soon abandoned the idea. Not only would the mass difference have been extremely small, but the process of nuclear decay was also conceptually different from Einstein’s thought experiment. In Curie and Laborde’s calorimeter, the energy emitted by the body (radium nucleus) was not initially in the form of radiant energy; it was in the form of kinetic energy carried by an ejected alpha particle (helium nucleus) and a recoiling radon nucleus.

But Einstein had a knack of getting ahead of himself and ending up in the right place. The mass-energy equivalence relation he obtained from his imagined light-emitting body turned out to be valid also in relation to the kinetic energy of radioactive decay particles.

To see this in relation to Curie and Laborde’s experiment, consider the nuclear reaction equation

^{226}Ra → ^{222}Rn + ^{4}He + Q

Here Q is the mass difference in atomic mass units (u) required to balance the equation:

Mass of Ra = 226.02536 u

Mass of Rn (222.01753) + He (4.00260) = 226.02013 u

Mass difference = Q = 0.00523 u

The kinetic energy equivalent of 1 u is 931.5 MeV

So Q = 4.87 MeV

The kinetic energy is shared by the ejected alpha particle and the recoiling radon nucleus. Since the velocities are non-relativistic, the split can be calculated from the momentum conservation law and the classical expression for kinetic energy. Given the masses of the Rn and He nuclei, their respective velocities must be in the ratio 4.00260 to 222.01753. Writing the kinetic energy expression as ½(mv)·v and recognizing that the momentum mv has the same magnitude for both nuclei, the kinetic energies of the Rn and He nuclei must also be in the ratio 4.00260 to 222.01753. The kinetic energy carried by the alpha particle is therefore

4.87 x 222.01753/226.02013 = 4.78 MeV

This result has been confirmed by experiment.
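The calculation above can be bundled into a few lines of Python (a restatement of the arithmetic just given, nothing more):

```python
# Energy released in the decay Ra-226 -> Rn-222 + He-4, using the
# atomic masses quoted in the text
m_Ra = 226.02536    # u
m_Rn = 222.01753    # u
m_He = 4.00260      # u
u_to_MeV = 931.5    # kinetic energy equivalent of 1 u in MeV

Q = (m_Ra - (m_Rn + m_He)) * u_to_MeV   # total kinetic energy released

# Momentum conservation: kinetic energy divides in inverse ratio to mass,
# so the light alpha particle carries almost all of it
E_alpha = Q * m_Rn / (m_Rn + m_He)
E_Rn = Q * m_He / (m_Rn + m_He)

print(f"Q = {Q:.2f} MeV, alpha = {E_alpha:.2f} MeV, recoil = {E_Rn:.2f} MeV")
```

Carrying full precision gives an alpha energy of about 4.79 MeV; rounding Q to 4.87 MeV first, as in the text, gives 4.78 MeV. Either way the figure matches the measured alpha energy of Ra-226 decay.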

**– – – –**

**Links to original papers mentioned in this post**

Sur la chaleur dégagée spontanément par les sels de radium ; par MM. P. Curie et A. Laborde

Comptes Rendus, Tome 136, janvier – juin 1903

http://visualiseur.bnf.fr/CadresFenetre?O=NUMM-3091&I=673&M=tdm

Zur Elektrodynamik bewegter Körper; von A. Einstein

Annalen der Physik 17 (1905) 891-921

https://archive.org/stream/annalenderphysi108unkngoog#page/n1020/mode/2up

Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig? von A. Einstein

Annalen der Physik 18 (1905) 639-641

https://archive.org/stream/annalenderphysi143unkngoog#page/n707/mode/2up

**– – – –**

**Postscript**

In *Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?* Einstein arrived at a general statement on the dependence of inertia on energy (Δm = ΔE/V^{2}, in today’s language E = mc^{2}) from the consideration of a special case. He was deeply uncertain about this result, and returned to it in two further papers in 1906 and 1907, concluding that a general solution was not possible at that time. He had to wait a few years to discover he was right. I include links to these papers for the sake of completeness.

Das Prinzip von der Erhaltung der Schwerpunktsbewegung und die Trägheit der Energie; von A. Einstein

Annalen der Physik 20 (1906) 627-633

http://myweb.rz.uni-augsburg.de/~eckern/adp/history/einstein-papers/1906_20_627-633.pdf

Über die vom Relativitätsprinzip geforderte Trägheit der Energie; von A. Einstein

Annalen der Physik 23 (1907) 371-384

http://myweb.rz.uni-augsburg.de/~eckern/adp/history/einstein-papers/1907_23_371-384.pdf

**– – – –**

P Mander June 2017

Having read biographies of the Curies, Marie Curie’s doctoral thesis, and a number of scholarly articles about the radium phenomenon, I have come to the conclusion that the Marie Curie legend in popular culture tends to sideline her scientific achievements, focusing more on her imagined saintliness and perceived role as bringer of medical marvels than on her pioneering work as a physical chemist, in which her husband Pierre (above left) played an important facilitating role.

To my mind, the popular press of her day was largely responsible for the misconstruction of the Marie Curie legend, filling the public’s mind with the discovery of a miracle cure for cancer brought about by what it portrayed as an angelic young foreign-born mother slaving away in a dark shed in Paris for no wages.

The media frenzy around radium had consequences. Once radium production was established on a commercial scale, ignorant and unscrupulous marketers quickly morphed the Curies’ discovery of an element that glowed in the dark into revitalizing radium baths, radium drinking water, radium chocolate, radium toothpaste, radium cigarettes and even radium suppositories for restoring male potency while eradicating hemorrhoids:

The dreadful damage these products must have caused doesn’t bear thinking about. Sadly, the Curies themselves seemed carried along similarly radium-dazzled tracks. They failed to connect Pierre’s rapidly deteriorating health with exposure to radioactive emissions, while stoically accepting the painful damage to Marie’s hands as a price worth paying for the greater good they somehow imagined radium to represent.

The sensationalist aspects of the Curie legend, while an education in themselves, are however not the subject of this post. Physics and chemistry are the subjects here. When you look at Marie Curie as a physical chemist, and examine her contributions to the science of natural radioactivity, it is clear how crucial a role was played by the miracle machine designed and developed by Pierre Curie.

**– – – –**

**The Curie quartz electrometer apparatus**

The above diagram is taken from *Méthodes de Mesure employées en Radioactivité* published in Paris in 1911 by Albert Laborde, a graduate engineer who became Pierre Curie’s assistant in 1902. It shows the quartz electrometer apparatus developed by Pierre Curie and his brother Jacques for the precise measurement of very weak currents (of the order of tenths of picoamperes) following their discovery of piezoelectricity in 1880. It was this discovery that prompted the brothers Curie to build a calibrated electrostatic charge generator using a thin quartz lamella (center) to compensate and thus measure the leakage current from a charged capacitor (left) using a quadrant electrometer (right).

This apparatus was later adapted by Pierre Curie to allow accurate quantification of the tiny leakage currents produced in an ionization chamber by samples of radioactive material.

This is the experimental set-up that Marie Curie can be seen using in the header photograph, which dates from 1898.

**– – – –**

**Amazing coincidences**

When you consider the train of coincidences that led to Marie Curie’s choice of subject for her doctorate (*Recherches sur les substances radioactives*) it is nothing short of amazing. At the time she was looking around for a suitable topic, five years after having journeyed from Poland to Paris to enroll as a Sorbonne student, Henri Becquerel had just accidentally discovered mysterious rays emanating from uranium which had the property of weakly ionizing air. This was in 1896. Just a year previously, Marie had married Pierre Curie who happened to possess the one instrument capable of accurately measuring small ionization currents, following his discovery of piezoelectricity sixteen years earlier.

Because uranic rays were a new phenomenon, Marie was saved the task of first researching the topic, which otherwise would have entailed reading a lot of academic papers in unfamiliar French. This saving of time and effort attracted her to choose to study Becquerel’s uranic rays, something she admitted in later life. Furthermore she had no competition, since Becquerel had shown little interest in pursuing his original finding – the big news at the time was X rays, discovered by Wilhelm Röntgen in 1895. No fewer than 1044 papers on X rays were published in 1896, the year Becquerel first announced his discovery. Not surprisingly, hardly anyone took notice of uranic rays. Marie Curie had the field to herself.

**– – – –**

**The weight, the watch and the light spot**

This magnified portion of the header image shows Marie Curie as she sits at the quartz electrometer apparatus. Her right hand can be seen holding an analytical balance weight in a controlled manner while in her left hand the edge of a stopwatch can be seen. Her eyes are looking fixedly at a horizontal measuring scale above a light source (square hole) mounted on a wooden pedestal.

The light source is shining a beam onto the mirror of a quadrant electrometer out of view to the left. The light spot is reflected onto the horizontal scale (cf. diagram above) and Marie is endeavoring to keep the light spot stationary. She does this by gradually releasing the weight which is attached to the quartz lamella, thereby generating charge to compensate the ionization current produced by the radioactive sample in the ionization chamber also out of view to the left. The entire process of weight release is timed by a stopwatch. Once the weight is fully released the watch is stopped. The weight generates a specific amount of charge Q* on the quartz lamella during the measured time T. Hence Q/T is equal to the ionization current, which is directly proportional to the intensity of the ionizing radiation emitted by the sample, or to use the term Marie Curie coined, its radioactivity.

*The amount of charge Q is calculated from Q = W × K × L/B where W is the applied weight, K is the quartz specific constant, L is the lamella length and B is the lamella thickness.
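The footnote formula can be turned into a numerical sketch. Every value below is illustrative, chosen only to land in the tenths-of-picoamperes range mentioned earlier; K is taken to be of the order of the piezoelectric modulus of quartz, about 2.3 pC/N:

```python
# Charge from the quartz lamella, Q = W * K * L / B, and the compensated
# ionization current I = Q / T. All values here are illustrative.
W = 0.00981   # applied weight as a force in newtons (a 1 g mass)
K = 2.3e-12   # quartz piezoelectric modulus in C/N (order of magnitude)
L = 0.10      # lamella length in metres (hypothetical)
B = 0.0005    # lamella thickness in metres (hypothetical)
T = 30.0      # timed weight-release period in seconds (hypothetical)

Q = W * K * L / B   # charge generated across the lamella, coulombs
I = Q / T           # ionization current balanced by the lamella, amperes

print(f"Q = {Q:.2e} C, I = {I:.2e} A")
```

With these numbers the balanced current comes out at about 1.5 × 10^-13 A – tenths of a picoampere, the regime the apparatus was designed for.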

**– – – –**

**Thesis**

On Thursday 25th June 1903, at La Faculté des Sciences de Paris, Marie Curie presented her doctoral thesis to the examination committee, two of whose members were later to become Nobel laureates. The committee was impressed; in fact it expressed the view that her findings represented the greatest scientific contribution ever made in a doctoral thesis.

At the outset, Curie coined a new term – radioactivity – to describe the ionizing radiation emitted by the uranium compounds studied by Henri Becquerel. She announced her discovery that the element thorium also displays radioactivity. And she presented a method, using the quartz electrometer apparatus developed by Jacques and Pierre Curie, by which the intensity of radioactive emissions could be precisely quantified and expressed as ionization currents. This was a game-changing advance on the essentially qualitative methods that had been used hitherto, e.g. electroscopes and photographic plates.

As one would expect, Curie began her experimental work with a systematic study of uranium and its compounds, measuring and tabulating their ionization currents. There was a considerable range from the largest to the smallest currents, and within the limits of experimental error it was evident that the ionization currents were proportional to the amounts of uranium present in the sample. The same was true for thorium.

From the chemist’s perspective this was a puzzling result. The properties of chemical compounds of the same element generally depend on what it is compounded with and the arrangement of atoms in the molecule. Yet here was a very different finding – the radioactivity Curie measured was independent of compounding or molecular structure.

Curie drew the conclusion that radioactivity was a property of the atom – *une propriété atomique* she called it. She wasn’t referring to the uranium atom or the thorium atom, but to the atom as a generalized material unit with an implied interior from which radioactive emissions issued. That is a profound conception, with which Marie Curie made a significant contribution to the advancement of physics.

And at this point in her thesis she hadn’t even mentioned radium.

**– – – –**

**New elements**

For the next part of her thesis, Marie Curie turned her attention to the study of uranium-containing minerals, one of which was the mineral pictured above. Today we call it uraninite but in Curie’s day it was called pitchblende. The sample she obtained was from a uranium mine near the town of Joachimsthal in Austria, now Jáchymov in the Czech Republic. She measured its ionization current and found it to be considerably higher than its uranium content warranted. If her radioactivity hypothesis was correct, there was only one explanation: pitchblende contained atoms that emitted much more intense radiation than uranium atoms, which meant that another radioactive substance must be present in the ore. Curie now had the task of finding it, and was joined in this quest by her husband Pierre and the chemist Gustave Bémont.

The quartz electrometer demonstrated its value yet again, since the various fractions derived from the pitchblende sample during chemical analysis could be tested for radioactivity. In this way, the radioactivity was followed to two fractions: one containing the post-transition metal bismuth* and another containing the alkaline earth metal element barium. The Curies announced their findings in July 1898, stating their belief that these fractions contained two previously unknown metal elements, and suggesting that if the existence of these metals were confirmed, the bismuth-like element should be called polonium and the barium-like element radium.

**unknown to the Curies, the uranium decay series actually produces two radioisotopes of bismuth along with the isotopes of polonium, so the presence of radioactivity in this fraction did not solely indicate the presence of a new element.*

**– – – –**

**The shed**

The heroic work which made Marie Curie a legend took place in a shed at the back of a Grande École* in Rue Vauquelin. To produce sufficient quantities to isolate the new elements and determine their atomic weights, tons of pitchblende were needed. This is because the maximum amounts of radium and polonium that can coexist in secular equilibrium with uranium are proportional to their respective half-lives, which are minute compared with that of uranium. This fact and the limited human resources available rendered any attempt to isolate polonium impossible, and the situation with radium was not much better. Even at a yield of 100%, extracting 1 gram of radium requires a quantity of uranium ore containing 3 metric tons of elemental uranium. In the primitive conditions of the shed, obtaining a gram of radium meant processing 8 or 9 tons of uranium ore. One can only wonder at how Marie Curie found the physical and mental strength for such an arduous task.

*At the time, it was called *École supérieure de physique et de chimie industrielles de la ville de Paris*. Today it is called *ESPCI Paris*.

**– – – –**

**Did she deserve two Nobel Prizes?**

Marie Curie was awarded a quarter of the Nobel Prize for Physics in 1903 for her work on radioactivity. In 1911 she was the sole recipient of the Nobel Prize for Chemistry, awarded for the discovery of radium and polonium.

There can be no doubt about her credentials for the 1903 award, but some biographers have questioned whether the 1911 Prize was deserved, claiming that the discoveries of radium and polonium were part of the reason for the first prize.

As described in this post, the experimental evidence from which Marie Curie reasoned that radioactivity is an atomic property came solely from her experiments with uranium and thorium. Neither radium nor polonium had anything to do with it. On these grounds the claims of those biographers can be rejected.

Which leaves the question of under what circumstances the discovery of a new element qualifies for a Nobel Prize in chemistry. Clearly the discovery of a naturally radioactive element is not sufficient, otherwise Marguerite Perey – who worked as Marie Curie’s lab assistant and discovered francium in 1939 – would have qualified. Other aspects of the discovery need to be taken into account, and in 1911 there were many such aspects to Marie Curie’s discovery of radium and polonium, and the isolation of radium.

Reading the award citation, what comes across to me – albeit between the lines – is a recognition of the monumental personal effort and dedication involved in the discovery and characterization of these remarkable elements that led to the modern science of nuclear physics.

**– – – –**

P Mander June 2017

A thermodynamic system doesn’t have to be big. Although thermodynamics was originally concerned with very large objects like steam engines for pumping out coal mines, thermodynamic thinking can equally well be applied to very small systems consisting of say, just a few atoms.

Of course, we know that very small systems play by different rules – namely quantum rules – but that’s ok. The rules are known and can be applied. So let’s imagine that our thermodynamic system is an idealized solid consisting of three atoms, each distinguishable from the others by its unique position in space, and each able to perform simple harmonic oscillations independently of the others. At the absolute zero of temperature, the system will have no thermal energy, one microstate and zero entropy, with each atom in its vibrational ground state.

Harmonic motion is quantized, such that if the energy of the ground state is taken as zero and the energy of the first excited state as ε, then 2ε is the energy of the second excited state, 3ε is the energy of the third excited state, and so on. Suppose that from its thermal surroundings our 3-atom system absorbs one unit of energy ε, sufficient to set one of the atoms oscillating. Clearly, one unit of energy can be distributed among three atoms in three different ways – 100, 010, 001 – or in more compact notation [100|3].

Now let’s consider 2ε of absorbed energy. Our system can do this in two ways, either by promoting one oscillator to its second excited state, or two oscillators to their first excited state. Each of these energy distributions can be achieved in three ways, which we can write [200|3], [110|3]. For 3ε of absorbed energy, there are three distributions: [300|3], [210|6], [111|1].

Summarizing the above information

| Energy E (in units of ε) | Total microstates W | Ratio of successive W’s |
|---|---|---|
| 0 | 1 | – |
| 1 | 3 | 3 |
| 2 | 6 | 2 |
| 3 | 10 | 1⅔ |

The summary shows that as E increases, so does W. This is to be expected, since as W increases, the entropy S (= k log W) increases. In other words E and S increase or decrease together; the ratio ∂E/∂S is always positive. Since ∂E/∂S = T, the finding that E and S increase or decrease together is equivalent to saying that the absolute temperature of the system is always positive.
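The counting above can be checked by brute force. The following is a minimal Python sketch (not part of the original post) that enumerates every assignment of quanta to N distinguishable oscillators and counts those summing to E:

```python
# Brute-force check of the microstate counts: enumerate all ways
# N distinguishable oscillators can hold 0..E quanta, and keep
# those assignments whose quanta sum to exactly E.
from itertools import product

def count_microstates(N, E):
    """Count assignments of quanta to N oscillators summing to E."""
    return sum(1 for state in product(range(E + 1), repeat=N)
               if sum(state) == E)

for E in range(4):
    print(E, count_microstates(3, E))  # W = 1, 3, 6, 10 for E = 0..3
```

The printed values reproduce the W column of the table.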

**– – – –**

**Adding an extra particle**

It is instructive to compare the distribution of energy among three oscillators (N = 3)*

E = 0: [000|1]

E = 1: [100|3]

E = 2: [200|3], [110|3]

E = 3: [300|3], [210|6], [111|1]

with the distribution among four oscillators (N = 4)*

E = 0: [0000|1]

E = 1: [1000|4]

E = 2: [2000|4], [1100|6]

E = 3: [3000|4], [2100|12], [1110|4]

*For any single distribution among N oscillators where n_{0}, n_{1}, n_{2} … represent the number of oscillators in the ground state, first excited state, second excited state etc, the number of microstates is given by

W = N!/(n_{0}!n_{1}!n_{2}! …)

It is understood that 0! = 1. Derivation of the formula is given in Appendix I.

For both the 3-oscillator and 4-oscillator systems, the first excited state is never less populated than the second, and the second excited state is never less populated than the third. Population is graded downward: the ratios satisfy n_{1}/n_{0} > n_{2}/n_{1} > n_{3}/n_{2}, and each ratio is less than unity.

Example calculations for N = 4, E = 3:

{3000}: W = 4!/(3!0!0!1!) = 4

{2100}: W = 4!/(2!1!1!0!) = 12

{1110}: W = 4!/(1!3!0!0!) = 4
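The multinomial formula can also be checked numerically. Here is a short Python sketch (illustrative, not from the original post); the occupation lists correspond to the N = 4, E = 3 distributions {3000}, {2100} and {1110}:

```python
# Numerical check of W = N!/(n0! n1! n2! ...), where occupation[i]
# is the number of oscillators sitting in the i-th energy level.
from math import factorial

def microstates(occupation):
    """Microstates for one energy distribution among N oscillators."""
    W = factorial(sum(occupation))      # N! in the numerator
    for n in occupation:
        W //= factorial(n)              # divide by each n_i!
    return W

print(microstates([3, 0, 0, 1]))  # {3000}: n0=3, n3=1 -> 4
print(microstates([2, 1, 1, 0]))  # {2100}: n0=2, n1=1, n2=1 -> 12
print(microstates([1, 3, 0, 0]))  # {1110}: n0=1, n1=3 -> 4
```

The three results sum to 20, the total number of microstates for N = 4, E = 3.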

Comparisons can also be made of a single ratio across distributions and between systems. For example the values of n_{1}/n_{0} for E = 0, 1, 2, 3 are

(N = 4) : 0, ⅓, ½, ⅗

(N = 3) : 0, ½, ⅔, ¾

Since for a macroscopic system

n_{1}/n_{0} = e^(–ε/kT)

this implies that for a given value of E the 4-oscillator system is colder than the 3-oscillator system. The same conclusion can be reached by looking at the ratio of successive W’s for the 4-oscillator system sharing 0 to 3 units of thermal energy

| Energy E (in units of ε) | Total microstates W | Ratio of successive W’s |
|---|---|---|
| 0 | 1 | – |
| 1 | 4 | 4 |
| 2 | 10 | 2½ |
| 3 | 20 | 2 |

For the 4-oscillator system the ratios of successive W’s are larger than the corresponding ratios for the 3-oscillator system. The logarithms of these ratios are inversely proportional to the absolute temperature, so the larger the ratio the lower the temperature.
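This relation can be made concrete: since ΔS = k ln(W_{n+1}/W_{n}) for one added quantum ε, a temperature estimate is T ≈ ε/(k ln(W_{n+1}/W_{n})). In reduced units where ε/k = 1, a Python sketch (illustrative only) shows the 4-oscillator temperatures coming out lower at every step:

```python
# Temperature from successive W ratios: T ~ epsilon / (k ln(W(n+1)/W(n))),
# evaluated in reduced units epsilon/k = 1 for the 3- and 4-oscillator systems.
from math import comb, log

def W(N, n):
    """Microstates for N oscillators sharing n quanta: C(n+N-1, N-1)."""
    return comb(n + N - 1, N - 1)

for N in (3, 4):
    temps = [round(1 / log(W(N, n + 1) / W(N, n)), 3) for n in range(3)]
    print(N, temps)
```

Each N = 4 entry is smaller than the corresponding N = 3 entry, confirming that the larger system is the colder one at equal energy.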

**– – – –**

**Finite differences**

The differences between successive W’s for a 4-oscillator system are the values for a 3-oscillator system

W for (N = 4) : 1, 4, 10, 20

Differences : 3, 6, 10

Likewise, the differences between successive W’s for a 3-oscillator system give the values for a 2-oscillator system

W for (N = 3) : 1, 3, 6, 10

Differences : 2, 3, 4

Likewise, the differences between successive W’s for a 2-oscillator system give the values for a 1-oscillator system

W for (N = 2) : 1, 2, 3, 4

Differences : 1, 1, 1

This implies that W for the 4-particle system can be expressed as a cubic in n, and that W for the 3-particle system can be expressed as a quadratic in n etc. Evaluation of coefficients leads to the following formula progression

For N = 1

W = 1

For N = 2

W = n + 1

For N = 3

W = (n + 1)(n + 2)/2!

For N = 4

W = (n + 1)(n + 2)(n + 3)/3!

It appears that in general

W = (n + N – 1)! / [n! (N – 1)!]

Since n = E/ε and ε = hν, the above equation can be written

W = (E/hν + N – 1)! / [(E/hν)! (N – 1)!]

For a system of oscillators this formula describes the functional dependence of W microstates on the size of the particle ensemble (N), its energy (E), the mechanical frequency of its oscillators (ν) and Planck’s constant (h).
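The general count is simply the binomial coefficient C(n + N – 1, N – 1), which can be checked against the tabulated values in a few lines of Python (an illustrative sketch, not part of the original post):

```python
# Closed-form microstate count: W = (n + N - 1)!/(n!(N - 1)!), i.e. the
# binomial coefficient C(n + N - 1, N - 1), checked against the tables.
from math import comb

def W(N, n):
    """Microstates for N oscillators sharing n = E/epsilon quanta."""
    return comb(n + N - 1, N - 1)

print([W(3, n) for n in range(4)])  # -> [1, 3, 6, 10]
print([W(4, n) for n in range(4)])  # -> [1, 4, 10, 20]
```

Both sequences match the W columns obtained earlier by direct enumeration.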

**– – – –**

**Appendix I**

**Formula to be derived**

For any single distribution among N oscillators where n_{0}, n_{1}, n_{2} … represent the number of oscillators in the ground state, first excited state, second excited state etc, the number of microstates is given by

W = N!/(n_{0}!n_{1}!n_{2}! …)

**Derivation**

In combinatorial analysis, the above comes into the category of permutations of sets with the possible occurrence of indistinguishable elements.

Consider the distribution of 3 units of energy across 4 oscillators such that one oscillator has two units, another has the remaining one unit, and the other two oscillators are in the ground state: {2100}

If each of the four numbers was distinct, there would be 4! possible ways to arrange them. But the two zeros are indistinguishable, so the number of distinct arrangements is reduced by a factor of 2!. The number of ways to arrange {2100} is therefore 4!/2! = 12.

The energies 1 and 2 each occur only once in the above set, and the energy 3 does not occur at all. This does not reduce the number of possible ways to arrange {2100}, since 1! = 1 and 0! = 1. Their presence in the denominator has no effect, but for completeness we can write

4!/2!1!1!0!

to compute the number of microstates for the single distribution E = 3, N = 4, {2100} where n_{0} = 2, n_{1} = 1, n_{2} = 1 and n_{3} = 0.

In the general case, the formula for the number of microstates for a single energy distribution of E among N oscillators is

W = N!/(n_{0}!n_{1}!n_{2}! … n_{E}!)

where the terms in the denominator are as defined above.

**– – – –**

P Mander April 2016

The Arrhenius equation explains why chemical reactions generally go much faster when you heat them up. The equation was actually first given by the Dutch physical chemist J. H. van ‘t Hoff in 1884, but it was the Swedish physical chemist Svante Arrhenius (pictured above) who in 1889 interpreted the equation in terms of activation energy, thereby opening up an important new dimension to the study of reaction rates.

**– – – –**

**Temperature and reaction rate**

The systematic study of chemical kinetics can be said to have begun in 1850 with Ludwig Wilhelmy’s pioneering work on the kinetics of sucrose inversion. Right from the start, it was realized that reaction rates showed an appreciable dependence on temperature, but it took four decades before real progress was made towards quantitative understanding of the phenomenon.

In 1889, Arrhenius penned a classic paper in which he considered eight sets of published data on the effect of temperature on reaction rates. In each case he showed that the rate constant k could be represented as an explicit function of the absolute temperature:

k = A e^(–C/T)

where both A and C are constants for the particular reaction taking place at temperature T. In his paper, Arrhenius listed the eight sets of published data together with the equations put forward by their respective authors to express the temperature dependence of the rate constant. In one case, the equation – stated in logarithmic form – was identical to that proposed by Arrhenius

ln k = –a/T + b

where T is the absolute temperature and a and b are constants. This equation was published five years before Arrhenius’ paper in a book entitled *Études de Dynamique Chimique*. The author was J. H. van ‘t Hoff.

**– – – –**

**Dynamic equilibrium**

In the *Études* of 1884, van ‘t Hoff compiled a contemporary encyclopædia of chemical kinetics. It is an extraordinary work, containing all that was previously known as well as a great deal that was entirely new. At the start of the section on chemical equilibrium he states (without proof) the thermodynamic equation, sometimes called the van ‘t Hoff isochore, which quantifies the displacement of equilibrium with temperature. In modern notation it reads:

d(ln K_{c})/dT = ΔH/RT^2

where K_{c} is the equilibrium constant expressed in terms of concentrations, ΔH is the heat of reaction and T is the absolute temperature. In a footnote to this famous and thermodynamically exact equation, van ‘t Hoff builds a bridge from thermodynamics to kinetics by advancing the idea that a chemical reaction can take place in both directions, and that the thermodynamic equilibrium constant K_{c} is in fact the quotient of the kinetic velocity constants for the forward (k_{1}) and reverse (k_{-1}) reactions

K_{c} = k_{1}/k_{-1}

Substituting this quotient in the original equation leads immediately to

d(ln k_{1})/dT – d(ln k_{-1})/dT = ΔH/RT^2

van ‘t Hoff then argues that the rate constants will be influenced by two different energy terms E_{1} and E_{-1}, and splits the above into two equations

d(ln k_{1})/dT = E_{1}/RT^2  and  d(ln k_{-1})/dT = E_{-1}/RT^2

where the two energies are such that E_{1} – E_{-1} = ΔH

In the *Études*, van ‘t Hoff recognized that ΔH might or might not be temperature independent, and considered both possibilities. In the former case, he could integrate the equation to give the solution

ln k = –E/RT + constant, or equivalently k = A e^(–E/RT)

From a starting point in thermodynamics, van ‘t Hoff engineered this kinetic equation through a characteristically self-assured thought process. And it was this equation that the equally self-assured Svante Arrhenius seized upon for his own purposes, expanding its application to explain the results of other researchers, and enriching it with his own idea for how the equation should be interpreted.

**– – – –**

**Activation energy**

It is a well-known result of the kinetic theory of gases that the average kinetic energy per mole of gas (E_{K}) is given by

E_{K} = (3/2)RT

Since the only variable on the RHS is the absolute temperature T, we can conclude that doubling the temperature will double the average kinetic energy of the molecules. This set Arrhenius thinking, because the eight sets of published data in his 1889 paper showed that the effect of temperature on the rates of chemical processes was generally much too large to be explained on the basis of how temperature affects the average kinetic energy of the molecules.

The clue to solving this mystery was provided by James Clerk Maxwell, who in 1860 had worked out the distribution of molecular velocities from the laws of probability. Maxwell’s distribution law enables the fraction of molecules possessing a kinetic energy exceeding some arbitrary value E to be calculated.

It is convenient to consider the distribution of molecular velocities in two dimensions instead of three, since the distribution law so obtained gives very similar results and is much simpler to apply. At absolute temperature T, the proportion of molecules for which the kinetic energy exceeds E is given by

n/N = e^(–E/RT)

where n is the number of molecules with kinetic energy greater than E, and N is the total number of molecules. This is exactly the exponential expression which occurs in the velocity constant equation derived by van ‘t Hoff from thermodynamic principles, which Arrhenius showed could be fitted to temperature dependence data from several published sources.

Compared with the average kinetic energy calculation, this exponential expression yields very different results. At 1000 K, the fraction of molecules having an energy greater than, say, 80 kJ mol^-1 is 0.0000662, while at 2000 K the fraction is 0.00814. So the temperature change which doubles the number of molecules with the average energy increases the number of molecules with E > 80 kJ mol^-1 by a factor of more than a hundred.
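These figures are easy to reproduce. A short Python sketch (illustrative only; R and the 80 kJ threshold are as quoted above) evaluates the exponential fraction at both temperatures:

```python
# Fraction of molecules with kinetic energy above E (two-dimensional
# Maxwell distribution): n/N = exp(-E/RT), at 1000 K and 2000 K.
from math import exp

R = 8.314    # gas constant, J K^-1 mol^-1
E = 80_000   # threshold energy, J mol^-1

for T in (1000, 2000):
    print(T, exp(-E / (R * T)))  # ~6.6e-05 at 1000 K, ~8.1e-03 at 2000 K
```

Doubling the temperature multiplies the fraction by roughly 120, in line with the "factor of more than a hundred" quoted above.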

Here was the clue Arrhenius was seeking to explain why increased temperature had such a marked effect on reaction rate. He reasoned it was because molecules needed sufficiently more energy than the average – the activation energy E – to undergo reaction, and that the fraction of these molecules in the reaction mix was an exponential function of temperature.

**– – – –**

**The meaning of A**

But back to the Arrhenius equation

k = A e^(–E/RT)

A clue to the proper meaning of A is to note that e^(–E/RT) is dimensionless. The units of A are therefore the same as the units of k. But what are the units of k?

The answer depends on whether one’s interest area is kinetics or thermodynamics. In kinetics, the concentrations of chemical species are generally expressed as molar concentrations, and the units of the velocity constant k then depend on the overall order of the reaction, giving rise to a range of possibilities.

In thermodynamics however, the dimensions of k are uniform. This is because the chemical potential of reactants and products in any arbitrarily chosen state is expressed in terms of activity a, which is defined as a ratio in relation to a standard state and is therefore dimensionless.

When the arbitrarily chosen conditions represent those for equilibrium, the equilibrium constant K is expressed in terms of reactant (aA + bB + …) and product (mM + nN + …) activities

K = (a_{M}^m a_{N}^n … / a_{A}^a a_{B}^b …)_{e}

where the subscript e indicates that the activities are those for the system at equilibrium.

As students we often substitute molar concentrations for activities, since in many situations the activity of a chemical species is approximately proportional to its concentration. But if an equation is arrived at from consideration of the thermodynamic equilibrium constant K – as the Arrhenius equation was – it is important to remember that the associated concentration terms are strictly dimensionless and so the reaction rate, and therefore the velocity constant k, and therefore A, has the units of frequency (t^-1).

OK, so back again to the Arrhenius equation

k = A e^(–E/RT)

We have determined the dimensions of A; now let us turn our attention to the role of the dimensionless exponential factor. The values this term may take range between 0 and 1; specifically, when E = 0, e^(–E/RT) = 1. This allows us to assign a physical meaning to A, since when E = 0, A = k. We can think of A as the velocity constant when the activation energy is zero – in other words when each collision between reactant molecules results in a reaction taking place.

Since there are zillions of molecular collisions taking place every second just at room temperature, any reaction in these circumstances would be uber-explosive. So the exponential term can be seen as a modifier of A whose value reflects the range of reaction velocity from extremely slow at one end of the scale (high E/low T) to extremely fast at the other (low E/high T).

**– – – –**

P Mander September 2016

CarnotCycle is honored to announce that Rutgers, The State University of New Jersey, has included this thermodynamics blog on the reading list for students of physical chemistry. Founded in 1766, Rutgers is the eighth oldest college in the United States and is the largest institution for higher education in New Jersey.

CarnotCycle is committed to making topics in this area of science accessible to students worldwide. Thermodynamics has played – and continues to play – a major role in shaping our world. It can be a difficult subject, but time spent learning about thermodynamics is never wasted. It enriches knowledge and empowers the mind.

Link: http://andromeda.rutgers.edu/~huskey/345f17_lec.html

**– – – –**