In 17th century France, dice games were a popular and fashionable habit. All kinds of people played at dice – soldiers, sailors, socialites, aristocrats, celebrities … and professional gamblers. Every era in history has featured this latter group, often colourful characters with sharp minds living off their wits.

Such an individual was Antoine Gombaud (1607-1684), who had adopted the high-flown title of *Chevalier de Méré* and was known as a flamboyant big spender in gambling circles. He also had good connections with the leading intellectuals of his day, and was himself a notable salon theorist.

Professional gamblers have one overriding aim in life, which is to win. This is easier said than done, however, as gambling necessarily involves games of chance, and chance is a fickle creature. So the professional gambler has to proceed with caution until he finds a wager whose odds are in his favor. Then he brings big money to the table, plays long and hard, and walks away a richer man.

**Calculating success**

This was how Gombaud operated. But unlike most other gamblers, he did not rely solely on experience to show him which bets were favorable. He had an analytical turn of mind, and had started to work out on the basis of mathematical principle whether a certain game had favorable odds.

He first applied his thinking to a popular dice game in which players wagered on a six appearing in four throws of a die. He correctly reasoned that since each number on a six-sided die was equally likely to occur, the chance of getting a six on a single throw must be 1/6. He then considered the chance of getting a six if a die were thrown four times instead of once. He reasoned that the chance of success would be four times greater since each throw represented a separate opportunity for a six to occur, and he calculated that chance as 4 x 1/6 = 2/3. In other words, the odds of winning were favorable (i.e. >1/2).

Long before the law of large numbers was formulated, Gombaud seems to have intuitively understood that these favorable odds meant that although the outcome of an individual game could not be predicted, success would be assured if enough games were played.

Gombaud made piles of money out of this game, thereby cementing his belief in mathematical analysis as a means of identifying a winning bet. Buoyed by this success he extended the same reasoning to another game where he calculated that the odds of winning were favorable. But an unpleasant surprise was in store.

**Unexpected losses**

Gombaud’s new focus of attention was a dice game in which players wagered on getting a double six in twenty-four throws of two dice. He correctly reasoned that the chance of getting a double six in a single throw of two dice was 1/36. Then applying his formula he calculated the chance of success as 24 x 1/36 = 2/3, the same favorable odds as in the previous game.

Emboldened by this analysis, Gombaud brought a stack of money to the dice table to wager on getting double sixes. But his expectations did not materialise; in fact the more he played this game, the more his losses mounted. Gombaud simply could not understand it. Both games had exactly the same favorable odds. So why did he win at one and lose at the other?

Desperate for an explanation of his losses, he wrote in 1654 to one of the foremost thinkers of his time, Blaise Pascal (1623-1662), who in turn shared the news of *“De Méré’s paradox”* with the profoundly talented amateur mathematician Pierre Fermat (1607-1665). The two began a legendary correspondence, out of which the theory of probability was born and the paradox was solved.

**Fallacious reasoning**

The work of Pascal and Fermat revealed Gombaud’s mistake in thinking that the chances of success with n throws could be calculated by multiplying the chance of success for a single throw by n.

Taking the first game as an example, Gombaud thought that after two throws of the die the chance of success doubled from 1/6 to 1/3. But a chart of all 36 possible outcomes shows that the total number of favorable outcomes (shown in gold) is not 12, but 11.

Pascal and Fermat no doubt saw that the ratio of favorable outcomes to total outcomes after two throws could be written as

11/36 = 1 − (5/6)^2

and after three throws as

91/216 = 1 − (5/6)^3

and so on. Since 5/6 was the chance of failure in a single throw, and the exponent was the number of throws, the formula for the chance of getting *at least one success in n throws* could be generalised as

1 − q^n

where q is the chance of failure in a single throw. This was the correct formula for computing whether the odds were favorable or not. Contrast this with the formula Gombaud used

n x p, where p is the chance of success in a single throw

It is instructive to compare Gombaud’s incorrect formula with the correct one for the first game

Game 1 (n = 4, q = 5/6) — Gombaud: 4 x 1/6 = 0.667 | Correct: 1 − (5/6)^4 = 0.518

and the second game

Game 2 (n = 24, q = 35/36) — Gombaud: 24 x 1/36 = 0.667 | Correct: 1 − (35/36)^24 = 0.491

The correct figures make it clear why Gombaud won at the first game and lost at the second. They also show what a knife-edge situation it was. If the second game had been played with just one more throw (n = 25, 1 − q^n = 0.506) Gombaud would have won both games. There would have been no paradox to explain, and the genius minds of Pascal and Fermat might never have been applied to founding probability theory!
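The two formulas are easy to compare in a short script (a modern sketch, of course – nothing Gombaud had at his disposal):

```python
# Gombaud's (incorrect) linear rule versus the correct
# "at least one success in n throws" formula.

def gombaud_odds(p, n):
    # Gombaud's reasoning: n throws means n times the single-throw chance
    return n * p

def correct_odds(p, n):
    # Correct reasoning: 1 minus the chance of failing on all n throws
    return 1 - (1 - p) ** n

# Game 1: a six in 4 throws of one die (p = 1/6)
print(gombaud_odds(1/6, 4))    # 0.667 -> looks favorable
print(correct_odds(1/6, 4))    # ≈ 0.518 -> genuinely favorable

# Game 2: a double six in 24 throws of two dice (p = 1/36)
print(gombaud_odds(1/36, 24))  # 0.667 -> looks favorable
print(correct_odds(1/36, 24))  # ≈ 0.491 -> actually unfavorable

# A 25th throw would have tipped the second game in Gombaud's favor
print(correct_odds(1/36, 25))  # ≈ 0.506
```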

**– – – –**

**Probability profiles**

**Game 1**

**Game 2**

**– – – –**

P Mander November 2017

As shown in previous posts on the CarnotCycle blog, it is possible to compute dew point temperature and absolute humidity (defined as water vapor density in g/m^3) from ambient temperature and relative humidity. This adds value to the output of RH&T sensors like the DHT22 pictured above, and extends the range of useful parameters that can be displayed or toggled on temperature-humidity gauges employing these sensors.

Meteorological opinion* suggests that dew point temperature is a more dependable parameter than relative humidity for assessing climate comfort especially during summer, while absolute humidity quantifies water vapor in terms of mass per unit volume. In effect this added parameter turns an ordinary temperature-humidity gauge into a gas analyzer.

*https://www.weather.gov/arx/why_dewpoint_vs_humidity
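The conversion itself is straightforward. The sketch below uses a Magnus-type approximation for saturation vapor pressure; the coefficients shown are one common choice and may differ slightly from those used in the earlier CarnotCycle posts:

```python
from math import exp, log

# Magnus-type approximation for saturation vapor pressure over water.
# These coefficients are one common choice, not necessarily the ones
# used in the original posts.
A = 17.67      # dimensionless
B = 243.5      # degrees Celsius

def dew_point(t, rh):
    """Dew point in deg C from ambient temperature t (deg C) and RH (%)."""
    gamma = log(rh / 100) + A * t / (B + t)
    return B * gamma / (A - gamma)

def absolute_humidity(t, rh):
    """Water vapor density in g/m^3 via the ideal gas law."""
    p_sat = 6.112 * exp(A * t / (B + t))    # saturation pressure, hPa
    p_w = (rh / 100) * p_sat                # actual vapor pressure, hPa
    # rho = p * M_w / (R * T); with p in hPa this gives 216.7 * p / T(K)
    return 216.7 * p_w / (273.15 + t)

print(dew_point(20, 50))          # ≈ 9.3 deg C
print(absolute_humidity(20, 50))  # ≈ 8.6 g/m^3
```

At 100% relative humidity the dew point equals the ambient temperature, which makes a handy sanity check on the implementation.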

**– – – –**

**Hardware**

I used an Arduino Uno microcontroller board and a wired DHT22 sensor with data output to a 16×2 liquid crystal display. Circuit components are uncomplicated: a 10 kΩ potentiometer, 220 Ω resistor and a few jumper and breadboard wires are all that is needed; power is supplied by a 9V battery* after programming via USB.

*http://www.instructables.com/id/Powering-Arduino-with-a-Battery/

**– – – –**

**Circuitry**

I wired the LCD as per guidance on the Arduino website. The pot controls contrast on the LCD. The DHT22 was wired to take 5V from the breadboard power rail with sensor data routed to digital pin 7. The sensor version that I used (Adafruit AM2302) has a built-in 5.1 kΩ pull-up resistor.

**– – – –**

**Code**

The DHT22 has a sampling rate of 0.5 Hz which some regard as a weakness, but in the context of a temperature-humidity gauge the criticism is rather academic since it would serve no purpose to output data to the LCD at such a rapid rate. I set the display refresh to 30 seconds. Note the built-in option to display ambient temperature and dew point temperature in Celsius or Fahrenheit.

**– – – –**

**Experiment**

I used the unit to investigate the change in temperature and humidity parameters in a bathroom (enclosed volume 11.6 m^3) before and after operating the shower at a temperature of 40°C for about 5 minutes. The sensor was placed 60 cm above floor level at the midpoint of the room.

Here is the data display before the shower

and after the shower

The displayed data shows that bathroom temperature stayed constant during the experiment while the relative humidity increased markedly. This result could have been obtained with an ordinary temperature-humidity gauge, but the smart gauge gives additional information.

In contrast to the steady ambient temperature, the dew point temperature shows a sharp rise from a comfortable 11.8°C (53°F) to a humid 18.3°C (65°F). The absolute humidity data shows an even greater increase – a 50% hike in water vapor concentration from 10 to 15 grams per m^3 in a matter of minutes.

**– – – –**

© P Mander, June 2018

A couple of blocks down from the Metro station *Jussieu* in Paris’s 5th arrondissement lies Rue Cuvier, which runs along the north-western edge of the botanic gardens that house the Natural History Museum. The other side of the road is bordered by various institutes of the Sorbonne, notably UPMC (formerly Pierre and Marie Curie University).

The Curies have historical associations with a number of streets in the Latin Quarter, and Rue Cuvier in particular. Pierre Curie was born at No.16 and it was in a science faculty building in this street that the Curies conducted their fundamental research on radium between 1903 and 1914. The building still exists, shielded from public curiosity by a set of prison-style metal gates, and it was in this laboratory that the first pioneering research into what would later be recognized as nuclear energy was conducted in 1903.

Yet it was not the renowned husband-and-wife team which carried out this experiment. It was in fact Pierre Curie and his young graduate assistant Albert Laborde who did the work and reported it in Comptes Rendus in a note entitled *Sur la chaleur dégagée spontanément par les sels de radium* (On the spontaneous production of heat by radium salts). The note, which barely covers two pages, was published in March 1903.

The laboratory in Rue Cuvier where the Curies and Laborde worked was at No.12. Just across the street is No.57, which once housed the Applied Physics laboratory of the Natural History Museum. It was here in 1896 that Henri Becquerel serendipitously discovered the strange phenomenon of radioactivity.

Between that moment of discovery on one side of Rue Cuvier and Curie and Laborde’s remarkable experiment on the other, lay the years of backbreaking work in a shed in nearby Rue Vauquelin where the Curies, together with chemist Gustave Bémont, processed tons of waste from an Austrian uranium mine in order to extract a fraction of a gram* of the mysterious new element radium.

*The maximum amount of radium that can coexist with uranium is set by secular equilibrium, in which the ratio of radium to uranium atoms equals the ratio of their half-lives. This means that uranium ores can contain no more than about 1 atom of radium for every 2.8 million atoms of uranium.
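This footnote figure can be checked against published half-lives (roughly 1600 years for Ra-226 and 4.47 × 10^9 years for U-238):

```python
# Half-lives from modern measurements (years)
t_half_U238 = 4.468e9
t_half_Ra226 = 1600

# At secular equilibrium, N_Ra / N_U equals the ratio of the half-lives
ratio = t_half_Ra226 / t_half_U238
print(1 / ratio)    # ≈ 2.8 million U atoms per Ra atom
```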

**– – – –**

**The Curie – Laborde experiment**

Pierre Curie and Albert Laborde were the first to make an experimental determination of the heat produced by radium because they were the first to have enough radium-enriched material to make the experiment practicable. It was a close-run thing though. Ernest Rutherford and Frederick Soddy had been busy working on radioactivity at McGill University in Canada since 1900, but they were hampered by lack of access to radium and were using much weaker thorium preparations. This situation would quickly change however when concentrated radium samples became available from Friedrich Giesel in Germany. By the summer of 1903, Soddy (now at University College London) and Rutherford would have their hands on Giesel’s supply. But Curie and Laborde had a head start, and they turned their narrow time advantage to good account.

**Methodology**

To determine the heat produced by their radium preparation, they used two different approaches – a thermoelectric method, and an ice calorimeter method.

This diagram of their thermoelectric device, taken from Mme Curie’s *Traité de Radioactivité (1910), Tome II, p272*, unfortunately lacks an explanation of the key, but the set-up essentially comprises a test ampoule containing the chloride salt of radium-enriched barium and a control ampoule of pure barium chloride. These are marked A and A’. The ampoules are placed in the cavities of brass blocks enclosed in inverted Dewar flasks D, D’ with some unstated packing material to keep the ampoules from falling down. The flasks are enclosed in containers immersed in a further medium-filled container E supported in a space enclosed by a medium F, all of which was presumably designed to ensure a constant temperature environment. The key feature is C and C’ which are iron-constantan thermocouples, embedded in the brass cavities, with their associated circuitry.

The current produced by the Seebeck effect resulting from the temperature difference between C and C’ was measured by a galvanometer. The radium ampoule was then replaced by an ampoule containing a platinum filament through which was passed a current whose heating effect was sufficient to obtain the same temperature difference. The equivalent rate of heat production by the radium ampoule could then be calculated using Joule’s law.
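The substitution calculation is simple Joule-law arithmetic. The filament current and resistance below are hypothetical, chosen only to illustrate the conversion from electrical heating to calories per hour, not values reported by Curie and Laborde:

```python
# Hypothetical filament values, for illustration only
I = 0.050    # current through the platinum filament, amperes
R = 2.0      # filament resistance, ohms

P = I ** 2 * R                      # Joule's law: heat rate in watts
cal_per_hour = P * 3600 / 4.184     # convert J/s to cal/hour
print(P, cal_per_hour)
```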

The second method used was a Bunsen ice calorimeter, which was known to be capable of very exact measurements using only a small quantity of the test substance. For details of the operational principle of this calorimeter, the reader is referred to this link:

http://thewaythetruthandthelife.net/index/2_background/2-1_cosmological/physics/j9.htm

The above diagram of the Bunsen calorimeter is taken from Mme Curie’s *Traité de Radioactivité (1910), Tome II, p273*.

**Results**

For most of their experiments, Curie and Laborde used 1 gram of a radium-enriched barium chloride preparation, which liberated approximately 14 calories (59 joules) of heat per hour. It was estimated from radioactivity measurements – no doubt using the quartz electrometer instrumentation invented by Curie – that the gram of test substance contained about one sixth of a gram of radium.

Measurements were also made on a 0.08 gram sample of pure radium chloride. These yielded results of the same order of magnitude without being absolutely in agreement. Curie and Laborde made it clear in their note that these were pathfinding experiments and that their aim was solely to demonstrate the fact of continuous, spontaneous emission of heat by radium and to give an approximate magnitude for the phenomenon. They stated:

*» 1 g of radium emits a quantity of heat of approximately 100 calories (420 joules) per hour.*

In other words, a gram of radium emitted enough heat in an hour to raise the temperature of an equal weight of water from freezing point to boiling point. And the emission was continuous, hour after hour, year after year, without any detectable change in the source material.
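Scaling the enriched-sample measurement (14 calories per hour from a sample estimated to contain one sixth of a gram of radium) gives a per-gram figure of the same order as the 100 calories per hour that Curie and Laborde stated:

```python
heat_rate_sample = 14     # cal per hour from 1 g of enriched preparation
radium_fraction = 1 / 6   # estimated grams of radium in that sample

per_gram_radium = heat_rate_sample / radium_fraction
print(per_gram_radium)    # ≈ 84 cal per gram of radium per hour
```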

Curie and Laborde had quantified the capacity of radium to generate heat on a scale which was far beyond that known for any chemical reaction. And this heat was continuously produced at a constant rate, unaffected by temperature, pressure, light, magnetism, electricity or any other agency under human control.

The scientific world was astonished. This phenomenon seemed to defy the laws of thermodynamics and the question was immediately raised: Where was all this energy coming from?

**Speculation and insight**

In 1903, little was known about the radiation emitted by radioactive substances and even less about the atoms emitting them. The air-ionizing emissions had been grouped into three categories according to their penetrating abilities and deflection by a magnetic field, but the nature of the atom – with its nucleus and orbiting electrons – was a mystery yet to be unveiled.

Radioactivity had been discovered by Henri Becquerel as an accidental by-product of his main area of interest, optical luminescence – which is the emission of light of certain wavelengths following the absorption of light of other wavelengths. By association luminescence was seen as a possible explanation of radioactivity, that radioactive substances might be absorbing invisible cosmic energy and re-emitting it as ionizing radiation. But no progress was made on identifying a cosmic source.

Meanwhile, from her detailed analytical work that she began in 1898, Marie Curie had discovered that uranium’s radioactivity was independent of its physical state or its chemical combinations. She reasoned that radioactivity must be an atomic property. This was a crucial insight, which directed thinking towards the idea of conversion of mass into energy as an explanation of the continuous and prodigious production of heat by radium that Pierre Curie and Albert Laborde had observed.

One of the major theories in physics at this time was electromagnetic theory. Maxwell’s equations predicted that mass and energy should be mathematically related to each other, and it was by following this line of thought that Frederick Soddy, previously Ernest Rutherford’s collaborator in Canada, came to the conclusion that radium’s energy was obtained at the expense of its mass.

Writing in the very first Annual Report on the Progress of Chemistry, published by the Royal Society of Chemistry in 1904, Soddy said this:

*” … the products of the disintegration of radium must possess a total mass less than that originally possessed by the radium, and a part of the energy evolved must be considered as being derived from the change of a part of the mass into energy.”*

**– – – –**

**A different starting point**

While Pierre Curie and Albert Laborde were conducting their radium experiment in Rue Cuvier, Paris, Albert Einstein – a naturalized Swiss citizen who had recently completed his studies at the Zurich Polytechnic – was working as a clerk at the Patent Office in Bern. Much of his work related to questions about signal transmission and time synchronization, and this may have influenced his own thoughts, since both of these issues feature prominently in the conceptual thinking that led Einstein to his theory of special relativity, submitted in a paper entitled *Zur Elektrodynamik bewegter Körper* (On the electrodynamics of moving bodies) to *Annalen der Physik* on Friday 30th June 1905.

On the basis of electromagnetic theory, supplemented by the principle of relativity (in the restricted sense) and the principle of the constancy of the velocity of light contained in Maxwell’s equations, Einstein proves Doppler’s principle by demonstrating the following:

*Ist ein Beobachter relativ zu einer unendlich fernen Lichtquelle von der Frequenz ν mit der Geschwindigkeit v derart bewegt, daß die Verbindungslinie “Lichtquelle-Beobachter” mit der auf ein relativ zur Lichtquelle ruhendes Koordinatensystem bezogenen Geschwindigkeit des Beobachters den Winkel φ bildet, so ist die von dem Beobachter wahrgenommene Frequenz ν’ des Lichtes durch die Gleichung gegeben:*

*If an observer is moving with velocity v relatively to an infinitely distant light source of frequency ν, in such a way that the connecting “source-observer” line makes the angle φ with the velocity of the observer referred to a system of co-ordinates which is at rest relatively to the source of light, the frequency ν’ of the light perceived by the observer is given by:*

ν’ = ν (1 − (v/V) cos φ) / √(1 − (v/V)^{2})

where Einstein uses V (not c) to represent the velocity of light. He then finds that both the frequency and energy (E) of a light packet (cf. E=hν) vary with the velocity of the observer in accordance with the same law:

E’/E = (1 − (v/V) cos φ) / √(1 − (v/V)^{2})

It was to this equation Einstein returned in a paper entitled *Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?* (Does the inertia of a Body depend on its Energy Content?) submitted to *Annalen der Physik* on Wednesday 27th September 1905.

**– – – –**

**Mass-energy equivalence**

Einstein’s paper of September 1905 – the last of the famous set published in *Annalen der Physik* in that memorable year – is less than three pages long and constitutes little more than a footnote to the preceding 30-page relativity paper. Yet despite its brevity, it is a difficult and troublesome work over which Einstein brooded for some years.

The paper describes a thought experiment in which a body sends out a light packet in one direction, and simultaneously another light packet of equal energy in the opposite direction. The energy of the body before and after the light emission is determined in relation to two systems of co-ordinates, one at rest relative to the body (where the before-and-after energies are E_{0} and E_{1}) and one in uniform parallel translation at velocity v (where the before-and-after energies are H_{0} and H_{1}).

Einstein applies the law of conservation of energy, the principle of relativity and the above-mentioned energy equation to arrive at the following result for the rest frame and the frame in motion relative to the body, the light energy being represented by a capital L:

E_{0} = E_{1} + ½L + ½L = E_{1} + L

H_{0} = H_{1} + ½L(1 − (v/V) cos φ)/√(1 − (v/V)^{2}) + ½L(1 + (v/V) cos φ)/√(1 − (v/V)^{2}) = H_{1} + L/√(1 − (v/V)^{2})

At this point, things start getting a little tricky. Einstein subtracts the rest frame energies from the moving frame energies for both the before-emission and after-emission cases, and then subtracts these differences:

(H_{0} − E_{0}) − (H_{1} − E_{1}) = L{1/√(1 − (v/V)^{2}) − 1}

These differences represent the before-emission kinetic energy (K_{0}) and after-emission kinetic energy (K_{1}) with respect to the moving frame

K_{0} − K_{1} = L{1/√(1 − (v/V)^{2}) − 1}

Since the right hand side is a positive quantity, the kinetic energy of the body diminishes as a result of the emission of light, even though its velocity v remains constant. To elucidate, Einstein performs a binomial expansion on the first term in the braces, although he makes no mention of the procedure; nor does he show the math. So this next bit is my own contribution:

Let (v/V)^{2} = x

The appropriate form of the binomial expansion is

(1 − x)^{−n} = 1 + nx + [n(n+1)/2!]x^{2} + …

Setting x = v^{2}/V^{2} and n = ½

(1 − v^{2}/V^{2})^{−½} = 1 + v^{2}/2V^{2} + 3v^{4}/8V^{4} + …

The contents of the braces in the kinetic energy expression thus become

{1 + v^{2}/2V^{2} + 3v^{4}/8V^{4} + … − 1} = v^{2}/2V^{2} + 3v^{4}/8V^{4} + …

Now back to Einstein. At this point he introduces a new condition into the scheme of things, namely that the velocity v of the system of co-ordinates moving with respect to the body is much less than the velocity of light V. We are in the classical world of v<<V, and so Einstein allows himself to neglect magnitudes of fourth and higher orders in the above expansion. Hence he arrives at

K_{0} − K_{1} = ½(L/V^{2})v^{2}

This equation gives the amount of kinetic energy lost by the body after emitting a quantity L of light energy. In the classical world of v<<V the kinetic energy of the body is also given by ½mv^{2}, and since the velocity v is the same before and after the light emission, Einstein is led to identify the loss of kinetic energy in his thought experiment with a loss of mass:

*Gibt ein Körper die Energie L in Form von Strahlung ab, so verkleinert sich seine Masse um L/V^{2}. Hierbei ist es offenbar unwesentlich, daß die dem Körper entzogene Energie gerade in Energie der Strahlung übergeht, so daß wir zu der allgemeineren Folgerung geführt werden: Die Masse eines Körpers ist ein Maß für dessen Energieinhalt.*

*If a body gives off the energy L in the form of radiation, its mass diminishes by L/V^{2}. The fact that the energy withdrawn from the body becomes energy of radiation evidently makes no difference, so that we are led to the more general conclusion that: The mass of a body is a measure of its energy content.*
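The low-velocity approximation underlying this conclusion is easily checked numerically: for v much less than V, the exact relativistic expression for the kinetic energy difference and its second-order approximation agree almost perfectly (the values below are arbitrary):

```python
from math import sqrt

L_energy = 1.0    # emitted light energy, arbitrary units
V = 3.0e8         # velocity of light (Einstein's V)
v = 3.0e4         # a "classical" velocity, v << V

# Exact expression for the kinetic energy difference
exact = L_energy * (1 / sqrt(1 - (v / V) ** 2) - 1)

# Einstein's second-order approximation, (1/2)(L/V^2)v^2
approx = 0.5 * (L_energy / V ** 2) * v ** 2

print(exact, approx)    # agree to better than one part in 10^7
```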

**– – – –**

**Testing the theory**

When Einstein wrote *Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?* in 1905, he was certainly aware of the phenomenon of continuous heat emission by radium salts as measured by Curie and Laborde, and confirmed by several others in 1903 and 1904. In fact he saw in this a possible means of putting relativity theory to the test:

*Es ist nicht ausgeschlossen, daß bei Körpern, deren Energieinhalt in hohem Maße veränderlich ist (z. B. bei den Radiumsalzen), eine Prüfung der Theorie gelingen wird.*

*It is not impossible that with bodies whose energy content is variable to a high degree (e.g. with radium salts) the theory may be successfully put to the test.*

In hindsight, it was unlikely that Einstein could have made this test work and he soon abandoned the idea. Not only would the mass difference have been extremely small, but also the process of nuclear decay was conceptually different to Einstein’s thought experiment. In Curie and Laborde’s calorimeter, the energy emitted by the body (radium nucleus) was not initially in the form of radiant energy; it was in the form of kinetic energy carried by an ejected alpha particle (helium nucleus) and a recoiling radon nucleus.

But Einstein had a knack of getting ahead of himself and ending up in the right place. The mass-energy equivalence relation he obtained from his imagined light-emitting body turned out to be valid also in relation to the kinetic energy of radioactive decay particles.

To see this in relation to Curie and Laborde’s experiment, consider the nuclear reaction equation

^{226}_{88}Ra → ^{222}_{86}Rn + ^{4}_{2}He + Q

Here Q is the mass difference in atomic mass units (u) required to balance the equation:

Mass of Ra = 226.02536 u

Mass of Rn (222.01753) + He (4.00260) = 226.02013 u

Mass difference = Q = 0.00523 u

The kinetic energy equivalent of 1 u is 931.5 MeV

So Q = 4.87 MeV

The kinetic energy is shared by the ejected alpha particle and recoiling radon nucleus. Since the velocities are non-relativistic, this can be calculated on the basis of the momentum conservation law and the classical expression for kinetic energy. Given the masses of the Rn and He nuclei, their respective velocities must be in the ratio 4.00260 to 222.01753. Writing the kinetic energy expression as ½mv.v and recognizing that ½mv has the same magnitude for both nuclei, the kinetic energies of the Rn and He nuclei must also be in the ratio 4.00260 to 222.01753. The kinetic energy carried by the alpha particle is therefore

4.87 x 222.01753/226.02013 = 4.78 MeV

This result has been confirmed by experiment.
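The arithmetic above can be reproduced directly from the mass balance:

```python
# Atomic masses in atomic mass units (u), as quoted in the post
m_Ra = 226.02536
m_Rn = 222.01753
m_He = 4.00260

Q_u = m_Ra - (m_Rn + m_He)    # mass difference, u
Q_MeV = Q_u * 931.5           # 1 u is equivalent to 931.5 MeV

# Momentum conservation: the lighter alpha particle carries the larger
# share of the kinetic energy, in proportion to the radon mass
E_alpha = Q_MeV * m_Rn / (m_Rn + m_He)
print(Q_MeV, E_alpha)         # ≈ 4.87 MeV and ≈ 4.78 MeV
```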

**– – – –**

**Links to original papers mentioned in this post**

Sur la chaleur dégagée spontanément par les sels de radium ; par MM. P. Curie et A. Laborde

Comptes Rendus, Tome 136, janvier – juin 1903

http://visualiseur.bnf.fr/CadresFenetre?O=NUMM-3091&I=673&M=tdm

Zur Elektrodynamik bewegter Körper; von A. Einstein

Annalen der Physik 17 (1905) 891-921

https://archive.org/stream/annalenderphysi108unkngoog#page/n1020/mode/2up

Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig? von A. Einstein

Annalen der Physik 18 (1905) 639-641

https://archive.org/stream/annalenderphysi143unkngoog#page/n707/mode/2up

**– – – –**

**Postscript**

In *Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?* Einstein arrived at a general statement on the dependence of inertia on energy (Δm = ΔE/V^{2}, in today’s language E = mc^{2}) from the consideration of a special case. He was deeply uncertain about this result, and returned to it in two further papers in 1906 and 1907, concluding that a general solution was not possible at that time. He had to wait a few years to discover he was right. I include links to these papers for the sake of completeness.

Das Prinzip von der Erhaltung der Schwerpunktsbewegung und die Trägheit der Energie; von A. Einstein

Annalen der Physik 20 (1906) 627-633

http://myweb.rz.uni-augsburg.de/~eckern/adp/history/einstein-papers/1906_20_627-633.pdf

Über die vom Relativitätsprinzip geforderte Trägheit der Energie; von A. Einstein

Annalen der Physik 23 (1907) 371-384

http://myweb.rz.uni-augsburg.de/~eckern/adp/history/einstein-papers/1907_23_371-384.pdf

**– – – –**

P Mander June 2017

Having read biographies of the Curies, Marie Curie’s doctoral thesis, and a number of scholarly articles about the radium phenomenon, I have come to the conclusion that the Marie Curie legend in popular culture tends to sideline her scientific achievements, focusing more on her imagined saintliness and perceived role as bringer of medical marvels than on her pioneering work as a physical chemist, in which her husband Pierre (above left) played an important facilitating role.

To my mind, the popular press of her day was largely responsible for the misconstruction of the Marie Curie legend, filling the public’s mind with the discovery of a miracle cure for cancer brought about by what it portrayed as an angelic young foreign-born mother slaving away in a dark shed in Paris for no wages.

The media frenzy around radium had consequences. Once radium production was established on a commercial scale, ignorant and unscrupulous marketers quickly morphed the Curies’ discovery of an element that glowed in the dark into revitalizing radium baths, radium drinking water, radium chocolate, radium toothpaste, radium cigarettes and even radium suppositories for restoring male potency while eradicating hemorrhoids:

The dreadful damage these products must have caused doesn’t bear thinking about. Sadly, the Curies themselves seemed similarly dazzled by radium. They failed to connect Pierre’s rapidly deteriorating health with exposure to radioactive emissions, while stoically accepting the painful damage to Marie’s hands as a price worth paying for the greater good they somehow imagined radium to represent.

The sensationalist aspects of the Curie legend, while an education in themselves, are however not the subject of this post. Physics and chemistry are the subjects here. When you look at Marie Curie as a physical chemist, and examine her contributions to the science of natural radioactivity, it is clear how crucial a role was played by the miracle machine designed and developed by Pierre Curie.

**– – – –**

**The Curie quartz electrometer apparatus**

The above diagram is taken from *Méthodes de Mesure employées en Radioactivité* published in Paris in 1911 by Albert Laborde, a graduate engineer who became Pierre Curie’s assistant in 1902. It shows the quartz electrometer apparatus developed by Pierre Curie and his brother Jacques for the precise measurement of very weak currents (of the order of tenths of picoamperes) following their discovery of piezoelectricity in 1880. It was this discovery that prompted the brothers Curie to build a calibrated electrostatic charge generator using a thin quartz lamella (center) to compensate and thus measure the leakage current from a charged capacitor (left) using a quadrant electrometer (right).

This apparatus was later adapted by Pierre Curie to allow accurate quantification of the tiny leakage currents produced in an ionization chamber by samples of radioactive material.

This is the experimental set-up that Marie Curie can be seen using in the header photograph, which dates from 1898.

**– – – –**

**Amazing coincidences**

When you consider the train of coincidences that led to Marie Curie’s choice of subject for her doctorate (*Recherches sur les substances radioactives*) it is nothing short of amazing. At the time she was looking around for a suitable topic, five years after having journeyed from Poland to Paris to enroll as a Sorbonne student, Henri Becquerel had just accidentally discovered mysterious rays emanating from uranium which had the property of weakly ionizing air. This was in 1896. Just a year previously, Marie had married Pierre Curie, who happened to possess the one instrument capable of accurately measuring small ionization currents, following his discovery of piezoelectricity sixteen years earlier.

Because uranic rays were a new phenomenon, Marie was saved the task of first researching the topic, which otherwise would have entailed reading a lot of academic papers in unfamiliar French. This saving of time and effort attracted her to choose to study Becquerel’s uranic rays, something she admitted in later life. Furthermore she had no competition, since Becquerel had shown little interest in pursuing his original finding – the big news at the time was X rays, discovered by Wilhelm Röntgen in 1895. No fewer than 1044 papers on X rays were published in 1896, when Becquerel first announced his discovery. Not surprisingly, Becquerel’s rays went largely unnoticed. Marie Curie had the field to herself.

**– – – –**

**The weight, the watch and the light spot**

This magnified portion of the header image shows Marie Curie as she sits at the quartz electrometer apparatus. Her right hand can be seen holding an analytical balance weight in a controlled manner while in her left hand the edge of a stopwatch can be seen. Her eyes are looking fixedly at a horizontal measuring scale above a light source (square hole) mounted on a wooden pedestal.

The light source is shining a beam onto the mirror of a quadrant electrometer out of view to the left. The light spot is reflected onto the horizontal scale (cf. diagram above) and Marie is endeavoring to keep the light spot stationary. She does this by gradually releasing the weight which is attached to the quartz lamella, thereby generating charge to compensate the ionization current produced by the radioactive sample in the ionization chamber also out of view to the left. The entire process of weight release is timed by a stopwatch. Once the weight is fully released the watch is stopped. The weight generates a specific amount of charge Q* on the quartz lamella during the measured time T. Hence Q/T is equal to the ionization current, which is directly proportional to the intensity of the ionizing radiation emitted by the sample, or to use the term Marie Curie coined, its radioactivity.

*The amount of charge Q is calculated from Q = W × K × L/B, where W is the applied weight, K is the quartz specific constant, L is the lamella length and B is the lamella thickness.*

**– – – –**

**Thesis**

On Thursday 25th June 1903, at La Faculté des Sciences de Paris, Marie Curie presented her doctoral thesis to the examination committee, two of whose members were later to become Nobel laureates. The committee was impressed; in fact it expressed the view that her findings represented the greatest scientific contribution ever made in a doctoral thesis.

At the outset, Curie coined a new term – radioactivity – to describe the ionizing radiation emitted by the uranium compounds studied by Henri Becquerel. She announced her discovery that the element thorium also displays radioactivity. And she presented a method, using the quartz electrometer apparatus developed by Jacques and Pierre Curie, by which the intensity of radioactive emissions could be precisely quantified and expressed as ionization currents. This was a game-changing advance on the essentially qualitative methods that had been used hitherto e.g. electroscopes and photographic plates.

As one would expect, Curie began her experimental work with a systematic study of uranium and its compounds, measuring and tabulating their ionization currents. There was a considerable range from the largest to the smallest currents, and within the limits of experimental error it was evident that the ionization currents were proportional to the amounts of uranium present in the sample. The same was true for thorium.

From the chemist’s perspective this was a puzzling result. The properties of chemical compounds of the same element generally depend on what it is compounded with and the arrangement of atoms in the molecule. Yet here was a very different finding – the radioactivity Curie measured was independent of compounding or molecular structure.

Curie drew the conclusion that radioactivity was a property of the atom – *une propriété atomique* she called it. She wasn’t referring to the uranium atom or the thorium atom, but to the atom as a generalized material unit with an implied interior from which radioactive emissions issued. That is a profound conception, with which Marie Curie made a significant contribution to the advancement of physics.

And at this point in her thesis she hadn’t even mentioned radium.

**– – – –**

**New elements**

For the next part of her thesis, Marie Curie turned her attention to the study of uranium-containing minerals, one of which was the mineral pictured above. Today we call it uraninite but in Curie’s day it was called pitchblende. The sample she obtained was from a uranium mine near the town of Joachimsthal in Austria, now Jáchymov in the Czech Republic. She measured its ionization current and found it to be considerably higher than its uranium content warranted. If her radioactivity hypothesis was correct, there was only one explanation: pitchblende contained atoms that emitted much more intense radiation than uranium atoms, which meant that another radioactive substance must be present in the ore. Curie now had the task of finding it, and was joined in this quest by her husband Pierre and the chemist Gustave Bémont.

The quartz electrometer demonstrated its value yet again, since the various fractions derived from the pitchblende sample during chemical analysis could be tested for radioactivity. In this way, the radioactivity was followed to two fractions: one containing the post-transition metal bismuth* and another containing the alkaline earth metal element barium. The Curies announced their findings in July 1898, stating their belief that these fractions contained two previously unknown metal elements, and suggesting that if the existence of these metals were confirmed, the bismuth-like element should be called polonium and the barium-like element radium.

*Unknown to the Curies, the uranium decay series actually produces two radioisotopes of bismuth along with the isotopes of polonium, so the presence of radioactivity in this fraction did not solely indicate the presence of a new element.*

**– – – –**

**The shed**

The heroic work which made Marie Curie a legend took place in a shed at the back of a Grande École* in Rue Vauquelin. In order to produce sufficient quantities to isolate the new elements and determine their atomic weights, tons of pitchblende were needed. This was because the maximum amounts of radium and polonium that can coexist in secular equilibrium with uranium are in proportion to their respective half-lives relative to that of uranium. This fact, together with the limited human resources available, rendered any attempt to isolate polonium impossible, and the situation with radium was not much better. Even at a yield of 100%, a quantity of uranium ore containing 3 metric tons of elemental uranium is needed to extract 1 gram of radium. In the primitive conditions of the shed, obtaining a gram of radium meant processing 8 or 9 tons of uranium ore. One can only wonder at how Marie Curie found the physical and mental strength for such an arduous task.

*At the time it was called École supérieure de physique et de chimie industrielles de la ville de Paris. Today it is called ESPCI Paris.*

**– – – –**

**Did she deserve two Nobel Prizes?**

Marie Curie was awarded a quarter of the Nobel Prize for Physics in 1903 for her work on radioactivity. In 1911 she was the sole recipient of the Nobel Prize for Chemistry, awarded for the discovery of radium and polonium.

There can be no doubt about her credentials for the 1903 award, but some biographers have questioned whether the 1911 Prize was deserved, claiming that the discoveries of radium and polonium were part of the reason for the first prize.

As described in this post, the experimental evidence from which Marie Curie reasoned that radioactivity is an atomic property came solely from her experiments with uranium and thorium. Neither radium nor polonium had anything to do with it. On these grounds the claims of those biographers can be rejected.

Which leaves the question of under what circumstances the discovery of a new element qualifies for a Nobel Prize in chemistry. Clearly the discovery of a naturally radioactive element is not sufficient, otherwise Marguerite Perey – who worked as Marie Curie’s lab assistant and discovered francium in 1939 – would have qualified. Other aspects of the discovery need to be taken into account, and in 1911 there were many such aspects to Marie Curie’s discovery of radium and polonium, and the isolation of radium.

Reading the award citation, what comes across to me – albeit between the lines – is a recognition of the monumental personal effort and dedication involved in the discovery and characterization of these remarkable elements that led to the modern science of nuclear physics.

**– – – –**

P Mander June 2017

A thermodynamic system doesn’t have to be big. Although thermodynamics was originally concerned with very large objects like steam engines for pumping out coal mines, thermodynamic thinking can equally well be applied to very small systems consisting of say, just a few atoms.

Of course, we know that very small systems play by different rules – namely quantum rules – but that’s ok. The rules are known and can be applied. So let’s imagine that our thermodynamic system is an idealized solid consisting of three atoms, each distinguishable from the others by its unique position in space, and each able to perform simple harmonic oscillations independently of the others. At the absolute zero of temperature, the system will have no thermal energy, one microstate and zero entropy, with each atom in its vibrational ground state.

Harmonic motion is quantized, such that if the energy of the ground state is taken as zero and the energy of the first excited state as ε, then 2ε is the energy of the second excited state, 3ε is the energy of the third excited state, and so on. Suppose that from its thermal surroundings our 3-atom system absorbs one unit of energy ε, sufficient to set one of the atoms oscillating. Clearly, one unit of energy can be distributed among three atoms in three different ways – 100, 010, 001 – or in more compact notation [100|3].

Now let’s consider 2ε of absorbed energy. Our system can do this in two ways, either by promoting one oscillator to its second excited state, or two oscillators to their first excited state. Each of these energy distributions can be achieved in three ways, which we can write [200|3], [110|3]. For 3ε of absorbed energy, there are three distributions: [300|3], [210|6], [111|1].

Summarizing the above information:

| Energy E (in units of ε) | Total microstates W | Ratio of successive W’s |
|---|---|---|
| 0 | 1 | – |
| 1 | 3 | 3 |
| 2 | 6 | 2 |
| 3 | 10 | 1⅔ |

The summary shows that as E increases, so does W. This is to be expected, since as W increases, the entropy S (= k log W) increases. In other words E and S increase or decrease together; the ratio ∂E/∂S is always positive. Since ∂E/∂S = T, the finding that E and S increase or decrease together is equivalent to saying that the absolute temperature of the system is always positive.
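These microstate counts are easy to verify by brute-force enumeration. Here is a minimal Python sketch (my own illustration, not part of the original post) that counts the ways N distinguishable oscillators can share E quanta:

```python
from itertools import product

def microstates(N, E):
    """Count microstates: ways N distinguishable oscillators can share E quanta."""
    return sum(1 for levels in product(range(E + 1), repeat=N)
               if sum(levels) == E)

# W for a 3-oscillator system sharing 0..3 units of energy
print([microstates(3, E) for E in range(4)])  # [1, 3, 6, 10]
```

The same function reproduces the values tabulated above.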

**– – – –**

**Adding an extra particle**

It is instructive to compare the distribution of energy among three oscillators (N = 3)*

E = 0: [000|1]

E = 1: [100|3]

E = 2: [200|3], [110|3]

E = 3: [300|3], [210|6], [111|1]

with the distribution among four oscillators (N = 4)*

E = 0: [0000|1]

E = 1: [1000|4]

E = 2: [2000|4], [1100|6]

E = 3: [3000|4], [2100|12], [1110|4]

*For any single distribution among N oscillators where n_{0}, n_{1}, n_{2} … represent the number of oscillators in the ground state, first excited state, second excited state etc, the number of microstates is given by

W = N!/(n_{0}! × n_{1}! × n_{2}! × …)

It is understood that 0! = 1. Derivation of the formula is given in Appendix I.

For both the 3-oscillator and 4-oscillator systems, the first excited state is never less populated than the second, and the second excited state is never less populated than the third. Population is graded downward and the ratios n_{1}/n_{0} > n_{2}/n_{1} > n_{3}/n_{2} are less than unity.

Example calculations for N = 4, E = 3, averaging the occupancies over the 20 microstates:

n_{0} = (4×3 + 12×2 + 4×1)/20 = 2
n_{1} = (12×1 + 4×3)/20 = 1.2
n_{2} = (12×1)/20 = 0.6
n_{3} = (4×1)/20 = 0.2

giving n_{1}/n_{0} = 0.6, n_{2}/n_{1} = 0.5 and n_{3}/n_{2} = ⅓.

Comparisons can also be made of a single ratio across distributions and between systems. For example the values of n_{1}/n_{0} for E = 0, 1, 2, 3 are

(N = 4) : 0, ⅓, ½, ⅗

(N = 3) : 0, ½, ⅔, ¾

Since for a macroscopic system

n_{1}/n_{0} = n_{2}/n_{1} = n_{3}/n_{2} = e^(–ε/kT)

this implies that for a given value of E the 4-oscillator system is colder than the 3-oscillator system. The same conclusion can be reached by looking at the ratio of successive W’s for the 4-oscillator system sharing 0 to 3 units of thermal energy

| Energy E (in units of ε) | Total microstates W | Ratio of successive W’s |
|---|---|---|
| 0 | 1 | – |
| 1 | 4 | 4 |
| 2 | 10 | 2½ |
| 3 | 20 | 2 |

For the 4-oscillator system the ratios of successive W’s are larger than the corresponding ratios for the 3-oscillator system. The logarithms of these ratios are inversely proportional to the absolute temperature, so the larger the ratio the lower the temperature.

**– – – –**

**Finite differences**

The differences between successive W’s for a 4-oscillator system are the values for a 3-oscillator system

W for (N = 4) : 1, 4, 10, 20

Differences : 3, 6, 10

Likewise the differences between successive W’s for a 3-oscillator system and a 2-oscillator system

W for (N = 3) : 1, 3, 6, 10

Differences : 2, 3, 4

Likewise for the differences between successive W’s for a 2-oscillator system and a 1-oscillator system

W for (N = 2) : 1, 2, 3, 4

Differences : 1, 1, 1

This implies that W for the 4-particle system can be expressed as a cubic in n, and that W for the 3-particle system can be expressed as a quadratic in n etc. Evaluation of coefficients leads to the following formula progression

For N = 1

W = 1

For N = 2

W = n + 1

For N = 3

W = ½(n + 1)(n + 2)

For N = 4

W = ⅙(n + 1)(n + 2)(n + 3)

It appears that in general

W = (n + N – 1)!/(n! × (N – 1)!)

Since n = E/ε and ε = hν, the above equation can be written

W = (E/hν + N – 1)!/((E/hν)! × (N – 1)!)

For a system of oscillators this formula describes the functional dependence of W microstates on the size of the particle ensemble (N), its energy (E), the mechanical frequency of its oscillators (ν) and Planck’s constant (h).
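The general formula is simply the binomial coefficient C(n + N – 1, n), which can be evaluated directly. A quick Python check (a sketch of my own, not from the original post):

```python
from math import comb

def W(N, n):
    """W = (n + N - 1)! / (n! (N - 1)!) = C(n + N - 1, n)."""
    return comb(n + N - 1, n)

print([W(3, n) for n in range(4)])  # [1, 3, 6, 10]
print([W(4, n) for n in range(4)])  # [1, 4, 10, 20]
```

The values agree with the enumerated microstate counts for the 3-oscillator and 4-oscillator systems.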

**– – – –**

**Appendix I**

**Formula to be derived**

For any single distribution among N oscillators where n_{0}, n_{1}, n_{2} … represent the number of oscillators in the ground state, first excited state, second excited state etc, the number of microstates is given by

W = N!/(n_{0}! × n_{1}! × n_{2}! × …)

**Derivation**

In combinatorial analysis, the above comes into the category of permutations of sets with the possible occurrence of indistinguishable elements.

Consider the distribution of 3 units of energy across 4 oscillators such that one oscillator has two units, another has the remaining one unit, and the other two oscillators are in the ground state: {2100}

If each of the four numbers were distinct, there would be 4! possible ways to arrange them. But the two zeros are indistinguishable, so the number of ways is reduced by a factor of 2!. The number of ways to arrange {2100} is therefore 4!/2! = 12.

The values 1 and 2 each occur only once in the above set, and 3 does not occur at all. This does not reduce the number of possible ways to arrange {2100}, since 1! = 1 and 0! = 1. Their presence in the denominator has no effect, but for completeness we can write

4!/2!1!1!0!

to compute the number of microstates for the single distribution E = 3, N = 4, {2100} where n_{0} = 2, n_{1} = 1, n_{2} = 1 and n_{3} = 0.

In the general case, the formula for the number of microstates for a single energy distribution of E among N oscillators is

W = N!/(n_{0}! × n_{1}! × n_{2}! × …)

where the terms in the denominator are as defined above.
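The worked example above can be reproduced in a couple of lines. A short Python sketch (my own illustration):

```python
from math import factorial, prod

def distribution_microstates(occupancies):
    """N! / (n0! n1! n2! ...) for one energy distribution among N oscillators."""
    N = sum(occupancies)
    return factorial(N) // prod(factorial(n) for n in occupancies)

# {2100}: n0 = 2, n1 = 1, n2 = 1, n3 = 0  ->  4!/2!1!1!0! = 12
print(distribution_microstates([2, 1, 1, 0]))  # 12
```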

**– – – –**

P Mander April 2016

The Arrhenius equation explains why chemical reactions generally go much faster when you heat them up. The equation was actually first given by the Dutch physical chemist JH van ‘t Hoff in 1884, but it was the Swedish physical chemist Svante Arrhenius (pictured above) who in 1889 interpreted the equation in terms of activation energy, thereby opening up an important new dimension to the study of reaction rates.

**– – – –**

**Temperature and reaction rate**

The systematic study of chemical kinetics can be said to have begun in 1850 with Ludwig Wilhelmy’s pioneering work on the kinetics of sucrose inversion. Right from the start, it was realized that reaction rates showed an appreciable dependence on temperature, but it took four decades before real progress was made towards quantitative understanding of the phenomenon.

In 1889, Arrhenius penned a classic paper in which he considered eight sets of published data on the effect of temperature on reaction rates. In each case he showed that the rate constant could be represented as an explicit function of the absolute temperature:

k = A × e^(–C/T)

where both A and C are constants for the particular reaction taking place at temperature T. In his paper, Arrhenius listed the eight sets of published data together with the equations put forward by their respective authors to express the temperature dependence of the rate constant. In one case, the equation – stated in logarithmic form – was identical to that proposed by Arrhenius

ln k = b – a/T

where T is the absolute temperature and a and b are constants. This equation was published five years before Arrhenius’ paper in a book entitled *Études de Dynamique Chimique*. The author was J. H. van ‘t Hoff.

**– – – –**

**Dynamic equilibrium**

In the *Études* of 1884, van ‘t Hoff compiled a contemporary encyclopædia of chemical kinetics. It is an extraordinary work, containing all that was previously known as well as a great deal that was entirely new. At the start of the section on chemical equilibrium he states (without proof) the thermodynamic equation, sometimes called the van ‘t Hoff isochore, which quantifies the displacement of equilibrium with temperature. In modern notation it reads:

d ln K_{c}/dT = ΔH/RT²

where K_{c} is the equilibrium constant expressed in terms of concentrations, ΔH is the heat of reaction and T is the absolute temperature. In a footnote to this famous and thermodynamically exact equation, van ‘t Hoff builds a bridge from thermodynamics to kinetics by advancing the idea that a chemical reaction can take place in both directions, and that the thermodynamic equilibrium constant K_{c} is in fact the quotient of the kinetic velocity constants for the forward (k_{1}) and reverse (k_{-1}) reactions

K_{c} = k_{1}/k_{-1}

Substituting this quotient in the original equation leads immediately to

d ln k_{1}/dT – d ln k_{-1}/dT = ΔH/RT²

van ‘t Hoff then argues that the rate constants will be influenced by two different energy terms E_{1} and E_{-1}, and splits the above into two equations

d ln k_{1}/dT = E_{1}/RT²

d ln k_{-1}/dT = E_{-1}/RT²

where the two energies are such that E_{1} – E_{-1} = ΔH

In the *Études*, van ‘t Hoff recognized that ΔH might or might not be temperature independent, and considered both possibilities. In the former case, he could integrate the equation to give the solution

k = A × e^(–E/RT)

From a starting point in thermodynamics, van ‘t Hoff engineered this kinetic equation through a characteristically self-assured thought process. And it was this equation that the equally self-assured Svante Arrhenius seized upon for his own purposes, expanding its application to explain the results of other researchers, and enriching it with his own idea for how the equation should be interpreted.
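The integration step is easy to spot-check numerically: differentiating ln k = ln A – E/RT with respect to T should recover d ln k/dT = E/RT². In the Python sketch below, the values of A and E are arbitrary illustrative assumptions of my own:

```python
from math import exp, log

R = 8.314       # gas constant, J mol^-1 K^-1
A = 1.0e13      # assumed pre-exponential factor (illustrative)
E = 80_000.0    # assumed energy term, J mol^-1 (illustrative)

def ln_k(T):
    """ln k for k = A exp(-E/RT)."""
    return log(A * exp(-E / (R * T)))

T, h = 300.0, 1e-3
numeric = (ln_k(T + h) - ln_k(T - h)) / (2 * h)   # central-difference derivative
analytic = E / (R * T**2)                         # van 't Hoff's d ln k / dT
print(numeric, analytic)  # both approximately 0.1069
```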

**– – – –**

**Activation energy**

It is a well-known result of the kinetic theory of gases that the average kinetic energy per mole of gas (E_{K}) is given by

E_{K} = (3/2)RT

Since the only variable on the RHS is the absolute temperature T, we can conclude that doubling the temperature will double the average kinetic energy of the molecules. This set Arrhenius thinking, because the eight sets of published data in his 1889 paper showed that the effect of temperature on the rates of chemical processes was generally much too large to be explained on the basis of how temperature affects the average kinetic energy of the molecules.

The clue to solving this mystery was provided by James Clerk Maxwell, who in 1860 had worked out the distribution of molecular velocities from the laws of probability. Maxwell’s distribution law enables the fraction of molecules possessing a kinetic energy exceeding some arbitrary value E to be calculated.

It is convenient to consider the distribution of molecular velocities in two dimensions instead of three, since the distribution law so obtained gives very similar results and is much simpler to apply. At absolute temperature T, the proportion of molecules for which the kinetic energy exceeds E is given by

n/N = e^(–E/RT)

where n is the number of molecules with kinetic energy greater than E, and N is the total number of molecules. This is exactly the exponential expression which occurs in the velocity constant equation derived by van ‘t Hoff from thermodynamic principles, which Arrhenius showed could be fitted to temperature dependence data from several published sources.

Compared with the average kinetic energy calculation, this exponential expression yields very different results. At 1000 K, the fraction of molecules having a greater energy than, say, 80 kJ is 0.0000662, while at 2000 K the fraction is 0.00814. So the temperature change which doubles the number of molecules with the average energy will increase the number of molecules with E > 80 kJ by a factor of more than a hundred.
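These fractions follow directly from the exponential expression e^(–E/RT) and are easy to reproduce. A quick Python check (my own sketch, with E taken as 80 kJ per mole):

```python
from math import exp

R = 8.314      # gas constant, J mol^-1 K^-1
E = 80_000.0   # threshold kinetic energy, J mol^-1 (80 kJ)

for T in (1000.0, 2000.0):
    print(T, exp(-E / (R * T)))  # fraction of molecules with energy > E

print(exp(-E / (R * 2000.0)) / exp(-E / (R * 1000.0)))  # about 123
```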

Here was the clue Arrhenius was seeking to explain why increased temperature had such a marked effect on reaction rate. He reasoned it was because molecules needed sufficiently more energy than the average – the activation energy E – to undergo reaction, and that the fraction of these molecules in the reaction mix was an exponential function of temperature.

**– – – –**

**The meaning of A**

But back to the Arrhenius equation

k = A × e^(–E/RT)

I have always thought that calling the constant A the ‘pre-exponential factor’ is a singularly pointless label. One could equally write the equation as

k = e^(–E/RT) × A

and call it the ‘post-exponential factor’. The position of A in relation to the exponential factor has no relevance.

A clue to the proper meaning of A is to note that e^(–E/RT) is dimensionless. The units of A are therefore the same as the units of k. But what are the units of k?

The answer depends on whether one’s interest area is kinetics or thermodynamics. In kinetics, the concentration of chemical species present at equilibrium is generally expressed as molar concentration, giving rise to a range of possibilities for the units of the velocity constant k.

In thermodynamics however, the dimensions of k are uniform. This is because the chemical potential of reactants and products in any arbitrarily chosen state is expressed in terms of activity a, which is defined as a ratio in relation to a standard state and is therefore dimensionless.

When the arbitrarily chosen conditions represent those for equilibrium, the equilibrium constant K is expressed in terms of reactant (aA + bB + …) and product (mM + nN + …) activities

K = [(a_{M})^m (a_{N})^n … / (a_{A})^a (a_{B})^b …]_{e}

where the subscript e indicates that the activities are those for the system at equilibrium.

As students we often substitute molar concentrations for activities, since in many situations the activity of a chemical species is approximately proportional to its concentration. But if an equation is arrived at from consideration of the thermodynamic equilibrium constant K – as the Arrhenius equation was – it is important to remember that the associated concentration terms are strictly dimensionless and so the reaction rate, and therefore the velocity constant k, and therefore A, has the units of frequency (t^-1).

OK, so back again to the Arrhenius equation

k = A × e^(–E/RT)

We have determined the dimensions of A; now let us turn our attention to the role of the dimensionless exponential factor. The values this term may take range between 0 and 1, and specifically when E = 0, e^(–E/RT) = 1. This allows us to assign a physical meaning to A since when E = 0, A = k. We can think of A as the velocity constant when the activation energy is zero – in other words when each collision between reactant molecules results in a reaction taking place.

Since there are zillions of molecular collisions taking place every second just at room temperature, any reaction in these circumstances would be uber-explosive. So the exponential term can be seen as a modifier of A whose value reflects the range of reaction velocity from extremely slow at one end of the scale (high E/low T) to extremely fast at the other (low E/high T).

**– – – –**

P Mander September 2016

CarnotCycle is honored to announce that Rutgers, The State University of New Jersey, has included this thermodynamics blog on the reading list for students of physical chemistry. Founded in 1766, Rutgers is the eighth oldest college in the United States and is the largest institution for higher education in New Jersey.

CarnotCycle is committed to making topics in this area of science accessible to students worldwide. Thermodynamics has played – and continues to play – a major role in shaping our world. It can be a difficult subject, but time spent learning about thermodynamics is never wasted. It enriches knowledge and empowers the mind.

Link: http://andromeda.rutgers.edu/~huskey/345f17_lec.html

**– – – –**

CarnotCycle is a thermodynamics blog but occasionally its enthusiasm spills over into other subjects, as is the case here.

**– – – –**

When one considers the great achievements in radioactivity research made at the start of the 20th century by Ernest Rutherford and his team at the Victoria University, Manchester it seems surprising how little progress they made in finding an answer to the question posed above.

They knew that radioactivity was unaffected by any agency applied to it (even temperatures as low as 20 K), and since the radioactive decay law discovered in 1902 by Rutherford and Soddy was an exponential function associated with probabilistic behavior, it was reasonable to think that radioactivity might be a random process. Egon von Schweidler’s work pointed firmly in this direction, and the Geiger-Nuttall relation, formulated by Hans Geiger and John Nuttall at the Manchester laboratory in 1911 and reformulated in 1912 by Richard Swinne in Germany, laid a mathematical foundation on which to construct ideas. Yet despite these pointers, Rutherford wrote in 1912 that *“it is difficult to offer any explanation of the causes operating which lead to the ultimate disintegration of the atom”*.

The phrase *“causes operating which lead to”* indicates that Rutherford saw the solution in terms of cause and effect. Understandably so, since he came from an age where probability was regarded as a measure of uncertainty about exact cause, rather than something reflecting a naturally indeterministic process. C.P. Snow once said of Rutherford, *“He thought of atoms as though they were tennis balls”*. And therein lay the essence of his problem: he didn’t have the right kind of mind to answer this kind of question.

But someone else did, namely the pioneer who introduced the term radioactivity and gave it a quantifiable meaning – Maria Sklodowska, better known under her married name Marie Curie.

**– – – –**

**Mme. Curie’s idea**

When all the great men of science (and one woman) convened for the second Solvay Conference in 1913, the hot topic of the day was the structure of the atom. Hans Geiger and Ernest Marsden at Rutherford’s Manchester lab had recently conducted their famous particle scattering experiment, enabling Rutherford to construct a model of the atom with a central nucleus where its positive charge and most of its mass were concentrated. Rutherford and his student Thomas Royds had earlier conducted their celebrated experiment which identified the alpha particle as a helium nucleus, so the attention now focused on trying to explain the process of alpha decay.

It was Marie Curie who produced the most fruitful idea, foreshadowing the quantum mechanical interpretation developed in the 1920s. Curie suggested that alpha decay could be likened to a particle bouncing around inside a box with a small hole through which the particle could escape. This would constitute a random event; with a large number of boxes these events would follow the laws of probability, even though the model was conceptually based on simple kinetics.

Now it just so happened that a probability distribution based on exactly this kind of random event had already been described in an academic paper, published in 1837 and rather curiously entitled *Recherches sur la probabilité des jugements en matière criminelle et matière civile* (Research on the probability of judgments in criminal and civil cases). The author was the French mathematician Siméon Denis Poisson (1781-1840).

**– – – –**

**The Poisson distribution**

At the age of 57, just three years before his death, Poisson turned his attention to the subject of court judgements, and in particular to miscarriages of justice. In probabilistic terms, Poisson was considering a large number of trials (excuse the pun) involving just two outcomes – a correct or an incorrect judgement. And with many years of court history on the public record, Poisson had the means to compute a time-averaged figure for the thankfully rare judicial failures.

In his 1837 paper Poisson constructed a model which regarded an incorrect judgement as a random event which did not influence any other subsequent judgement – in other words it was an independent random event. He was thus dealing with a random variable in the context of a binomial experiment with a large number of trials (n) and a small probability (p), whose product (pn) he asserted was finite and equal to µ, the mean number of events occurring in a given number of dimensional units (in this case, time).

In summary, Poisson started with the binomial probability distribution

(q + p)^n = q^n + nq^(n-1)p + … + p^n

where p is the probability of success and q is the probability of failure, in which successive terms of the binomial expansion give the probability of the event occurring exactly r times in n trials

P(r) = [n!/(r!(n-r)!)] × p^r × q^(n-r)

Asserting µ = pn, he evaluated P(r) as n goes to infinity and found that

P(r) = (µ^r/r!) × e^(–µ)

This is the general representation of each term in the Poisson probability distribution, which sums to unity as a probability distribution must, as can be seen from

e^(–µ) × (1 + µ + µ²/2! + µ³/3! + …) = e^(–µ) × e^µ = 1

As indicated above, the mean µ is the product of the mean per unit dimension and the number of dimensional units. In the case of radioactivity, µ = λt where λ is the decay constant and t is the number of time units

If we set t equal to the half-life t_{½}, the mean µ will be λt_{½} = ln 2. Mapping probabilities for the first few terms of the distribution yields

P(0) = 0.500, P(1) ≈ 0.347, P(2) ≈ 0.120, P(3) ≈ 0.028

Unlike the binomial distribution, the Poisson distribution is not symmetric; the maximum does not correspond to the mean. In the case of µ = ln 2 the probability of no decays (r = 0) is exactly a half, as can be seen from

P(0) = (µ^0/0!) × e^(–ln 2) = ½
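The first few terms of the distribution for µ = ln 2 can be computed in a couple of lines. A Python sketch (my own illustration, not from the original post):

```python
from math import exp, factorial, log

mu = log(2)  # mean number of decays over one half-life

def poisson(r, mu):
    """P(r) = (mu^r / r!) e^(-mu)"""
    return mu**r * exp(-mu) / factorial(r)

for r in range(4):
    print(r, round(poisson(r, mu), 3))
# P(0) is exactly 1/2, since e^(-ln 2) = 1/2
```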

At this point we turn to another concept introduced by Poisson in his paper which was taken further by the Russian mathematician P.L. Chebyshev – namely the law of large numbers. In essence, this law says that if the probability of an event is p, the average number of occurrences of the event approaches p as the number of independent trials increases.

In the case of radioactive decay, the number of independent trials (atoms) is extremely large: a µg sample of Cesium 137 for example will contain around 10^15 nuclei. In the case of µ = ln2 the law of large numbers means that the average number of atoms remaining intact after the half-life period will be half the number of atoms originally present in the sample.

The Poisson distribution correctly accounts for half-life behavior, and has been successfully applied to counting rate experiments and particle scattering. There is thus a body of evidence to support the notion that radioactive decay is a random event to which the law of large numbers applies, and is therefore not a phenomenon that requires explanation in terms of cause and effect.
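The law-of-large-numbers argument can also be illustrated by simulation: treat each atom as an independent trial that survives one half-life with probability ½. A Python sketch (entirely my own illustration):

```python
import random

random.seed(42)  # reproducible run

def surviving(n_atoms, p_survive=0.5):
    """One half-life: each atom independently survives with probability p."""
    return sum(1 for _ in range(n_atoms) if random.random() < p_survive)

for n in (100, 10_000, 1_000_000):
    print(n, surviving(n) / n)  # surviving fraction approaches 1/2 as n grows
```

As the number of atoms increases, the fluctuation about one half shrinks, which is why half-life behavior is so precisely reproducible for macroscopic samples.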

**– – – –**

**Geiger and Nuttall**

Despite Ernest Rutherford’s protestations that atomic disintegration defied explanation, it was in fact Rutherford who took the first step along the path that would eventually lead to a quantum mechanical explanation of α-decay. In 1911 and again in 1912, Rutherford communicated papers by two of his Manchester co-workers, physics lecturer Hans Geiger (of Geiger counter fame) and John Nuttall, a graduate student.

Rutherford’s team at the Physical Laboratories was well advanced with identifying radioactive decay products, several of which were α-emitters. It had been noticed that α-emitters with more rapid decay rates had greater α-particle ranges. Geiger and Nuttall investigated this phenomenon, and when they plotted the logarithms of the decay constants (they called them transformation constants) against the logarithms of the corresponding α-particle ranges for decay products in the uranium and actinium series they got this result (taken from their 1911 paper):

[Figure: log λ plotted against log R for the uranium and actinium series; each series gives a straight line]

This implies the existence of a relationship log λ = A + B log R, where A has a characteristic value for each series and B has the same value for both series. Curiously, Geiger and Nuttall did not express the straight lines in mathematical terms in either of their papers; they were more interested in using the lines to calculate the immeasurably short periods of long-range α-emitters. But they made reference in their 1912 paper to somebody who had *“recently shown that the relation between range and transformation constant can be expressed in another form”*.

That somebody was the German physicist Richard Swinne (1885-1939) who sent a paper entitled *Über einige zwischen den radioaktiven Elementen bestehende Beziehungen* (On some relationships between the radioactive elements) to *Physikalische Zeitschrift*, which the journal received on Tuesday 5th December 1911 and published in volume XIII, 1912.

The other form that Swinne had found, which he claimed to represent the experimental data at least as well as the (unstated) formula of Geiger and Nuttall, was log λ = a + bv^n, where a and b are constants and v is the particle velocity.

When it came to n, Swinne was rangefinding: he tried various values of n and found that *“n kann am besten gleich 1 gesetzt werden”* (n can best be set equal to 1); he was thus edging towards what we now call the Geiger-Nuttall law, namely that the logarithm of the α-emitter’s half-life is a linear function of the inverse square root of the α-particle’s kinetic energy.
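In modern notation (a standard textbook form, not a formula stated in either paper), the Geiger-Nuttall law is usually written

```latex
\log_{10} t_{1/2} = \frac{a\,Z}{\sqrt{Q_\alpha}} + b
```

where t_{1/2} is the half-life, Z the atomic number of the daughter nucleus, Q_{α} the α-particle kinetic energy, and a, b empirically fitted constants.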

**– – – –**

**Gurney and Condon, and Gamow**

In 1924, the British mathematician Harold Jeffreys developed a general method of approximating solutions to linear, second-order differential equations. This method, rediscovered as the WKB approximation in 1926, was applied to the Schrödinger equation – first published that same year – and led to the discovery of the phenomenon known as quantum tunneling.

It was this strange effect, by which a particle with insufficient energy to surmount a potential barrier can effectively tunnel through it (the dotted line DB), that was seized upon in 1928 by Ronald Gurney and Edward Condon at Princeton – and independently by George Gamow at Göttingen – as a way of explaining alpha decay. Gurney and Condon’s explanation of alpha emission was published in *Nature* in an article entitled *Wave Mechanics and Radioactive Disintegration*, while Gamow’s considerably more academic (and mathematical) paper *Zur Quantentheorie des Atomkernes* was published in *Zeitschrift für Physik*.

In the quantum mechanical treatment, the overall rate of emission (i.e. the decay constant λ) is the product of a frequency factor f – the rate at which an alpha particle appears at the inside wall of the nucleus – multiplied by a transmission coefficient T, which is the (independent) probability that the alpha particle tunnels through the barrier. Thus

λ = f × T

At this point it is instructive to recall Marie Curie’s particle-in-a-box idea, a concept which involves the product of two quantities: a large number of escape attempts and a small probability of escape.

The frequency factor f – or escape attempt rate – is estimated as the particle velocity v divided by the distance across the nucleus (2R), where R is the radius

f = v/2R, v = √(2(V_{0} + Q_{α})/µ)

Here, V_{0} is the potential well depth, Q_{α} is the alpha particle kinetic energy and µ is the reduced mass. The escape attempt rate is quite large, usually of the order of 10^{21} per second. By contrast the probability of alpha particle escape is extremely small. In calculating a value for T, Gamow introduced the Gamow factor 2G, writing the transmission coefficient as T = e^{−2G}.

Typically the Gamow factor is very large (2G = 60-120) which makes T very small (T = 10^{-55}-10^{-27}).
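The sizes of these two factors can be checked with a few lines of arithmetic (illustrative values only – f and 2G are taken from the typical ranges quoted above, not from a fit to any particular nuclide):

```python
import math

f = 1e21        # escape attempt rate, per second (typical order of magnitude)
two_G = 60.0    # Gamow factor at the low end of the quoted 60-120 range

T = math.exp(-two_G)           # transmission coefficient per attempt
lam = f * T                    # decay constant, lambda = f * T
half_life = math.log(2) / lam  # half-life in seconds

print(f"T = {T:.2e}, lambda = {lam:.2e} per s, half-life = {half_life:.2e} s")
```

Even with a huge attempt rate, e^{−60} ≈ 10^{−26} pulls the half-life up to about a day; pushing 2G towards 120 stretches it far beyond the age of the universe, which is how the same mechanism spans the enormous range of observed α-decay lifetimes.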

Combining the equations gives

λ = (v/2R) e^{−2G}

or

ln λ = ln(v/2R) − 2G

Since 2G varies as the inverse square root of Q_{α}, this is a linear relation between the logarithm of the decay constant and 1/√Q_{α} – which is the Geiger-Nuttall law.

The work of Gurney, Condon and Gamow provided a convincing theoretical explanation of the Geiger-Nuttall law on the basis of quantum mechanics and Marie Curie’s hunch, and put an end to the classical notions of Rutherford’s generation that radioactive decay required explanation in terms of cause and effect.

So to return to the question posed at the head of this post – *What determines the moment at which a radioactive atom decays?* – the answer is chance. And the law of large numbers.

**– – – –**

**An important consequence**

The successful application of quantum tunneling to alpha particle emission had an important consequence, since it suggested to Gamow that the same idea could be applied in reverse, i.e. that projectile particles with lower energy might be able to penetrate the nucleus through quantum tunneling. This led Gamow to suggest to John Cockcroft, who was conducting atom-smashing experiments with protons, that protons of more moderate energy could be used. Gamow’s suggestion proved correct, and the success of these experiments ushered in a new era of intensive development in nuclear physics.

**– – – –**

**Links to original papers mentioned in this post**

G. Gamow (1928) *Zur Quantentheorie des Atomkernes*, Zeitschrift für Physik; 51: 204-212

https://link.springer.com/article/10.1007/BF01343196

H. Geiger and J.M. Nuttall (1911) *The ranges of the α particles from various radioactive substances and a relation between range and period of transformation*, Phil Mag; 22: 613-621

https://archive.org/stream/londonedinburg6221911lond#page/612/mode/2up

H. Geiger and J.M. Nuttall (1912) *The ranges of α particles from uranium*, Phil Mag; 23: 439-445

https://archive.org/stream/londonedinburg6231912lond#page/438/mode/2up

R.W. Gurney and E.U. Condon (1928) *Wave Mechanics and Radioactive Disintegration*, Nature; 122 (Sept. 22): 439

http://www.nature.com/physics/looking-back/gurney/index.html

R. Swinne (1912) *Über einige zwischen den radioaktiven Elementen bestehende Beziehungen*, Physikalische Zeitschrift; XIII: 14-21

https://babel.hathitrust.org/cgi/pt?id=mdp.39015023176806;view=1up;seq=52;size=125

**– – – –**

P Mander August 2017

There has been a fair amount of interest in my formula which computes AH from measured RH and T, since it adds value to the output of RH&T sensors. To further extend this value, I have developed another formula which computes dew point temperature T_{D} from measured RH and T.

**Formula for computing dew point temperature T_{D}**

In this formula (P Mander 2017) the measured temperature T and the computed dew point temperature T_{D} are expressed in degrees Celsius, and the measured relative humidity RH is expressed in %:

T_{D} = 243.5 × [ln(RH/100) + 17.67T/(243.5 + T)] / [17.67 − ln(RH/100) − 17.67T/(243.5 + T)]


**– – – –**

**Strategy for computing T_{D} from RH and T**

1. The dew point temperature T_{D} is defined in the following relation, where RH is expressed in %

P_{sat}(T_{D}) = (RH/100) × P_{sat}(T)

2. To obtain values for P_{sat}, we can use the Bolton formula^{[REF, eq.10]} which generates saturation vapor pressure P_{sat} (hectopascals) as a function of temperature T (Celsius)

P_{sat} = 6.112 exp[17.67T/(T + 243.5)]

This formula is stated to be accurate to within 0.1% over the temperature range –30°C to +35°C

3. Substituting in the first equation yields

6.112 exp[17.67T_{D}/(T_{D} + 243.5)] = (RH/100) × 6.112 exp[17.67T/(T + 243.5)]

Taking logarithms

17.67T_{D}/(T_{D} + 243.5) = ln(RH/100) + 17.67T/(T + 243.5)

Rearranging, and separating the T_{D} terms on one side, yields

T_{D} = 243.5 × [ln(RH/100) + 17.67T/(243.5 + T)] / [17.67 − ln(RH/100) − 17.67T/(243.5 + T)]
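The derivation above translates directly into a few lines of Python (the function names are mine; the constants 6.112, 17.67 and 243.5 are Bolton’s):

```python
import math

A, B = 17.67, 243.5  # Bolton's constants (dimensionless, deg C)

def saturation_vapor_pressure(t):
    """Bolton's formula: saturation vapor pressure in hPa at temperature t (deg C)."""
    return 6.112 * math.exp(A * t / (B + t))

def dew_point(t, rh):
    """Dew point (deg C) from temperature t (deg C) and relative humidity rh (%)."""
    gamma = math.log(rh / 100) + A * t / (B + t)
    return B * gamma / (A - gamma)

print(round(dew_point(20.0, 50.0), 1))  # dew point at 20 deg C, 50% RH -> 9.3
```

At RH = 100% the formula returns T_{D} = T exactly, as it should.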

**– – – –**

**Spreadsheet formula for computing T_{D} from RH and T**

**1)** Set up data entry cells for RH in % and T in degrees Celsius.

**2)** Depending on whether your spreadsheet uses a full point (.) or comma (,) for the decimal separator, copy the appropriate formula below and paste it into the computation cell for T_{D}.

Formula for T_{D} (decimal separator = .)

=243.5*(LN(RH/100)+((17.67*T)/(243.5+T)))/(17.67-LN(RH/100)-((17.67*T)/(243.5+T)))

Formula for T_{D} (decimal separator = ,)

=243,5*(LN(RH/100)+((17,67*T)/(243,5+T)))/(17,67-LN(RH/100)-((17,67*T)/(243,5+T)))

**3)** Replace T and RH in the formula with the respective cell references.

Your spreadsheet is now complete. Enter values for RH and T, and the T_{D} computation cell will return the dew point temperature. If an object whose temperature is at or below this temperature is present in the local space, the thermodynamic conditions are satisfied for water vapor to condense (or freeze if T_{D} is below 0°C) on the surface of the object.

**– – – –**

P Mander August 2017

From the perspective of classical thermodynamics, osmosis has a rather unclassical history. Part of the reason for this, I suspect, is that osmosis was originally categorised under the heading of biology. I can remember witnessing the first practical demonstration of osmosis in a biology class, the phenomenon being explained in terms of pores (think invisible holes) in the membrane that were big enough to let water molecules through, but not big enough to let sucrose molecules through. It was just like a kitchen sieve, we were told. It lets the fine flour pass through but not clumps. This was very much the method of biology in my day, explaining things in terms of imagined mechanism and analogy.

And it wasn’t just in my day. In 1883, JH van ‘t Hoff, an able theoretician and one of the founders of the new discipline of physical chemistry, became suddenly convinced that solutions and gases obeyed the same fundamental law, pv = RT. Imagined mechanism swiftly followed. In van ‘t Hoff’s interpretation, osmotic pressure depended on the impact of solute molecules against the semipermeable membrane because solvent molecules, being present on both sides of the membrane through which they could freely pass, did not enter into consideration.

It all seemed very plausible, especially when van ‘t Hoff used the osmotic pressure measurements of the German botanist Wilhelm Pfeffer to compute the value of R in what became known as the van ‘t Hoff equation

ΠV = RT

where Π is the osmotic pressure, and found that the calculated value for R was almost identical with the familiar gas constant. There really did seem to be a parallelism between the properties of solutions and gases.

The first sign that there was anything amiss with the so-called gaseous theory of solutions came in 1891 when van ‘t Hoff’s close colleague Wilhelm Ostwald produced unassailable proof that osmotic pressure is independent of the nature of the membrane. This meant that hypothetical arguments as to the cause of osmotic pressure, such as van ‘t Hoff had used as the basis of his theory, were inadmissible.

A year later, in 1892, van ‘t Hoff changed his stance by declaring that the mechanism of osmosis was unimportant. But this did not affect the validity of his osmotic pressure equation ΠV = RT. After all, it had been shown to be in close agreement with experimental data for very dilute solutions.

It would be decades – the 1930s in fact – before the van ‘t Hoff equation’s formal identity with the ideal gas equation was shown to be coincidental, and that the proper thermodynamic explanation of osmotic pressure lay elsewhere.

But long before the 1930s, even before Wilhelm Pfeffer began his osmotic pressure experiments upon which van ‘t Hoff subsequently based his ideas, someone had already published a thermodynamically exact rationale for osmosis that did not rely on any hypothesis as to cause.

That someone was the American physicist Josiah Willard Gibbs. The year was 1875.

**– – – –**

**Osmosis without mechanism**

It is a remarkable feature of Gibbs’ *On the Equilibrium of Heterogeneous Substances* that having introduced the concept of chemical potential, he first considers osmotic forces before moving on to the fundamental equations for which the work is chiefly known. The reason is Gibbs’ insistence on logical order of presentation. The discussion of chemical potential immediately involves equations of condition, one source of which is what Gibbs calls a diaphragm, i.e. a semipermeable membrane. Hence the early appearance of the following section

In equation 77, Gibbs presents a new way of understanding osmotic pressure. He makes no hypotheses about how a semipermeable membrane might work, but simply states the equations of condition which follow from the presence of such a membrane in the kind of system he describes.

This frees osmosis from considerations of mechanism, and explains it solely in terms of differences in chemical potential in components which can pass the diaphragm while other components cannot.

In order to achieve equilibrium between say a solution and its solvent, where only the solvent can pass the diaphragm, the chemical potential of the solvent in the fluid on both sides of the membrane must be the same. This necessitates applying additional pressure to the solution to increase the chemical potential of the solvent in the solution so it equals that of the pure solvent, temperature remaining constant. At equilibrium, the resulting difference in pressure across the membrane is the osmotic pressure.

*Note that increasing the pressure always increases the chemical potential since*

(∂µ_{1}/∂P)_{T} = V_{1}

*is always positive (V_{1} is the partial molar volume of the solvent in the solution).*
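Gibbs’ equilibrium condition can be pushed one step further to recover van ‘t Hoff’s law – a standard textbook derivation assuming an ideal, dilute, incompressible solution, sketched here rather than taken from Gibbs:

```latex
% Equate solvent potentials across the diaphragm:
% pure solvent at pressure P, solution at pressure P + \Pi
\mu_1^{0}(P) = \mu_1(P+\Pi,\,x_1)
            = \mu_1^{0}(P+\Pi) + RT\ln x_1
      \approx \mu_1^{0}(P) + \Pi V_1 + RT\ln x_1
\quad\Rightarrow\quad
\Pi V_1 = -RT\ln x_1 \approx RT\,x_2
```

where x_{1} and x_{2} are the mole fractions of solvent and solute. For a dilute solution the solution volume is approximately n_{1}V_{1}, so the result reduces to van ‘t Hoff’s ΠV = RT per mole of solute.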

**– – – –**

**Europe fails to notice (almost)**

Gibbs published *On the Equilibrium of Heterogeneous Substances* in *Transactions of the Connecticut Academy*. Publishing in such an obscure journal (seen from a European perspective) was unlikely to attract much attention across the pond, but Gibbs had a secret weapon: a mailing list of the world’s greatest scientists, to whom he sent reprints of his papers.

One of the names on that list was James Clerk Maxwell, who instantly appreciated Gibbs’ work and began to promote it in Europe. On Wednesday 24 May 1876, the year that *‘Equilibrium’* was first published, Maxwell gave an address at the South Kensington Conferences in London on the subject of Gibbs’ development of the doctrine of available energy on the basis of his new concept of the chemical potentials of the constituent substances. But the audience did not share Maxwell’s enthusiasm, or in all likelihood share his grasp of Gibbs’ ideas. When Maxwell tragically died three years later, Gibbs’ powerful ideas lost their only real champion in Europe.

It was not until 1891 that interest in Gibbs’ masterwork resurfaced through the agency of Wilhelm Ostwald, who together with van ‘t Hoff and Arrhenius founded the modern school of physical chemistry.

Although perhaps overshadowed by his colleagues, Ostwald had a talent for sensing the direction that the future would take and was also a shrewd judge of intellect – he instinctively felt that there were hidden treasures in Gibbs’ *magnum opus*. After spending an entire year translating *‘Equilibrium’* into German, Ostwald wrote to Gibbs:

*“The translation of your main work is nearly complete and I cannot resist repeating here my amazement. If you had published this work over a longer period of time in separate essays in an accessible journal, you would now be regarded as by far the greatest thermodynamicist since Clausius – not only in the small circle of those conversant with your work, but universally—and as one who frequently goes far beyond him in the certainty and scope of your physical judgment. The German translation, hopefully, will more secure for it the general recognition it deserves.”*

The following year – 1892 – another respected scientist sent a letter to Gibbs regarding *‘Equilibrium’*. This time it was the British physicist, Lord Rayleigh, who asked Gibbs:

*“Have you ever thought of bringing out a new edition of, or a treatise founded upon, your “Equilibrium of Het. Substances.” The original version though now attracting the attention it deserves, is too condensed and too difficult for most, I might say all, readers. The result is that as has happened to myself, the idea is not grasped until the subject has come up in one’s own mind more or less independently.”*

Rayleigh was probably just being diplomatic when he remarked that Gibbs’ treatise was *‘now attracting the attention it deserves’*. The plain fact is that nobody gave it any attention at all. Gibbs and his explanation of osmosis in terms of chemical potential was passed over, while European and especially British theoretical work centered on the more familiar and more easily understood concept of vapor pressure.

**– – – –**

**Gibbs tries again**

Although van ‘t Hoff’s osmotic pressure equation ΠV = RT soon gained the status of a law, the gaseous theory that lay behind it remained clouded in controversy. In particular, van ‘t Hoff’s deduction of the proportionality between osmotic pressure and concentration was an analogy rather than a proof, since it made use of hypothetical considerations as to the cause of osmotic pressure. Following Ostwald’s proof that these were inadmissible, the gaseous theory began to look hollow. A better theory was needed.

This was provided in 1896 by the British physicist, Lord Rayleigh, whose proof was free of hypothesis but did make use of Avogadro’s law, thereby continuing to assert a parallelism between the properties of solutions and gases. Heavyweight opposition to this soon materialized from the redoubtable Lord Kelvin. In a letter to *Nature* (21 January 1897) he charged that the application of Avogadro’s law to solutions had “manifestly no theoretical foundation at present” and further contended that

*“No molecular theory can, for sugar or common salt or alcohol, dissolved in water, tell us what is the true osmotic pressure against a membrane permeable to water only, without taking into account laws quite unknown to us at present regarding the three sets of mutual attractions or repulsions: (1) between the molecules of the dissolved substance; (2) between the molecules of water; (3) between the molecules of the dissolved substance and the molecules of water.”*

Lord Kelvin’s letter in *Nature* elicited a prompt response from none other than Josiah Willard Gibbs in America. Twenty-one years had now passed since James Clerk Maxwell first tried to interest Europe in the concept of chemical potentials. In Kelvin’s letter, with its feisty attack on the gaseous theory, Gibbs saw the opportunity to try again.

In his letter to *Nature* (18 March 1897), Gibbs opined that *“Lord Kelvin’s very interesting problem concerning molecules which differ only in their power of passing a diaphragm, seems only to require for its solution the relation between density and pressure”*, and highlighted the advantage of using his potentials to express van ‘t Hoff’s law:

*“It will be convenient to use certain quantities which may be called the potentials of the solvent and of the solutum, the term being thus defined: – In any sensibly homogeneous mass, the potential of any independently variable component substance is the differential coefficient of the thermodynamic energy of the mass taken with respect to that component, the entropy and volume of the mass and the quantities of its other components remaining constant. The advantage of using such potentials in the theory of semi-permeable diaphragms consists partly in the convenient form of the condition of equilibrium, the potential for any substance to which a diaphragm is freely permeable having the same value on both sides of the diaphragm, and partly in our ability to express van’t Hoff law as a relation between the quantities characterizing the state of the solution, without reference to any experimental arrangement.”*

But once again, Gibbs and his chemical potentials failed to garner interest in Europe. His timing was also unfortunate, since British experimental research into osmosis was soon to be stimulated by the aristocrat-turned-scientist Lord Berkeley, and this in turn would stimulate a new band of British theoreticians, including AW Porter and HL Callendar, who would base their theoretical efforts firmly on vapor pressure.

**– – – –**

**Things Come Full Circle**

As the new century dawned, van ‘t Hoff cemented his reputation with the award of the very first Nobel Prize for Chemistry *“in recognition of the extraordinary services he has rendered by the discovery of the laws of chemical dynamics and osmotic pressure in solutions”.*

The osmotic pressure law was held in high esteem, and despite Lord Kelvin’s protestations, Britain was well disposed towards the Gaseous Theory of Solutions. The idea circulating at the time was that the refinements of the ideal gas law that had been shown to apply to real gases could equally well be applied to more concentrated solutions. As Lord Berkeley put it in the introduction to a paper communicated to the Royal Society in London in May 1904:

*“The following work was undertaken with a view to obtaining data for the tentative application of van der Waals’ equation to concentrated solutions. It is evidently probable that if the ordinary gas equation be applicable to dilute solutions, then that of van der Waals, or one of analogous form, should apply to concentrated solutions – that is, to solutions having large osmotic pressures.”*

Lord Berkeley’s landmark experimental studies on the osmotic pressure of concentrated solutions called renewed attention to the subject among theorists, who now had some fresh and very accurate data to work with. Alfred Porter at University College London attempted to make a more complete theory by considering the compressibility of a solution to which osmotic pressure was applied, while Hugh Callendar at Imperial College London combined the vapor pressure interpretation of osmosis with the hypothesis that osmosis could be described as vapor passing through a large number of fine capillaries in the semipermeable membrane. This was in 1908.

So seventeen years after Wilhelm Ostwald conclusively proved that hypothetical arguments as to the cause of osmotic pressure were inadmissible, things came full circle with hypothetical arguments once more being advanced as to the cause of osmotic pressure.

And as for Gibbs, his ideas were as far away as ever from British and European Science. The osmosis papers of both Porter (1907) and Callendar (1908) are substantial in referenced content, but nowhere do either of them make any mention of Gibbs or his explanation of osmosis on the basis of chemical potentials.

There is a special irony in this, since in Callendar’s case at least, the scientific papers of J Willard Gibbs were presumably close at hand. Perhaps even on his office bookshelf. Because that copy of Gibbs’ works shown in the header photo of this post – it’s a 1906 first edition – was Hugh Callendar’s personal copy, which he signed on the front endpaper.

**– – – –**

**Epilogue**

Throughout this post, I have made repeated references to that inspired piece of thinking by Wilhelm Ostwald which conclusively demonstrated that osmotic pressure must be independent of the nature of the membrane.

Ostwald’s reasoning is so lucid and compelling, that one wonders why it didn’t put an end to speculation on osmotic mechanisms. But it didn’t, and hasn’t, and probably won’t.

Here is how Ostwald presented the argument in his own *Lehrbuch der allgemeinen Chemie* (1891). Enjoy.

*“… it may be stated with certainty that the amount of pressure is independent of the nature of the membrane, provided that the membrane is not permeable by the dissolved substance. To understand this, let it be supposed that two separating partitions, A and B, formed of different membranes, are placed in a cylinder (fig. 17). Let the space between the membranes contain a solution and let there be pure water in the space at the ends of the cylinder. Let the membrane A show a higher pressure, P, and the membrane B show a smaller pressure, p. At the outset, water will pass through both membranes into the inner space until the pressure p is attained, when the passage of water through B will cease, but the passage through A will continue. As soon as the pressure in the inner space has been thus increased above p, water will be pressed out through B. The pressure can never reach the value P; water must enter continuously through A, while a finite difference of pressures is maintained. If this were realised we should have a machine capable of performing infinite work, which is impossible. A similar demonstration holds good if p>P ; it is, therefore, necessary that P=p; in other words, it follows necessarily that osmotic pressure is independent of the nature of the membrane.”*

(English translation by Matthew Pattison Muir)

**– – – –**

P Mander July 2015
