## Probability and Poker Dice

Posted: August 1, 2019 in probability

CarnotCycle is a thermodynamics blog but occasionally it takes five for recreational purposes

Poker dice is played with five dice each with playing card images – A K Q J 10 9 – on the six faces. There are a total of 6 × 6 × 6 × 6 × 6 = 7776 outcomes from throwing these dice, of which 94% are scoring hands and only 6% are busts.

Here are the numbers of outcomes for each hand and their probabilities:

| Hand | Outcomes | Probability |
| --- | --- | --- |
| 5 of a kind | 6 | 0.08% |
| 4 of a kind | 150 | 1.93% |
| Full house | 300 | 3.86% |
| Straight | 240 | 3.09% |
| 3 of a kind | 1200 | 15.43% |
| 2 pairs | 1800 | 23.15% |
| 1 pair | 3600 | 46.30% |
| Bust | 480 | 6.17% |

And here is the data presented as a pie chart

The percentage share of 5 of a kind (0.08%) is omitted due to its small size

A noticeable feature of the data is that the number of outcomes for 1 pair is exactly twice that for 2 pairs, and likewise the number of outcomes for Full house is exactly twice that for 4 of a kind. But when outcomes are calculated in the conventional way, it is not obvious why this is so.

Taking the first case, the conventional calculation runs as follows:

1 pair – There are 6C1 ways to choose which face will form the pair and 5C2 ways to choose which two of the five dice show it; the remaining three dice must then show three different faces drawn from the other five, which can be done in 5C3 × 3! ways. Total: 6 × 10 × 60 = 3600 outcomes

2 pairs – There are 6C2 ways to choose which two faces will form the pairs, 5C2 ways to choose which of the five dice show the first pair and 3C2 ways to choose which of the remaining three dice show the second pair. The last die must then show one of the other four faces, giving 4C1 choices. Total: 15 × 10 × 3 × 4 = 1800 outcomes

The conventional calculation gives no obvious indication of why there are twice as many outcomes for a 1-pair hand as for a 2-pair hand.

But there is an alternative method of calculation which does make the difference clear.

– – – –

A different approach

Instead of starting with the component parts, consider the hand as a whole and count the number (n) of different faces on view. The number of ways to choose those n faces from six is 6Cn. Now multiply this by the number of ways of grouping the faces, which is given by n!/(s1! s2! …) where each s counts the face groups sharing the same size. The number of face combinations for the hand is 6Cn × n!/(s1! s2! …).
Since the dice are independent variables, each face combination is subject to permutation, taking repetition of faces into account, with the permutation-with-repetition formula 5!/(n1! n2! …) where n1, n2 etc. are the numbers of dice showing each face.

Thus the total number of outcomes for any poker dice hand can be calculated with a single formula:

total outcomes = 6Cn × n!/(s1! s2! …) × 5!/(n1! n2! …)

For 1 pair, for example, this gives 15 × 4 × 60 = 3600.

It is easy to see from the table why there are twice as many outcomes for a 1-pair hand as for a 2-pair hand. The number of face combinations (6Cn × n!/(s1! s2! …)) is the same in both cases, but there are twice as many dice permutations for 1 pair. Similarly with Full house and 4 of a kind: the number of face combinations is the same, but there are twice as many dice permutations for Full house.
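Both the conventional and whole-hand calculations are easy to verify by brute force. A minimal Python sketch (Python chosen here purely for illustration) that enumerates all 7776 throws and classifies each by its pattern of repeated faces:

```python
from itertools import product
from collections import Counter

# Classify every possible throw of five six-sided dice by its
# multiset of face-group sizes, e.g. one pair -> (2, 1, 1, 1)
counts = Counter()
for hand in product(range(6), repeat=5):
    sizes = tuple(sorted(Counter(hand).values(), reverse=True))
    counts[sizes] += 1

print(counts[(2, 1, 1, 1)])  # one pair:      3600
print(counts[(2, 2, 1)])     # two pairs:     1800
print(counts[(3, 2)])        # full house:     300
print(counts[(4, 1)])        # four of a kind: 150
```

The 2:1 ratios noted above (3600 vs 1800 outcomes, and 300 vs 150) drop straight out of the enumeration.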

– – – –

P Mander April 2018

## Boltzmann and Statistical Thermodynamics

Posted: November 1, 2018 in history of science, physics, thermodynamics

I can’t think of a better introduction to this post than Ludwig Boltzmann gave in his Vorlesungen über Gastheorie (Lectures on Gas Theory, 1896):

“General thermodynamics proceeds from the fact that, as far as we can tell from our experiences up to now, all natural processes are irreversible. Hence according to the principles of phenomenology, the general thermodynamics of the second law is formulated in such a way that the unconditional irreversibility of all natural processes is asserted as a so-called axiom … [However] general thermodynamics (without prejudice to its unshakable importance) also requires the cultivation of mechanical models representing it, in order to deepen our knowledge of nature—not in spite of, but rather precisely because these models do not always cover the same ground as general thermodynamics, but instead offer a glimpse of a new viewpoint.”

Today, the work of Ludwig Boltzmann (1844-1906) is considered among the finest in physics. But in his own lifetime he faced considerable hostility from those of his contemporaries who did not believe in the atomic hypothesis. As late as 1900, the kinetic-molecular theory of heat developed by Maxwell and Boltzmann was being vigorously attacked by a school of scientists including Wilhelm Ostwald, who argued that since mechanical processes are reversible and heat conduction is not, thermal phenomena cannot be explained in terms of hidden, internal mechanical variables.

Boltzmann refuted this argument. Mechanical processes, he pointed out, are irreversible if the number of particles is sufficiently large. The spontaneous mixing of two gases is a case in point; it is known from experience that the process cannot spontaneously reverse – mixed gases don’t unmix. Today we regard this as self-evident, but in Boltzmann’s time his opponents did not believe in atoms or molecules; they considered matter to be continuous. So the attacks on Boltzmann’s theories continued.

Fortunately, this did not deter Boltzmann from pursuing his ideas, at least not to begin with. He saw that spontaneous processes could be explained in terms of probability, and that a system of many particles undergoing spontaneous change would assume – other things being equal – the most probable state, namely the one with the maximum number of arrangements. And this gave him a new way of viewing the equilibrium state.

One can see Boltzmann’s mind at work, thinking about particle systems in terms of permutations, in this quote from his Lectures on Gas Theory:

“From an urn, in which many black and an equal number of white but otherwise identical spheres are placed, let 20 purely random drawings be made. The case that only black spheres are drawn is not a hair less probable than the case that on the first draw one gets a black sphere, on the second a white, on the third a black, etc. The fact that one is more likely to get 10 black spheres and 10 white spheres in 20 drawings than one is to get 20 black spheres is due to the fact that the former event can come about in many more ways than the latter. The relative probability of the former event as compared to the latter is the number 20!/10!10!, which indicates how many permutations one can make of the terms in the series of 10 white and 10 black spheres, treating the different white spheres as identical, and the different black spheres as identical. Each one of these permutations represents an event that has the same probability as the event of all black spheres.”
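Boltzmann's number 20!/10!10! is simply the binomial coefficient C(20, 10); his arithmetic can be checked in a couple of lines (Python used here for illustration):

```python
import math

# Number of distinct orderings of 10 black and 10 white spheres,
# treating spheres of the same colour as identical
ways = math.factorial(20) // (math.factorial(10) * math.factorial(10))
print(ways)               # 184756
print(math.comb(20, 10))  # same value via the binomial coefficient
```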

– – – –

By analyzing the ways in which systems of particles distribute themselves, and the various constraints to which particle assemblies are subject, important links came to be established between the statistical properties of assemblies and their bulk thermodynamic properties.

Boltzmann’s contribution in this regard is famously commemorated in the formula inscribed on his tombstone: S = k log W. There is powerful new thinking in this equation. While the classical thermodynamic definition of entropy by Rankine and Clausius was expressed in terms of temperature and heat exchange, Boltzmann gave entropy – and its tendency to increase in natural processes – a new explanation in terms of probability. If a particle system is not in its most probable state then it will change until it is, and an equilibrium state is reached.

– – – –

P Mander April 2016

## What determines the moment at which a radioactive atom decays?

CarnotCycle is a thermodynamics blog but occasionally its enthusiasm spills over into other subjects, as is the case here.
– – – –

When one considers the great achievements in radioactivity research made at the start of the 20th century by Ernest Rutherford and his team at the Victoria University, Manchester it seems surprising how little progress they made in finding an answer to the question posed above.

They knew that radioactivity was unaffected by any agency applied to it (even temperatures as low as 20K), and since the radioactive decay law discovered in 1902 by Rutherford and Soddy was an exponential function associated with probabilistic behavior, it was reasonable to think that radioactivity might be a random process. Egon von Schweidler’s work pointed firmly in this direction, and the Geiger-Nuttall relation, formulated by Hans Geiger and John Nuttall at the Manchester laboratory in 1911 and reformulated in 1912 by Richard Swinne in Germany, laid a mathematical foundation on which to construct ideas. Yet despite these pointers, Rutherford wrote in 1912 that “it is difficult to offer any explanation of the causes operating which lead to the ultimate disintegration of the atom”.

The phrase “causes operating which lead to” indicates that Rutherford saw the solution in terms of cause and effect. Understandably so, since he came from an age where probability was regarded as a measure of uncertainty about exact cause, rather than something reflecting a naturally indeterministic process. C.P. Snow once said of Rutherford, “He thought of atoms as though they were tennis balls”. And therein lay the essence of his problem: he didn’t have the right kind of mind to answer this kind of question.

But someone else did, namely the pioneer who introduced the term radioactivity and gave it a quantifiable meaning – Maria Sklodowska, better known under her married name Marie Curie.

– – – –

Mme. Curie’s idea

The 2nd Solvay Conference (1913): La structure de la matière (The structure of matter)

When all the great men of science (and one woman) convened for the second Solvay Conference in 1913, the hot topic of the day was the structure of the atom. Hans Geiger and Ernest Marsden at Rutherford’s Manchester lab had recently conducted their famous particle scattering experiment, enabling Rutherford to construct a model of the atom with a central nucleus where its positive charge and most of its mass were concentrated. Rutherford and his student Thomas Royds had earlier conducted their celebrated experiment which identified the alpha particle as a helium nucleus, so the attention now focused on trying to explain the process of alpha decay.

It was Marie Curie who produced the most fruitful idea, foreshadowing the quantum mechanical interpretation developed in the 1920s. Curie suggested that alpha decay could be likened to a particle bouncing around inside a box with a small hole through which the particle could escape. This would constitute a random event; with a large number of boxes these events would follow the laws of probability, even though the model was conceptually based on simple kinetics.

Now it just so happened that a probability distribution based on exactly this kind of random event had already been described in an academic paper, published in 1837 and rather curiously entitled Recherches sur la probabilité des jugements en matière criminelle et matière civile (Research on the probability of judgments in criminal and civil cases). The author was the French mathematician Siméon Denis Poisson (1781-1840).

– – – –

The Poisson distribution

At the age of 57, just three years before his death, Poisson turned his attention to the subject of court judgements, and in particular to miscarriages of justice. In probabilistic terms, Poisson was considering a large number of trials (excuse the pun) involving just two outcomes – a correct or an incorrect judgement. And with many years of court history on the public record, Poisson had the means to compute a time-averaged figure for the thankfully rare judicial failures.

In his 1837 paper Poisson constructed a model which regarded an incorrect judgement as a random event which did not influence any other subsequent judgement – in other words it was an independent random event. He was thus dealing with a random variable in the context of a binomial experiment with a large number of trials (n) and a small probability (p), whose product (pn) he asserted was finite and equal to µ, the mean number of events occurring in a given number of dimensional units (in this case, time).

In summary, Poisson started with the binomial probability distribution generated by expanding (q + p)^n

where p is the probability of success and q is the probability of failure. Successive terms of the binomial expansion give the probability of the event occurring exactly r times in n trials:

P(r) = nCr p^r q^(n−r)

Asserting µ = pn, he evaluated P(r) as n goes to infinity and found that

P(r) = µ^r e^(−µ) / r!

This is the general representation of each term in the Poisson probability distribution

e^(−µ) (1 + µ + µ^2/2! + µ^3/3! + …)

which can be seen from the exponential series e^µ = 1 + µ + µ^2/2! + µ^3/3! + …, guaranteeing that the terms sum to unity: e^(−µ) × e^µ = 1

As indicated above, the mean µ is the product of the mean per unit dimension and the number of dimensional units. In the case of radioactivity, µ = λt where λ is the decay constant and t is the number of time units
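Poisson's limiting result is easy to confirm numerically: holding µ = pn fixed while n grows, the binomial probabilities converge on µ^r e^(−µ)/r!. A minimal sketch (the values of µ and r are arbitrary, chosen only for illustration):

```python
import math

mu, r = 2.0, 3  # illustrative mean and event count

# Poisson probability of exactly r events
poisson = mu**r * math.exp(-mu) / math.factorial(r)

# Binomial probability with p = mu/n, for increasing n
for n in (10, 100, 10_000):
    p = mu / n
    binom = math.comb(n, r) * p**r * (1 - p)**(n - r)
    print(n, round(binom, 6))

print(round(poisson, 6))  # the limiting value
```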

If we set t equal to the half-life t½, the mean µ will be λt½ = ln 2. Mapping probabilities for the first few terms of the distribution yields P(0) = 0.500, P(1) = 0.347, P(2) = 0.120, P(3) = 0.028, …

Unlike the binomial distribution, the Poisson distribution is not symmetric; the maximum does not correspond to the mean. In the case of µ = ln 2 the probability of no decays (r = 0) is exactly one half, as can be seen from

P(0) = e^(−µ) = e^(−ln 2) = ½
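With µ = ln 2, the first few terms of the distribution can be tabulated directly; a quick sketch:

```python
import math

mu = math.log(2)  # mean number of decays per atom over one half-life

for r in range(5):
    P = mu**r * math.exp(-mu) / math.factorial(r)
    print(r, round(P, 4))

# The r = 0 term is exact: e^(-ln 2) = 1/2
```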

At this point we turn to another concept introduced by Poisson in his paper, which was taken further by the Russian mathematician P.L. Chebyshev – namely the law of large numbers. In essence, this law says that if the probability of an event is p, the observed relative frequency of the event approaches p as the number of independent trials increases.

In the case of radioactive decay, the number of independent trials (atoms) is extremely large: a µg sample of Cesium 137 for example will contain around 10^15 nuclei. In the case of µ = ln2 the law of large numbers means that the average number of atoms remaining intact after the half-life period will be half the number of atoms originally present in the sample.
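The law of large numbers at work can be seen in a simple Monte Carlo sketch: give each atom an independent probability ½ of surviving one half-life, and the surviving fraction hugs ½ ever more tightly as the sample grows (N here is arbitrary, far smaller than a real sample's ~10^15 nuclei):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible
N = 1_000_000

# Each atom independently survives one half-life with probability 1/2
survivors = sum(1 for _ in range(N) if random.random() < 0.5)
print(survivors / N)  # very close to 0.5
```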

The Poisson distribution correctly accounts for half-life behavior, and has been successfully applied to counting rate experiments and particle scattering. There is thus a body of evidence to support the notion that radioactive decay is a random event to which the law of large numbers applies, and is therefore not a phenomenon that requires explanation in terms of cause and effect.

– – – –

Geiger and Nuttall

Despite Ernest Rutherford’s protestations that atomic disintegration defied explanation, it was in fact Rutherford who took the first step along the path that would eventually lead to a quantum mechanical explanation of α-decay. In 1911 and again in 1912, Rutherford communicated papers by two of his Manchester co-workers, physics lecturer Hans Geiger (of Geiger counter fame) and John Nuttall, a graduate student.

Rutherford’s team at the Physical Laboratories was well advanced with identifying radioactive decay products, several of which were α-emitters. It had been noticed that α-emitters with more rapid decay rates had greater α-particle ranges. Geiger and Nuttall investigated this phenomenon, and when they plotted the logarithms of the decay constants (they called them transformation constants) against the logarithms of the corresponding α-particle ranges for decay products in the uranium and actinium series they got this result (taken from their 1911 paper):

This implies the existence of a relationship log λ = A + B log R, where A has a characteristic value for each series and B has the same value for both series. Curiously, Geiger and Nuttall did not express the straight lines in mathematical terms in either of their papers; they were more interested in using the lines to calculate the immeasurably short periods of long-range α-emitters. But they made reference in their 1912 paper to somebody who had “recently shown that the relation between range and transformation constant can be expressed in another form”.

That somebody was the German physicist Richard Swinne (1885-1939), who sent a paper entitled Über einige zwischen den radioaktiven Elementen bestehende Beziehungen (On some relationships between the radioactive elements) to Physikalische Zeitschrift, which the journal received on Tuesday 5th December 1911 and published in volume XIII, 1912.

The other form that Swinne had found, which he claimed to represent the experimental data at least as well as the (unstated) formula of Geiger and Nuttall, was log λ = a + bv^n, where a and b are constants and v is the particle velocity.

When it came to n, Swinne was rangefinding: he tried various values and found that “n kann am besten gleich 1 gesetzt werden” (n can best be set equal to 1); he was thus edging towards what we now call the Geiger-Nuttall law, namely that the logarithm of the α-emitter's half-life is, to a good approximation, a linear function of the reciprocal square root of the α-particle's kinetic energy.

– – – –

Gurney and Condon, and Gamow

The potential well diagram in Gurney and Condon’s article

In 1924, the British mathematician Harold Jeffreys developed a general method of approximating solutions to linear, second-order differential equations. This method, rediscovered as the WKB approximation in 1926, was applied to the Schrödinger equation first published in that year and resulted in the discovery of the phenomenon known as quantum tunneling.

It was this strange effect, by which a particle with insufficient energy to surmount a potential barrier can effectively tunnel through it (the dotted line DB) that was seized upon in 1928 by Ronald Gurney and Edward Condon at Princeton – and independently by George Gamow at Göttingen – as a way of explaining alpha decay. Gurney and Condon’s explanation of alpha emission was published in Nature in an article entitled Wave Mechanics and Radioactive Disintegration, while Gamow’s considerably more academic (and mathematical) paper Zur Quantentheorie des Atomkernes was published in Zeitschrift für Physik.

In the quantum mechanical treatment, the overall rate of emission (i.e. the decay constant λ) is the product of a frequency factor f – the rate at which an alpha particle appears at the inside wall of the nucleus – and a transmission coefficient T, which is the (independent) probability that the alpha particle tunnels through the barrier. Thus

λ = fT

At this point it is instructive to recall Marie Curie’s particle-in-a-box idea, a concept which involves the product of two quantities: a large number of escape attempts and a small probability of escape.

The frequency factor f – or escape attempt rate – is estimated as the particle velocity v divided by the distance across the nucleus (2R), where R is the radius:

f = v/2R with v = √(2(V0 + Qα)/µ)

Here, V0 is the potential well depth, Qα is the alpha particle kinetic energy and µ is the reduced mass. The escape attempt rate is quite large, usually of the order of 10^21 per second. By contrast, the probability of alpha particle escape is extremely small. In calculating a value for T, Gamow introduced the Gamow factor 2G, where

2G = (2/ħ) ∫ √(2µ(V(r) − Qα)) dr

the integral being taken across the barrier between the classical turning points, so that T = e^(−2G).

Typically the Gamow factor is very large (2G = 60–120), which makes T very small (T = 10^-55 to 10^-27).
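The extreme sensitivity of the decay rate to the Gamow factor can be illustrated with rough numbers. A short sketch, assuming the order-of-magnitude escape attempt rate f ≈ 10^21 per second quoted above and T = e^(−2G):

```python
import math

f = 1e21  # escape attempts per second (order of magnitude from the text)

for twoG in (60, 90, 120):
    T = math.exp(-twoG)           # tunnelling probability per attempt
    lam = f * T                   # decay constant, lambda = f * T
    half_life = math.log(2) / lam # seconds
    print(twoG, T, half_life)
```

Doubling the Gamow factor from 60 to 120 swings the half-life from roughly a day to around 10^23 years, which is why α-emitter lifetimes span so many orders of magnitude.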

Combining the equations gives

λ = f e^(−2G)

or, taking logarithms and noting that for a given decay series the Gamow factor varies essentially as 1/√Qα,

log λ = a − b/√Qα

which is the Geiger-Nuttall law.

The work of Gurney, Condon and Gamow provided a convincing theoretical explanation of the Geiger-Nuttall law on the basis of quantum mechanics and Marie Curie’s hunch, and put an end to the classical notions of Rutherford’s generation that radioactive decay required explanation in terms of cause and effect.

So to return to the question posed at the head of this post – What determines the moment at which a radioactive atom decays? – the answer is chance. And the law of large numbers.

– – – –

An important consequence

George Gamow and John Cockcroft

The successful application of quantum tunneling to alpha particle emission had an important consequence, since it suggested to Gamow that the same idea could be applied in reverse, i.e. that projectile particles with lower energy might be able to penetrate the nucleus through quantum tunneling. This led Gamow to suggest to John Cockcroft, who was conducting atom-smashing experiments with protons, that protons of more moderate energies could be used. Gamow's suggestion proved correct, and the success of these trials ushered in a new era of intensive development in nuclear physics.

– – – –

Links to original papers mentioned in this post

G. Gamow (1928) Zur Quantentheorie des Atomkernes, Zeitschrift für Physik; 51: 204-212

H. Geiger and J.M. Nuttall (1911) The ranges of the α particles from various radioactive substances and a relation between range and period of transformation, Phil Mag; 22: 613-621
https://archive.org/stream/londonedinburg6221911lond#page/612/mode/2up

H. Geiger and J.M. Nuttall (1912) The ranges of α particles from uranium, Phil Mag; 23: 439-445
https://archive.org/stream/londonedinburg6231912lond#page/438/mode/2up

R.W. Gurney and E.U. Condon (1928) Wave Mechanics and Radioactive Disintegration, Nature; 122 (Sept. 22): 439
https://www.nature.com/articles/122439a0

R. Swinne (1912) Über einige zwischen den radioaktiven Elementen bestehende Beziehungen, Physikalische Zeitschrift; XIII: 14-21

– – – –

P Mander August 2017