Understanding Uncertainty - league-tables
https://understandinguncertainty.org/taxonomy/term/191
Are the Brits really fatter than other Europeans?
https://understandinguncertainty.org/are-brits-really-fatter-other-europeans
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Lots of press reports in the last couple of days on how UK women are the fattest in Europe, for example in <a href="http://www.dailymail.co.uk/health/article-2066060/British-women-fattest-Europe-quarter-classed-obese.html">the Daily Mail</a> and on the <a href="http://www.bbc.co.uk/news/health-15901351">BBC News website</a>. I'm still in Berlin, and it was in the papers here too. The tabloid-style <a href="http://www.berliner-kurier.de/gesundheit/fett-liste-der-europaeer-mann--sind-die-dick--mann,7168818,11222942.html"><em>Berliner Kurier</em></a> went with the headline "Man, they are fat, man", while the <a href="http://www.n24.de/news/newsitem_7443830.html">N24 news service</a> went with "British and Maltese are the fattest Europeans". But is it another dodgy league table?</p>
<!--break--><p>
Well, yes, though for different reasons from those we've looked at <a href="http://understandinguncertainty.org/another-doubtful-league-table">here</a> or <a href="http://understandinguncertainty.org/lottery-league-tables">here</a>. And probably we shouldn't be blaming the reporters, because they made a tolerably good job of reproducing data in the <a href="http://epp.eurostat.ec.europa.eu/cache/ITY_PUBLIC/3-24112011-BP/EN/3-24112011-BP-EN.PDF">press release from Eurostat</a>, the EU statistics office, that they were reporting on. It does really say that 23.9% of UK women are obese (body mass index (BMI) over 30), and this is indeed the highest percentage out of all 19 countries they looked at. The figure for men is not much lower, 22.1%, and this is indeed the second highest (a bit less than Malta)...</p>
<p>...except there's some quite important small print in the press release. First, it points out that the data aren't for the whole of the UK at all, but just for England. The BBC article mentioned this, correctly pointing out that things aren't any better in terms of obesity in the other countries of the UK either. Then, Eurostat says in the same footnote that for England, "adult" means 16 and over, whereas it's 18 or over in all the other countries. Actually that's not going to make much difference to the percentages either, but it made me wonder <em>why</em> it should be different. The press release says that the data are from the European Health Interview Survey (EHIS), published by Eurostat, and that the EHIS aims to measure various things across the EU "on a harmonised basis". Using different age groupings in different countries doesn't sound very harmonised to me. What's going on?</p>
<p>There's another footnote in the press release, about the EHIS. All that does is provide a link to <a href="http://epp.eurostat.ec.europa.eu/statistics_explained/index.php/Overweight_and_obesity_-_BMI_statistics">another article on the Eurostat site</a> - but this one helpfully explains that the English data don't come from the EHIS survey at all, they come from the <a href="http://www.ic.nhs.uk/statistics-and-data-collections/health-and-lifestyles-related-surveys/health-survey-for-england">Health Survey for England</a>. (The Italian data aren't from EHIS either.)</p>
<p>Now the Health Survey for England isn't like many routine health surveys. Participants aren't just <em>asked</em> questions about their health and so on. If they agree, the interviewer actually measures their height and weight, and indeed a nurse comes and takes several other physical measurements of various kinds. It's these measurements that are used to work out the body mass index, and hence to provide data on how many people are overweight or obese.</p>
<p>In contrast, the EHIS survey (and indeed the survey that was used to provide data for Italy) did not take actual measurements, but simply asked people their height and weight.</p>
<p>This matters. The scales used in the English survey don't lie - they might not be utterly accurate in every way, but compared to asking people their weight and height, there are unlikely to be major biases. People do not always know exactly how much they weigh, and even if they do, they might tell the interviewer a rather smaller weight, or perhaps add a centimetre or two to their height. That would bring down the person's body mass index, and if it happens systematically, it will introduce a bias into the obesity figures. Furthermore, it's likely that the amount of bias will be different for men and women, and for different age groups, and possibly for different countries too. And, without having data from surveys using physical measurements in all the other countries, we don't even know how big these biases are likely to be.</p>
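<p>As a toy illustration of how small misreports shift the classification (the numbers here are invented, not data from any survey): a weight understated by a couple of kilograms plus a couple of extra centimetres of height can move someone from "obese" to merely "overweight".</p>

```python
# Hypothetical figures for illustration only - not data from any survey.
def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

true_bmi = bmi(82.0, 1.65)      # measured by a nurse: 30.1, just over the obesity threshold
reported_bmi = bmi(80.0, 1.67)  # 2 kg knocked off, 2 cm added: 28.7, merely overweight

print(round(true_bmi, 1), round(reported_bmi, 1))
```

<p>The measured person sits just above the BMI-30 obesity threshold; the self-reported version falls below it. If misreporting like this happens systematically, it is exactly the kind of bias described above.</p>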
<p>So the figures for the other 18 countries aren't really comparable with the English figures. There are biases, but we don't know how big. The Eurostat data say, for instance, that a very alarming 16.6% of young English women (aged 16-24) are obese, far greater than the figures (actually for ages 18-24) for all the other countries listed. (Malta is next with 10.7%, and most of the others are well below 5%.) Is this because English young women are really so much fatter, on average, than women elsewhere in Europe, or are young women more likely than, say, old men to knock a kilo or two off when an interviewer asks their weight?</p>
<p>This isn't really very comforting to us Brits, though. The percentages that are overweight are still scarily high, and the knowledge that, possibly, we're not top of this league after all, doesn't bring our percentages down. It's the other countries' figures that are likely to be subject to bias, not ours. Maybe the Maltese and the Latvians really are heavier than us, on average - but if so, that's because they are heavier than the survey said they were, not because we are any lighter. (And I really shouldn't conclude anything at all from my observation that the people I see on the street here in Berlin do seem less likely to be really large than the people I see on the street back in the UK. That observation is likely to be even more biased!)</p>
<p>Moral: it's worth reading the small print in press releases, even if they come from a respectable statistical agency. Things might not be as simple as they seem.</p>
</div></div></div><div class="field field-name-taxonomy-vocabulary-2 field-type-taxonomy-term-reference field-label-above"><div class="field-label">Free tags: </div><div class="field-items"><div class="field-item even"><a href="/taxonomy/term/191">league-tables</a></div><div class="field-item odd"><a href="/free-tags/body-mass-index">body mass index</a></div><div class="field-item even"><a href="/free-tags/eurostat">Eurostat</a></div><div class="field-item odd"><a href="/taxonomy/term/205">obesity</a></div></div></div><div class="field field-name-taxonomy-vocabulary-7 field-type-taxonomy-term-reference field-label-above"><div class="field-label">Levels: </div><div class="field-items"><div class="field-item even"><a href="/levels/level-1">level 1</a></div></div></div>Sun, 27 Nov 2011 18:31:20 +0000kevin3222 at https://understandinguncertainty.orghttps://understandinguncertainty.org/are-brits-really-fatter-other-europeans#commentsComparing hospitals
https://understandinguncertainty.org/node/1297
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>DJS, Times, 30th November 2009 </p>
<p>Will you be safe in the hands of the St Helens and Knowsley Hospitals NHS Trust? Well it depends what you read. If you consult the latest Dr Foster Hospital Guide, apparently you will be in one of England’s most unsafe hospitals. But the website of the official NHS regulator, the Care Quality Commission (CQC), says your hospital is rated as “Excellent” for Quality of Services. I’m a statistician whose methods are used by both Dr Foster and the CQC, and I’m confused, so heaven help the poor patients in St Helens. </p>
<p>Hospitals are not football teams that can easily be ranked in a league table, and measuring safety is complex and open to manipulation. That great statistician, Florence Nightingale, returned from the Crimea 150 years ago and instituted the first comparative audit of deaths in London hospitals, but in 1863 she wrote resignedly “we have known incurable cases discharged from one hospital, to which the deaths ought to have been accounted, and received into another hospital, to die there in a day or two after admission, thereby lowering the mortality rate of the first at the expense of the second”. </p>
<p>But how in modern times can two organisations come up with such different conclusions? The CQC’s rating depends partly on meeting targets which, whether you like them or not, are at least fairly measurable, but the “Excellent” for St Helens also means compliance with ‘core standards’ set by the Department of Health. These include, for example, the eloquent safety standard C01b (take a deep breath) “Healthcare organisations protect patients through systems that ensure that patient safety notices, alerts and other communications concerning patient safety which require action are acted upon within required timescales”. </p>
<p>Three thoughts spring to mind. First, who writes this stuff? Second, this is a measure of organisational process, and we have no idea whether it will prevent any actual accidents. Third, hospitals self-assess their compliance with these standards, just like a tax self-assessment form. It’s then up to the CQC to cross-check the claim against relevant bits of a vast mass of routine data, including patient complaints - the 10% of trusts which are found to be at most risk of ‘undeclared non-compliance’ (fibbing, in normal language) then get inspected. A random selection of hospitals gets inspected as well, and those that are caught out get ‘fined’ rating points.</p>
<p>It’s rather remarkable that this country has led the world in introducing an automated risk-based inspection for hospitals, similar to the way that the Inland Revenue screen tax self-assessments. But just as light-touch regulation of the financial world has for obvious reasons got itself a bad name, there is now likely to be a change in regime for hospitals. </p>
<p>In contrast to CQC, Dr Foster don’t do inspections and use few measures of process – their ratings are mainly driven by statistics. In particular, 6 of the 13 safety indicators concern death rates, in which the observed numbers of deaths are compared to the number that would be expected given the type of patients being treated.</p>
<p>Simply counting the bodies at first seems the obvious way to measure hospital quality. Certainly some dramatic improvements in death rates have been reported from hospitals in the news: Mid-Staffordshire has gone from 27% excess mortality in 2007 to 8% savings in deaths in 2008, while Basildon and Thurrock had 31% excess in the year up to March 2009 but now claim to be average. Maybe these hospitals really have suddenly started saving a miraculous number of lives. But in-hospital standardised mortality rates might also be lowered, quite appropriately, by accurate use of the code ‘admitted for palliative care’ (which increases the expected number of deaths), and sensitive movement of some terminally-ill patients to die out of hospital. We do not have to be as sceptical as Nightingale to realise that death rates are more malleable than we might think, and are a very blunt instrument for measuring quality.</p>
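<p>The observed-versus-expected comparison behind these percentages can be sketched in a few lines (a hedged illustration with made-up death counts, not the actual hospital figures):</p>

```python
# Made-up numbers for illustration only - not the actual hospital data.
def excess_mortality_pct(observed, expected):
    """Percentage by which observed deaths exceed the number expected
    given the type of patients treated: (observed/expected - 1) * 100."""
    return (observed / expected - 1) * 100

print(round(excess_mortality_pct(540, 425)))  # 27% excess deaths

# Coding more admissions as 'palliative care' raises the expected count,
# lowering the apparent excess without any change in actual deaths.
print(round(excess_mortality_pct(540, 500)))  # 8% excess deaths
```

<p>The point is that the denominator (expected deaths) is itself malleable, so the headline percentage can move substantially without any patient being treated differently.</p>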
<p>Dr Foster and CQC essentially get different ratings because they choose different indicators to put into a summary formula. But does it make any sense to produce a single rating for a complex institution like a hospital? Just like two football teams that are one point apart in the championship, the result can be swayed by trivial events: a few years ago my local Cambridge hospital dramatically dropped from 3 stars to 2 stars under the old star-rating system, and forensic analysis revealed this was due to just 4 too few junior doctors out of 400 being signed up to the New Deal in working hours. </p>
<p>It’s clear that naming, blaming and shaming gets headlines, which produces urgency and attention in hospital board rooms, and will have contributed to the little-reported 60% fall in both MRSA and C Difficile rates over the last 2 years. But trying to produce a single measure of ‘quality’ will inevitably lead to the sort of contradictions we’ve seen last week. </p>
<p>Anyway it’s all up in the air now. From next April each hospital will have to release its own ‘Quality Account’ that reports on local priorities for improvement – fine for local accountability, but someone also has to be making national comparisons and rapidly detect safety lapses using current centralised information. Doubtless new inspection methods will be developed by CQC: pre-announced formal inspections encourage as much careful preparation as royal visits, and so we might expect more roaming gangs of unannounced inspectors. </p>
<p>And the patients at St Helens need not worry: closer examination reveals that their low rating by Dr Foster is largely driven by some missing data on safety reporting. But nobody reading the headlines will have realised this. The CQC is no longer legally obliged to publish an overall rating, so let’s hope we can get away from over-simplistic and unjust league tables. </p>
<p>David Spiegelhalter is Winton Professor of the Public Understanding of Risk at the University of Cambridge. He has collaborated on statistical methods that are used by both the Care Quality Commission and Dr Foster Intelligence. </p>
</div></div></div><div class="field field-name-taxonomy-vocabulary-2 field-type-taxonomy-term-reference field-label-above"><div class="field-label">Free tags: </div><div class="field-items"><div class="field-item even"><a href="/taxonomy/term/191">league-tables</a></div></div></div>Sat, 07 May 2011 19:09:55 +0000david1297 at https://understandinguncertainty.orgCanadian National Lottery Animations
https://understandinguncertainty.org/node/251
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Canadian Lottery animations based on Lotto649 data. </p>
<!--break--><p>
<a href="/files/LottoCaFixed2.swf" target="_blank">Full Screen Version</a></p>
<div id="flashcontent0">
</div>
<script type="text/javascript">
<!--//--><![CDATA[// ><!--
var so = new SWFObject("/files/LottoCaFixed2.swf", "/files/LottoCaFixed2.swf", "550", "430", 8, "#FFFFFF");
so.write("flashcontent0");
//--><!]]>
</script><p>
<a href="/files/LottoRunsCa.swf" target="_blank">Full Screen Version</a></p>
<div id="flashcontent1">
</div>
<script type="text/javascript">
<!--//--><![CDATA[// ><!--
var so = new SWFObject("/files/LottoRunsCa.swf", "/files/LottoRunsCa.swf", "550", "430", 8, "#FFFFFF");
so.write("flashcontent1");
//--><!]]>
</script></div></div></div><div class="field field-name-taxonomy-vocabulary-2 field-type-taxonomy-term-reference field-label-above"><div class="field-label">Free tags: </div><div class="field-items"><div class="field-item even"><a href="/taxonomy/term/191">league-tables</a></div></div></div><div class="field field-name-taxonomy-vocabulary-7 field-type-taxonomy-term-reference field-label-above"><div class="field-label">Levels: </div><div class="field-items"><div class="field-item even"><a href="/levels/level-1">level 1</a></div></div></div><div class="field field-name-upload field-type-file field-label-hidden"><div class="field-items"><div class="field-item even"><table class="sticky-enabled">
<thead><tr><th>Attachment</th><th>Size</th> </tr></thead>
<tbody>
<tr class="odd"><td><span class="file"><img class="file-icon" alt="File" title="application/x-shockwave-flash" src="/modules/file/icons/application-octet-stream.png" /> <a href="https://understandinguncertainty.org/sites/understandinguncertainty.org/files/files/LottoRunsCa.swf" type="application/x-shockwave-flash; length=95992" title="LottoRunsCa.swf">LottoRunsCa.swf</a></span></td><td>93.74 KB</td> </tr>
<tr class="even"><td><span class="file"><img class="file-icon" alt="File" title="application/x-shockwave-flash" src="/modules/file/icons/application-octet-stream.png" /> <a href="https://understandinguncertainty.org/sites/understandinguncertainty.org/files/files/LottoCaFixed.swf" type="application/x-shockwave-flash; length=94927" title="LottoCaFixed.swf">LottoCaFixed.swf</a></span></td><td>92.7 KB</td> </tr>
<tr class="odd"><td><span class="file"><img class="file-icon" alt="File" title="application/x-shockwave-flash" src="/modules/file/icons/application-octet-stream.png" /> <a href="https://understandinguncertainty.org/sites/understandinguncertainty.org/files/files/LottoCaFixed2.swf" type="application/x-shockwave-flash; length=94929" title="LottoCaFixed2.swf">LottoCaFixed2.swf</a></span></td><td>92.7 KB</td> </tr>
</tbody>
</table>
</div></div></div>Sun, 24 May 2009 15:29:59 +0000gmp26251 at https://understandinguncertainty.org
May the best team win
https://understandinguncertainty.org/node/61
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="/node/56" title="Luck or skill?" class="level2">Luck or skill?</a> suggested using the average number of points per match as an estimate of an underlying quality measure, leading to confidence intervals and uncertainties about ranks. Here we discuss a simple mathematical model that gave rise to those results.</p>
<!--break--><h2>What distribution of points would you get by chance alone?</h2>
<p>Suppose all the teams were of equal standard, so that each match resulted in a home win with probability $p_H$, a draw with probability $p_D$, and an away win with probability $p_A$, where $p_H + p_D + p_A = 1$. Let $X_H$ be the number of points a team wins in a home match. Then we can calculate the mean and variance of $X_H$ to be<br />
$$m_H = E(X_H) = 3p_H + p_D,$$<br />
$$v_H = V(X_H) = E(X_H^2) - m_H^2 = 9p_H + p_D -m_H^2.$$<br />
Similarly, if $X_A$ is the number of points a team wins in an away match, then<br />
$$m_A = E(X_A) = 3p_A + p_D,$$<br />
$$v_A = V(X_A) = E(X_A^2) - m_A^2 = 9p_A + p_D -m_A^2.$$</p>
<p>If in a season there are $N/2$ home and $N/2$ away matches, then the total points $T$ at the end of the season is the sum of all the points in the individual matches, and so has mean and variance<br />
$$m_T = E(T) = \frac{N}{2} ( m_H+m_A) = \frac{N}{2} \left( 3(p_H+p_A) + 2p_D\right) = \frac{N}{2}(3-p_D),$$<br />
$$v_T = V(T) = \frac{N}{2} ( v_H+v_A) = \frac{N}{2}\left( 9(p_H+p_A) + 2p_D - m_H^2 -m_A^2 \right) = \frac{N}{2}(9-7p_D - m_H^2 -m_A^2).$$<br />
For the observed proportions $p_H = 0.48, p_D = 0.26, p_A =0.26$, we obtain $m_T = 52.1, v_T= 61$.</p>
<p>The variance of the actual league points at the end of the season was 239, compared with the theoretical variance of 61 were the teams all of equal quality and the results of the matches essentially due to chance. Since 61/239 &#8776; 0.26, we conclude that about 26% of the variance in the Premier League points is due to chance. The standard deviation of the observed points, which is the square root of the variance, is roughly 15, while that of the random points is around 8. This means that the observed points have about twice the spread of the random points, so about half the spread of points is attributable to chance alone.</p>
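<p>The mean and variance above can be reproduced in a few lines (a sketch in Python; the proportions are those quoted in the text):</p>

```python
# The observed proportions quoted in the text.
p_H, p_D, p_A = 0.48, 0.26, 0.26   # home win, draw, away win
N = 38                              # matches per team in a season

m_H = 3 * p_H + p_D                 # mean points from a home match
v_H = 9 * p_H + p_D - m_H ** 2      # variance of points from a home match
m_A = 3 * p_A + p_D                 # mean points from an away match
v_A = 9 * p_A + p_D - m_A ** 2      # variance of points from an away match

m_T = (N / 2) * (m_H + m_A)         # expected season total
v_T = (N / 2) * (v_H + v_A)         # variance of the season total

print(round(m_T, 1), round(v_T))    # 52.1 61
```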
<h2>How sure can we be about the true quality of each team?</h2>
<p>The end-of-season point-total describes how well each team has performed, but could also be viewed as an estimate of some measure of underlying quality. This is perhaps best seen by dividing the total by 38 to get 'mean points per game'. Their final score on this scale can be viewed as an estimate of a (theoretical) quantity: the mean points they would score per game were the season to continue indefinitely. By estimating the variance of this estimator and making some broad assumptions based on the <em>central limit theorem</em>, we can place a confidence interval around the observed mean points per game which expresses our uncertainty about the true underlying 'quality' of each team.</p>
<p>We therefore repeat the above analysis for individual teams, using the breakdown of their results at the end of the season to estimate their home-win / draw /away-win probabilities. For example, when Arsenal played at home they won 12, drew 6, and lost 1, and when playing away they won 7, drew 5, and lost 7. This total of 19 wins and 11 draws gave them a mean of 68/38 = 1.79 points per game, which we shall label as $m$. The variance of the home points is 109.1 , the variance of the away points is 66.1, so the total variance is $175.2$, which leads to an estimated variance of the mean-points-per-game of $v = 175.2/38^2 = 0.12$ . This can be used to place a confidence interval around $m$ with bounds $m \pm 1.96*\sqrt{v}$, giving an interval of 1.11 to 2.47 . The intervals for all the teams are shown below.</p>
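<p>Taking the quoted variance figures as given (they are not recomputed here from the raw match results), the Arsenal interval follows directly:</p>

```python
import math

points, games = 68, 38              # Arsenal: 19 wins and 11 draws
total_var = 109.1 + 66.1            # home + away variances, as quoted in the text

m = points / games                  # mean points per game, about 1.79
v = total_var / games ** 2          # estimated variance of that mean
half_width = 1.96 * math.sqrt(v)    # Normal-approximation 95% interval

print(round(m - half_width, 2), round(m + half_width, 2))   # 1.11 2.47
```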
<p><img src="/files/premier-confints_0.png" width="500" height="480" alt="premier league confidence intervals" class="imgCentre" /></p>
<h2>How sure can we be about the appropriate rank of each team?</h2>
<p>Once we take the final mean number of points per game as an estimate of an unknown 'quality' measure, it then becomes reasonable to view the observed rank of the team in the league table as an estimate of the true underlying rank of the team. Intervals expressing our uncertainty of this true rank are easiest to obtain by simulation: essentially each interval is treated as representing a Normal distribution, and then at each iteration of the simulation a value is drawn from each distribution, and the values for the 20 teams are ranked and the rank of each team recorded. Repeating this process for 1000 simulations gives a distribution over the plausible ranks for each team, which are summarised by the 95% intervals shown below.</p>
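<p>A minimal sketch of this rank simulation (the means and standard deviations passed in below are placeholders, not the actual 2006-07 estimates):</p>

```python
import random

def rank_intervals(means, sds, n_sims=1000, seed=0):
    """95% intervals for each team's true rank: draw one quality value per
    team from its Normal estimate, rank the draws, and repeat n_sims times."""
    rng = random.Random(seed)
    n = len(means)
    ranks = [[] for _ in range(n)]
    for _ in range(n_sims):
        draws = [rng.gauss(means[i], sds[i]) for i in range(n)]
        order = sorted(range(n), key=lambda i: -draws[i])   # rank 1 = best
        for rank, team in enumerate(order, start=1):
            ranks[team].append(rank)
    intervals = []
    for r in ranks:
        r.sort()
        intervals.append((r[int(0.025 * n_sims)], r[int(0.975 * n_sims) - 1]))
    return intervals

# Three teams with overlapping quality estimates: their rank intervals overlap too.
print(rank_intervals([1.8, 1.6, 1.0], [0.35, 0.35, 0.35]))
```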
<p><img src="/files/premier-ranks.png" width="500" height="480" alt="premier league ranks" class="imgCentre" /></p>
<p>These ranking techniques can be adapted to assess the chances of results of specific matches, generally predicting actual scores rather than win/lose/draw directly. Naturally these ideas are widely used by sports betting organisations.</p>
<h2>Further reading and links</h2>
<p>Marshall and Spiegelhalter (1998) <bib>Marshall1998</bib> give a tutorial on using these techniques for ranking the success rates of IVF clinics.</p>
</div></div></div><div class="field field-name-taxonomy-vocabulary-7 field-type-taxonomy-term-reference field-label-above"><div class="field-label">Levels: </div><div class="field-items"><div class="field-item even"><a href="/levels/level-3">level 3</a></div></div></div><div class="field field-name-taxonomy-vocabulary-2 field-type-taxonomy-term-reference field-label-above"><div class="field-label">Free tags: </div><div class="field-items"><div class="field-item even"><a href="/taxonomy/term/100">ranking</a></div><div class="field-item odd"><a href="/taxonomy/term/106">confidence</a></div><div class="field-item even"><a href="/taxonomy/term/107">variance</a></div><div class="field-item odd"><a href="/taxonomy/term/179">football</a></div><div class="field-item even"><a href="/taxonomy/term/191">league-tables</a></div><div class="field-item odd"><a href="/taxonomy/term/230">sport</a></div></div></div>Mon, 03 Dec 2007 11:47:05 +0000david61 at https://understandinguncertainty.orghttps://understandinguncertainty.org/node/61#commentsLuck or skill?
https://understandinguncertainty.org/node/56
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="/node/45" title="Premier League" class="level1">Premier League</a> showed how the points in the Premier League table developed over the 2006-2007 season, but what distribution of points would you get by chance alone?</p>
<!--break--><p>Here is an animation of the 2006-2007 season.</p>
<div id="flashcontent0">You need to <a href="http://www.adobe.com/go/getflashplayer">install the Adobe Flash Player</a> to see the animation.
</div>
<script type="text/javascript">
var so = new SWFObject("/files/Premier0607.swf", "/files/Premier0607.swf", "100%", "422", 8, "#FFFFFF");
so.write("flashcontent0");
</script><p><a href="/files/Premier0607.swf">Click to enlarge the animation</a></p>
<p>Click on <em>show theoretical distribution</em> to see a theoretical distribution for the points if all the games were decided by chance alone. What does this mean? Overall, 48% of matches are home wins, 26% draws, and 26% away wins - we call this the "48/26/26" law. Suppose all the teams were indistinguishable in their skills (which might happen in the unlikely case that before each weekend, players for each team were selected at random from all the players in the Premier League). We can then calculate the expected point distribution that would happen if all matches were decided according to the 48/26/26 law.</p>
<p> At the end of the season some teams clearly lie outside the theoretical distribution that would apply were all the teams the same, and so we can conclude there are genuine differences between the teams. However, a certain amount of the spread of the final points is explainable by chance alone: in <a href="/node/61" title="May the best team win" class="level3"><em>May the best team win</em> </a> we show that around half the spread of points is due to chance.</p>
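<p>The chance-only model is easy to simulate: decide each of a team's 19 home and 19 away matches by the 48/26/26 law and repeat over many notional seasons. The mean and variance of the simulated totals come out close to the theoretical values (about 52.1 and 61) derived in <em>May the best team win</em>:</p>

```python
import random
import statistics

HOME = (0.48, 0.26)   # P(win), P(draw) for the team when playing at home
AWAY = (0.26, 0.26)   # P(win), P(draw) for the team when playing away

def season_total(rng):
    """Points for one team over 19 home and 19 away matches,
    every match decided by the 48/26/26 law."""
    total = 0
    for p_win, p_draw in (HOME,) * 19 + (AWAY,) * 19:
        u = rng.random()
        total += 3 if u < p_win else (1 if u < p_win + p_draw else 0)
    return total

rng = random.Random(0)
totals = [season_total(rng) for _ in range(20000)]
# Close to the theoretical mean 52.1 and variance 61.
print(round(statistics.mean(totals), 1), round(statistics.variance(totals), 1))
```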
<h2>How sure can we be about the true quality of each team?</h2>
<p>The final ranking depends on the total number of points, or equivalently the <em>average number of points per game</em>. We might use this as a measure of the underlying 'quality' of a team, and treat it as some true underlying characteristic that we are trying to estimate based on a limited sample of 38 games. Using some statistical techniques described in <a href="/node/61" title="May the best team win" class="level3"><em>May the best team win</em> </a> we can derive 95% confidence intervals for the true underlying 'quality': the average number of points per game that would be achieved were the season to continue indefinitely (and all the teams stay at a constant level of skill).</p>
<p><img src="/files/premier-confints_0.png" width="500" height="480" alt="premier league confidence intervals" class="imgCentre" /></p>
<p>The confidence intervals above show that only Manchester United and Chelsea can be reasonably claimed to be better than average, while we can be confident that Charlton and Watford are below average.</p>
<h2>How sure can we be about the appropriate rank of each team?</h2>
<p>We can go further and assess the plausible 'true rank' of each team, in terms of their underlying quality as measured by average points per game. <a href="/node/61" title="May the best team win" class="level3"><em>May the best team win</em> </a> shows how to do this.</p>
<p><img src="/files/premier-ranks.png" width="500" height="480" alt="premier league ranks" class="imgCentre" /></p>
<p>There is huge uncertainty as to the true ranks of the teams: this is typical of many applications of league tables. Manchester United and Chelsea again come up as the only teams we can be reasonably sure are in the top half, while only Watford can be confidently placed in the bottom half.</p>
<p>We can also consider the probability that the season's winner, Manchester United, really was the best team: we assess this to be 53%, compared to 31% for Chelsea. These figures can be interpreted as the probability that each team would actually end up top of the league table were the season to continue indefinitely.</p>
<p>Were the teams that were relegated really the three worst teams? We assess the probability of being in the bottom 3 for 'quality' as 77% for Watford, 47% for Charlton Athletic, and 30% for Sheffield United. Wigan and Fulham narrowly escaped relegation, and in fact we assess for each a 28% probability of truly being in the bottom 3 teams.</p>
<h2>Further reading and links</h2>
<p>Alan Lee has carried out a similar ranking analysis in <em>Modeling Scores in the Premier League: Is Manchester United Really the Best?</em> in <a href="http://www.amazon.co.uk/exec/obidos/ASIN/0898715873" title="Statistics in Sports" class="external">Anthology of Statistics in Sports</a></p>
<p> <em><a href="http://www.amazon.co.uk/Beating-Odds-Hidden-Mathematics-Sport/dp/1905798121/ref=sr_1_3?ie=UTF8&s=books&qid=1196681524&sr=1-3" title="beating the odds -amazon" class="external">Beating the Odds: The Hidden Mathematics of Sport</a></em> by Rob Eastaway and John Haigh covers, among other topics, football league tables and comparisons with simulations.</p>
</div></div></div><div class="field field-name-taxonomy-vocabulary-7 field-type-taxonomy-term-reference field-label-above"><div class="field-label">Levels: </div><div class="field-items"><div class="field-item even"><a href="/levels/level-2">level 2</a></div></div></div><div class="field field-name-taxonomy-vocabulary-2 field-type-taxonomy-term-reference field-label-above"><div class="field-label">Free tags: </div><div class="field-items"><div class="field-item even"><a href="/taxonomy/term/100">ranking</a></div><div class="field-item odd"><a href="/taxonomy/term/179">football</a></div><div class="field-item even"><a href="/taxonomy/term/191">league-tables</a></div></div></div>Wed, 28 Nov 2007 16:18:21 +0000david56 at https://understandinguncertainty.orghttps://understandinguncertainty.org/node/56#commentsFootball Leagues
https://understandinguncertainty.org/node/45
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><img src="/files/100px/icon-cartoon-man-football.jpg" width="100" height="100" alt="cartoon football player" class="article-icon" />The Premier League is the main English football league, with 20 teams each playing home and away against every other team, making 38 matches for each team in a season and 380 matches altogether. Teams are awarded 3 points for a win, 1 for a draw and 0 for a loss, and league position is decided on total points, with ties broken by goal difference (goals for minus goals against). At the end of the season the bottom 3 teams are relegated.</p>
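<p>The scoring and tie-break rules are simple to express in code. Here is a minimal Python sketch; the teams and scores are invented purely for illustration:</p>

```python
def points(goals_for, goals_against):
    """Premier League scoring: 3 points for a win, 1 for a draw, 0 for a loss."""
    if goals_for > goals_against:
        return 3
    if goals_for == goals_against:
        return 1
    return 0

def league_table(results):
    """results: list of (home, away, home_goals, away_goals) tuples.
    Rank teams by total points, with ties broken by goal difference."""
    total, goal_diff = {}, {}
    for home, away, hg, ag in results:
        for team, gf, ga in ((home, hg, ag), (away, ag, hg)):
            total[team] = total.get(team, 0) + points(gf, ga)
            goal_diff[team] = goal_diff.get(team, 0) + gf - ga
    return sorted(total, key=lambda t: (total[t], goal_diff[t]), reverse=True)

# Invented mini-league: A beats B 2-0, B and C draw 1-1, A wins 3-0 away at C
results = [("A", "B", 2, 0), ("B", "C", 1, 1), ("C", "A", 0, 3)]
print(league_table(results))  # ['A', 'B', 'C']
```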
<!--break--><p>
We can use the history of the Premier League table over the 2006-2007 season to see how the spread of points develops.</p>
<p>The animation below shows how each of the teams performed in the 380 games played in the 2006-2007 season. Click on the image to start, and click on the fast-forward button to speed up the animation. Click on <em>sort</em> to rank the teams by points total.</p>
<!--break--><div id="flashcontent0">You need to <a href="http://www.adobe.com/go/getflashplayer">install the Adobe Flash Player</a> to see the animation.
</div>
<script type="text/javascript">
var so = new SWFObject("/files/Premier0607.swf", "/files/Premier0607.swf", "100%", "422", 8, "#FFFFFF");
so.write("flashcontent0");
</script><p><a href="/files/Premier0607.swf">Click to enlarge the animation</a></p>
<p>If you click on <em>show histogram</em>, you can see the current distribution of the total points for each team, and how it changes during the season. Check how two teams start to pull away from the rest, while other teams struggle near the bottom. But how much of this spread is due to luck and how much due to skill?</p>
<p><a href="/node/56" title="Luck or skill?" class="level2"><em>Luck or skill?</em> </a>considers what sort of distribution of counts we would expect if the teams really were all the same, and hence how much of the spread of points in the final league rankings can be explained by chance alone. We shall also assess the probability that the season's winner, Manchester United, really was the best team, and that the teams that were relegated (Watford, Charlton Athletic and Sheffield United) really were the three worst teams.</p>
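<p>You can get a feel for how much spread chance alone produces with a quick simulation. In the Python sketch below all 20 teams are identical; the home-win and draw probabilities are illustrative assumptions, not values taken from the analysis itself:</p>

```python
import random

def simulate_season(n_teams=20, p_home_win=0.48, p_draw=0.26, rng=random):
    """One season of identical teams: every pairing plays home and away,
    each result drawn from fixed probabilities. Returns sorted points totals."""
    pts = [0] * n_teams
    for home in range(n_teams):
        for away in range(n_teams):
            if home == away:
                continue
            u = rng.random()
            if u < p_home_win:              # home win: 3 points to home team
                pts[home] += 3
            elif u < p_home_win + p_draw:   # draw: 1 point each
                pts[home] += 1
                pts[away] += 1
            else:                           # away win: 3 points to away team
                pts[away] += 3
    return sorted(pts, reverse=True)

random.seed(1)
season = simulate_season()
print(season[0] - season[-1])  # points gap between luckiest and unluckiest team
```

Even with no difference in skill at all, a single simulated season typically shows a substantial gap between "top" and "bottom".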
<h2>The 2007-2008 season</h2>
<p>Here's an update to the animation showing the 2007-2008 season.</p>
<div id="flashcontent1">You need to <a href="http://www.adobe.com/go/getflashplayer">install the Adobe Flash Player</a> to see the animation.
</div>
<script type="text/javascript">
var so = new SWFObject("/files/Premier0708.swf", "/files/Premier0708.swf", "100%", "422", 8, "#FFFFFF");
so.write("flashcontent1");
</script><p><a href="/files/Premier0708.swf">Click to enlarge the animation</a></p>
<h2>Further reading and links</h2>
<p>Spreadsheets with the full league history can be downloaded from the <a href="http://www.football-data.co.uk/englandm.php" title="Football league downloads" class="external">football-data.co.uk</a> website.</p>
</div></div></div><div class="field field-name-taxonomy-vocabulary-7 field-type-taxonomy-term-reference field-label-above"><div class="field-label">Levels: </div><div class="field-items"><div class="field-item even"><a href="/levels/level-1">level 1</a></div></div></div><div class="field field-name-taxonomy-vocabulary-2 field-type-taxonomy-term-reference field-label-above"><div class="field-label">Free tags: </div><div class="field-items"><div class="field-item even"><a href="/taxonomy/term/3">animation</a></div><div class="field-item odd"><a href="/taxonomy/term/100">ranking</a></div><div class="field-item even"><a href="/funstuff">Fun Stuff</a></div><div class="field-item odd"><a href="/taxonomy/term/179">football</a></div><div class="field-item even"><a href="/taxonomy/term/191">league-tables</a></div><div class="field-item odd"><a href="/taxonomy/term/230">sport</a></div></div></div>Fri, 16 Nov 2007 16:54:19 +0000david45 at https://understandinguncertainty.orghttps://understandinguncertainty.org/node/45#commentsIs the Lottery biased?
https://understandinguncertainty.org/node/41
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><img src="/files/images/LottoIcon3.jpg" width="100" height="100" alt="Chi Squared Scatter" class="imgLeft" />In <a href="/node/40" title="Lottery Expectations" class="level2"><em>Lottery Expectations</em></a> we looked at the observed and theoretical distributions for the total count of times each number has come up, and the gap between a number's appearances. Here we explain the mathematics behind the theoretical distribution of counts, and how to check for true randomness, and derive the theoretical distribution for gaps.</p>
<!--break--><h2>The distribution for the number of times each number has been drawn</h2>
<p>We first need to introduce some notation. Let the number of balls chosen at each draw be $m=6$, and the number of balls in the 'bag' be $M=49$. Each number between 1 and 49 therefore has a $p=m/M=6/49$ chance of being chosen at a particular draw. Therefore after $D$ draws, the total number of times each ball has been drawn has a <em>Binomial </em>distribution with parameters $p$ and $D$.</p>
<p>This distribution has mean $Dp$ and variance $Dp(1-p)$, and can be approximated by a Normal distribution with matching mean and variance. This is what is done in the animation.</p>
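<p>As a quick numerical check of these formulas, a short Python sketch for the UK lottery after $D=1240$ draws:</p>

```python
import math

m, M, D = 6, 49, 1240          # balls per draw, balls in the bag, draws
p = m / M                      # chance a given number appears in one draw
mean = D * p                   # Binomial mean Dp, about 151.8
var = D * p * (1 - p)          # Binomial variance Dp(1-p)
sd = math.sqrt(var)            # about 11.5

def binom_pmf(k):
    """Exact Binomial probability that a given number appears k times."""
    return math.comb(D, k) * p**k * (1 - p)**(D - k)

def normal_pdf(x):
    """Normal approximation with matching mean and variance."""
    return math.exp(-(x - mean)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

print(round(mean, 1), round(sd, 1))
print(round(binom_pmf(152), 4), round(normal_pdf(152), 4))
```

The exact Binomial probability and its Normal approximation agree closely near the mean, which is why the smooth curve in the animation is a good guide.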
<h2>Testing for bias in the lottery</h2>
<p>There are many test statistics that are designed to identify different ways in which the lottery draws may not be entirely random, such as favouring odd or even numbers and so on <bib>Haigh1997</bib>. We consider the simplest possible: an adapted chi-square test.</p>
<p>After $D$ draws, we expect any particular number $j$ to have occurred $E_j = Dp = Dm/M$ times, which in the UK lottery corresponds to $6D/49 \approx D/8$. So, for example, after 1000 draws we would expect each number to have been chosen around 125 times. If after $D$ draws we add up the total number of times each number has occurred, and label these totals $O_1,...,O_{49}$, then a naive chi-squared statistic compares the observed and expected counts using the standard formula<br />
$$ X^2_{\rm naive} =\sum_{j=1}^{j=M} \frac{(O_j - E_j)^2}{E_j},$$<br />
which would be compared to a theoretical $\chi^2$ distribution with $M-1=48$ degrees of freedom. For those not familiar with chi-squared tests: this statistic will be large if the observed counts are very different from the expected, since then the numerators $(O_j - E_j)^2$ will be very big. However, we would never expect the observed counts to match the expected exactly, because of chance variation, and it turns out that if the numbers really are drawn at random, and all the balls drawn were statistically independent, the statistic should be approximately 48.</p>
<p>However, as Haigh (1997) points out, this would only be the case if all $mD$ individual ball-draws were independent, which is not the case as 6 balls are drawn <em>without replacement</em> at each lottery-draw. Hence it is impossible, for example, for a particular number to be drawn as ball 2 and 6 within a single draw. This lack of independence requires an adjustment to the chi-squared statistic above, so that the correct statistic is<br />
$$ X^2 = \frac{(M-1)}{(M-m)}X^2_{\rm naive};$$<br />
hence the adjustment factor multiplies the naive chi-squared statistic by a factor 48/43 $\approx$ 1.12. This adjusted statistic is shown in the lottery animation.</p>
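<p>Both the naive and the adjusted statistics are straightforward to compute. A minimal Python sketch, with invented counts used purely to exercise the code:</p>

```python
def adjusted_chi_squared(observed, m=6):
    """Chi-squared statistic for lottery counts, with the (M-1)/(M-m)
    correction for drawing m balls without replacement at each draw."""
    M = len(observed)                          # number of balls, here 49
    expected = sum(observed) / M               # E_j = Dm/M, the same for every j
    naive = sum((o - expected)**2 / expected for o in observed)
    return (M - 1) / (M - m) * naive           # factor 48/43 when M=49, m=6

# Invented counts: 49 numbers, each drawn close to 150 times
counts = [150] * 25 + [148] * 12 + [152] * 12
print(round(adjusted_chi_squared(counts), 2))  # 0.71
```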
<p>If we group the lottery draws in sets of 50, we can plot the $X^2$ statistics for successive groups. This is shown below, with the lower and upper 2.5% points of a $\chi^2_{48}$ distribution drawn in, respectively 30.8 and 69.0.</p>
<p> <img src="/files/images/lottery-chi-squared.png" width="500" height="467" alt="lottery - chi-squared series" class="imgCentre" /></p>
<p>We see that all the 24 statistics lie inside the central 95% interval for the $\chi^2_{48}$ distribution.</p>
<h2>The distribution of gaps between a number's appearances</h2>
<p>Suppose a specific number $j$ has just been drawn. Then suppose that we label successive draws a ‘success’ if $j$ is drawn, and a ‘failure’ otherwise: the chance of a ‘success’ is defined as $p$, which in this case is 6/49. Let $X$ be the number of failures before the first success, <em>i.e.</em> the ‘gap’ before $j$ is drawn again. The chance of $X=0$ is the same as the chance of $j$ appearing in the next draw, which is $p = 6/49 \approx 0.12$. The chance of a gap of 1 is the same as the chance of a single 'failure' and then a 'success', which is $(1-p)p = 43/49 \times 6/49 \approx 0.11$, and so on. Therefore the chance of $X$ taking on any particular value $x$ is the same as the chance of observing a series of $x$ ‘failures’ followed by a single ‘success’, so that<br />
$$ {\rm Pr}(X=x) = (1-p)^x p.$$<br />
This is the <a href="http://en.wikipedia.org/wiki/Geometric_distribution" title="Geometric distribution on Wikipedia" class="external">Geometric distribution</a>: note that sometimes this distribution is defined as the time until the first success, which here corresponds to $Y=X+1$. The mean of this distribution is $1/p - 1 = 49/6 - 1 \approx 7.17$, so the average gap length is around 7.</p>
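<p>These quantities are easy to verify numerically with a few lines of Python:</p>

```python
p = 6 / 49                     # chance a given number appears in one draw

def pr_gap(x):
    """Geometric probability of a gap of exactly x draws: (1-p)^x * p."""
    return (1 - p)**x * p

mean_gap = 1 / p - 1           # 43/6, about 7.17
print(round(pr_gap(0), 2), round(pr_gap(1), 2), round(mean_gap, 2))  # 0.12 0.11 7.17
```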
<h2>The maximum gap in the whole lottery history</h2>
<p>We observed a maximum gap of 72 for ball number 17 between February and November 2000, which seems extraordinarily long. Is this surprising? </p>
<p>The chance of any particular gap being at least $x$, ie ${\rm Pr}(X \ge x)$, is simply the chance of observing $x$ failures in a row, so that<br />
$${\rm Pr}(X \ge x) = (1-p)^x.$$<br />
Therefore the chance of observing a gap as long as 72 is $(43/49)^{72} = 0.000082$, or around 1 in 12,000, which seems very rare indeed. If after number 17 was drawn in February 2000, we had specifically said 'let's wait until 17 appears again', then we would have been justifiably amazed at having to wait 73 draws until it did appear again, and might even suspect it had been left out of the bag! However we did not pre-specify this particular gap as being interesting, and simply chose it as the largest of 7440 observed gaps. Therefore a crude estimate of the chance of such a rare event occurring, when there are 7440 opportunities for it to occur, is $0.000082 \times 7440 = 0.61$. A more accurate estimate is obtained by noting that, if there are $n$ independent gaps,</p>
<p>$$\begin{array}{rl}{\rm Pr(maximum\ gap} \ge x)&= 1- {\rm Pr(maximum\ gap} < x)\\ &= 1 - \left(1 - (1-p)^x\right)^n,\end{array}$$<br />
since the probability that all gaps are less than $x$ is the product of $n$ identical probabilities that a single gap is less than $x$. Hence we would estimate the probability of a maximum gap being at least 72 as $1 - (1 - 0.000082)^{7440} = 0.46$. This result suggests that 72 is not in the least surprising.</p>
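<p>Plugging the lottery's numbers into these formulas in Python:</p>

```python
p = 6 / 49
x, n = 72, 7440                # longest observed gap; number of observed gaps

p_single = (1 - p)**x          # chance one pre-specified gap is at least 72
p_max = 1 - (1 - p_single)**n  # chance the largest of n gaps is at least 72

print(round(p_single, 6), round(p_max, 2))  # 0.000082 0.46
```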
<p>However, as with the $\chi^2$ statistic above, this distribution theory is not quite correct as there is some dependence between the gaps induced by there being exactly 6 numbers selected at each draw. We can therefore conduct a simulation with the results shown before and reproduced below.</p>
<p> <img src="/files/images/maxgaps_0.png" width="500" height="467" alt="distribution of maximum gaps" class="imgCentre" /></p>
<p>In 1000 simulations of 1240 draws, the mean largest gap was 72, the largest was 154, and 42% of the simulated maximum gaps were 72 or more (showing that the approximation assuming independence, 0.46, is quite accurate). Therefore our maximum gap of 72 is almost exactly what one would expect. </p>
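<p>A simulation along these lines needs only a few lines of Python. The sketch below uses the standard <code>random</code> module rather than the software that produced the figure, so its percentage will wobble slightly around the values quoted:</p>

```python
import random

def longest_gap(n_draws=1240, rng=random):
    """Simulate one lottery history of n_draws draws of 6 balls from 49 and
    return the longest gap between appearances of the same number
    (gaps before a number's first appearance are included)."""
    last_seen = {num: -1 for num in range(1, 50)}
    longest = 0
    for draw in range(n_draws):
        for num in rng.sample(range(1, 50), 6):
            longest = max(longest, draw - last_seen[num] - 1)
            last_seen[num] = draw
    return longest

random.seed(0)
gaps = [longest_gap() for _ in range(300)]
print(sum(g >= 72 for g in gaps) / len(gaps))  # close to the 42% quoted
```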
<h2>Further reading and links</h2>
<p>This discussion is primarily based on Haigh (1997) whose notation we use.</p>
</div></div></div><div class="field field-name-taxonomy-vocabulary-7 field-type-taxonomy-term-reference field-label-above"><div class="field-label">Levels: </div><div class="field-items"><div class="field-item even"><a href="/levels/level-3">level 3</a></div></div></div><div class="field field-name-taxonomy-vocabulary-2 field-type-taxonomy-term-reference field-label-above"><div class="field-label">Free tags: </div><div class="field-items"><div class="field-item even"><a href="/taxonomy/term/50">gambling</a></div><div class="field-item odd"><a href="/taxonomy/term/100">ranking</a></div><div class="field-item even"><a href="/taxonomy/term/102">lottery</a></div><div class="field-item odd"><a href="/taxonomy/term/191">league-tables</a></div><div class="field-item even"><a href="/taxonomy/term/245">chance</a></div></div></div>Wed, 07 Nov 2007 17:28:13 +0000david41 at https://understandinguncertainty.orghttps://understandinguncertainty.org/node/41#commentsLottery expectations
https://understandinguncertainty.org/node/40
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><img src="/files/LottoIcon2.jpg" width="100" height="100" alt="Level2 Lottery" class="imgLeft" /><a href="/node/39" title="National Lottery" class="level1"><em>National lottery</em></a> shows how many times each of the numbers has come up in the main National Lottery draw, and what were the gaps between appearances of each number. Here we look at whether the observed distribution of the number of times each of the 49 numbers has come up fits with what would be expected with a truly random draw, and whether the gaps also correspond to what might be expected.</p>
<!--break--><h2>The number of appearances of each number</h2>
<p>If the lottery balls are being chosen at random, then the distribution of the number of times each ball comes up should follow the theoretical shape shown in white. Use the playback controls below the animation to restart, pause, or fast forward the draws. </p>
<div id="flashcontent0">You need to <a href="http://www.adobe.com/go/getflashplayer">install the Adobe Flash Player</a> to see the animation.
</div>
<script type="text/javascript">
var so = new SWFObject("/files/Lotto2.swf", "/files/Lotto2.swf", "100%", "422", 8, "#FFFFFF");
so.write("flashcontent0");
</script><p><a href="/files/Lotto2.swf">Click to enlarge the animation</a></p>
<p>Of course the actual distribution is more jagged, but the theoretical distribution allows us to see whether the 'leading' number is surprisingly far in front. Below we see the final observed distribution with an approximate theoretical distribution superimposed. The fit looks good, suggesting, as we would expect, that there is no systematic preference for particular numbers.</p>
<p><img src="/files/images/lottery-final-dist_0.png" width="500" height="499" alt="lottery final distribution" class="imgCentre" /></p>
<p>In <a href="/node/41" title="Is the Lottery biased?" class="level3"><em>Is the Lottery biased?</em></a> we consider some of the mathematics behind the theoretical distribution of counts, and how we can check if the observed distribution is in conflict with the theoretical one.</p>
<h2>Are the gaps what we would expect?</h2>
<p>If you run the animation below, then if the lottery balls are being chosen at random, the distribution of the gaps should follow the theoretical shape in white when you click on 'Show histogram' and then 'Show theoretical'. This theoretical distribution is known as a <em>Geometric </em>distribution and is derived in <a href="/node/41" title="Is the Lottery biased?" class="level3">Is the Lottery biased?</a>. </p>
<div id="flashcontent1">You need to <a href="http://www.adobe.com/go/getflashplayer">install the Adobe Flash Player</a> to see the animation.
</div>
<script type="text/javascript">
var so = new SWFObject("/files/LottoRuns.swf", "/files/LottoRuns.swf", "100%", "422", 8, "#FFFFFF");
so.addVariable("speed", "10");
so.write("flashcontent1");
</script><p><a href="/files/LottoRuns.swf">Click to enlarge the animation</a></p>
<p>After 1240 lottery draws, with 6 main balls being drawn each time, $6\times 1240 = 7440$ numbers have been drawn, and so there are 7440 gaps between two draws of the same number (the gaps until the first time each number is drawn are included in this total). The histogram below shows the distribution of all these 7440 gaps, with the theoretical geometric distribution superimposed. The gaps are divided into those below and above 40, so that the large gaps are clearly displayed: the theoretical distribution seems to fit the observed distribution well, although there are inevitably some jagged bits in the tail.</p>
<p><img src="/files/images/gap-distribution_0.png" width="500" height="499" alt="lottery gap distribution" class="imgCentre" /></p>
<p>The longest gap observed is 72, for number 17, which appeared on draw 435 on 23rd February 2000, but did not appear again until draw 508 on 4th November 2000. How surprising is it to get a gap as large as this? After a <em>specific </em>occurrence of a <em>particular </em>number, this is extremely surprising: there is only an 8 in 100,000 chance of such an extreme result. However, when we take into account that there were 7440 gaps observed and this was the largest one, it turns out that it is not surprising at all. In fact 72 is almost exactly the average maximum gap one would expect in a series of 1240 lottery draws!</p>
<p>Alternatively we can use the power of the computer to simulate 'fictional' lotteries, by picking 6 different numbers at random from 1 to 49, and then repeating this process as long as we want. The software contains 'random number generators' that should ensure that each number really does have an equal chance of being chosen. We simulated 1000 full lottery histories and found the longest gap in each history. These 1000 longest gaps had the distribution shown below: 420 out of 1000 were 72 or more. </p>
<p> <img src="/files/images/maxgaps_0.png" width="500" height="467" alt="distribution of maximum gaps" class="imgCentre" /></p>
<p>As another example of using simulations, looking backwards from 20th October 2007, we saw that ball 14 was not drawn until the 53rd draw. The graph below shows the results of simulating 1000 lotteries until all the numbers had come up. In 60 of these simulations we had to wait until at least 53 draws before all the numbers had come up, showing the time we had to wait for ball 14 was not really very surprising.</p>
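<p>The waiting time until all 49 numbers have appeared can be simulated the same way. This Python sketch uses the standard <code>random</code> module, so its count will differ a little from the 60-in-1000 figure quoted above:</p>

```python
import random

def draws_until_all_seen(rng=random):
    """Count lottery draws (6 balls from 49) until every number has appeared."""
    unseen = set(range(1, 50))
    draws = 0
    while unseen:
        draws += 1
        unseen -= set(rng.sample(range(1, 50), 6))
    return draws

random.seed(0)
waits = [draws_until_all_seen() for _ in range(1000)]
print(sum(w >= 53 for w in waits) / 1000)  # around 0.06
```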
<p><img src="/files/images/lottery-firstalldrawn-simulation_0.png" width="500" height="467" alt="lottery first alldrawn simulation" class="imgCentre" /></p>
<p>In <a href="/node/41" title="Is the Lottery biased?" class="level3"><em>Is the Lottery biased?</em></a> we consider the mathematics behind the theoretical distribution of gaps.</p>
</div></div></div><div class="field field-name-taxonomy-vocabulary-7 field-type-taxonomy-term-reference field-label-above"><div class="field-label">Levels: </div><div class="field-items"><div class="field-item even"><a href="/levels/level-2">level 2</a></div></div></div><div class="field field-name-taxonomy-vocabulary-2 field-type-taxonomy-term-reference field-label-above"><div class="field-label">Free tags: </div><div class="field-items"><div class="field-item even"><a href="/taxonomy/term/3">animation</a></div><div class="field-item odd"><a href="/taxonomy/term/50">gambling</a></div><div class="field-item even"><a href="/taxonomy/term/100">ranking</a></div><div class="field-item odd"><a href="/taxonomy/term/101">evidence</a></div><div class="field-item even"><a href="/taxonomy/term/102">lottery</a></div><div class="field-item odd"><a href="/taxonomy/term/191">league-tables</a></div></div></div>Wed, 07 Nov 2007 16:08:45 +0000david40 at https://understandinguncertainty.orghttps://understandinguncertainty.org/node/40#commentsNational Lottery
https://understandinguncertainty.org/node/39
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><img src="/files/LottoIcon1.jpg" width="100" height="100" alt="Lottery" class="article-icon" />The UK National Lottery began on 19th November 1994 and there had been 1240 draws up to 20th October 2007. The jackpot prize is won by choosing in advance the 6 numbers that will be drawn from a set of balls numbered from 1 to 49. We can use the history of the lottery to illustrate many aspects of the theory of probability: how each draw is individually unpredictable, and yet the overall history shows predictable patterns; how a 'league table' of numbers can be created that appears to show some numbers are preferentially drawn, and yet the table is completely spurious; how to test whether the balls are truly being drawn at random; how extremely unlikely events will occur if you wait long enough, and so on.</p>
<!--break--><div class="txtLeft">The animation below shows how often each of the 49 numbers came up in the first 1240 draws. Use the playback controls below the animation to restart, pause, or fast forward the draws.
</div>
<div id="flashcontent0">You need to <a href="http://www.adobe.com/go/getflashplayer">install the Adobe Flash Player</a> to see the animation.
</div>
<script type="text/javascript">
var so = new SWFObject("/files/Lotto1.swf", "/files/Lotto1.swf", "100%", "422", 8, "#FFFFFF");
so.write("flashcontent0");
</script><p> <a href="/files/Lotto1.swf">Click to enlarge the animation</a></p>
<p>Starting from 1994, note how the 'leader' changes, until one number seems to gain a substantial lead. </p>
<p>If you click on 'Show histogram', you can create the current distribution showing how often each of the 49 numbers has come up. Press 'Start dropping' to see how that histogram arises. The distribution seems quite spread out, with some numbers appearing much more often than others, but in fact this apparent spread should be purely due to chance. <a href="/node/40" title="Lottery Expectations" class="level2"><em>Lottery Expectations</em> </a>considers what sort of distribution of total appearances of each number we would expect when lottery balls are chosen at random. </p>
<h2>What about the gaps between numbers?</h2>
<p>Using the animation above, work backwards from October 2007 and see how long you have to wait until the last number appears. Do you think this is surprising? We can use probability theory and simulations to explore how long we have to wait for a number to come up.</p>
<p>The animation below shows the gap between each time a number comes up.</p>
<div id="flashcontent1">You need to <a href="http://www.adobe.com/go/getflashplayer">install the Adobe Flash Player</a> to see the animation.
</div>
<script type="text/javascript">
var so = new SWFObject("/files/LottoRuns.swf", "/files/LottoRuns.swf", "100%", "422", 8, "#FFFFFF");
so.write("flashcontent1");
</script><p><a href="/files/LottoRuns.swf">Click to enlarge the animation</a></p>
<p>Can you see the longest gap that has occurred? Look carefully from the start of 2000. Do you think this is surprising? <a href="/node/40" title="Lottery Expectations" class="level2"><em>Lottery Expectations</em> </a>considers what sort of gaps between numbers we would expect when lottery balls are chosen at random.</p>
<h2>Further reading and links</h2>
<p>This is based on an idea by <a href="https://www.dcs.qmul.ac.uk/~norman/papers/probability_puzzles/league_tables.html" title="Lottery distribution" class="external">Fenton</a> of treating the lottery results as a league table.</p>
<p>A spreadsheet with the full lottery history can be downloaded from the main <a href="http://www.national-lottery.co.uk/player/p/results/resultsHistory/resultsHistoryDownload.do" title="Lottery history downloads" class="external">UK National Lottery site</a>.</p>
</div></div></div><div class="field field-name-taxonomy-vocabulary-7 field-type-taxonomy-term-reference field-label-above"><div class="field-label">Levels: </div><div class="field-items"><div class="field-item even"><a href="/levels/level-1">level 1</a></div></div></div><div class="field field-name-taxonomy-vocabulary-2 field-type-taxonomy-term-reference field-label-above"><div class="field-label">Free tags: </div><div class="field-items"><div class="field-item even"><a href="/taxonomy/term/3">animation</a></div><div class="field-item odd"><a href="/taxonomy/term/50">gambling</a></div><div class="field-item even"><a href="/taxonomy/term/100">ranking</a></div><div class="field-item odd"><a href="/taxonomy/term/102">lottery</a></div><div class="field-item even"><a href="/funstuff">Fun Stuff</a></div><div class="field-item odd"><a href="/taxonomy/term/191">league-tables</a></div></div></div>Wed, 07 Nov 2007 15:59:07 +0000david39 at https://understandinguncertainty.orghttps://understandinguncertainty.org/node/39#comments