The Tiger That Isn't


by Andrew Dilnot


  The simple questions to establish the truth ought to be easy to answer: How many operations were there? How many deaths? How does that compare with others? Simple? The inquiry took three years.

  Audrey Lawrence was one of the Bristol inquiry team, an expert in data quality. We asked her about the attention given to proper record keeping and the data quality for surgeon performance at Bristol. First, who kept the records?

  'We found that we could get raw data for the UK cardiac surgeons register collected on forms since 1987, stored in a doctor's garage. It was nothing to do with the Department of Health, that's the point. The forms were collected centrally by one doctor, out of personal interest, and he entered the data in his own records and kept it in box files in the garage.' There was no other central source of data about cardiac operations and their outcomes.

  Next, how reliable was that data?

  'My own experience of gathering data in hospitals was that it was probably not going to be accurate. We were very concerned about the quality of the data. All we had were these forms, and so we went round the individual units to see what processes had been followed. We found, as we suspected, that there was considerable lack of tightness, that there was a great deal of variety in the way data was collected, and a lot of the figures were quite suspect. It was quite a low priority [for hospitals]; figures were collected in a rush to return something at the end of the day.'

  So what, if any, conclusions could be drawn?

  'The findings were fairly consistent that mortality in Bristol really did seem to be an outlier with excess mortality of 100 per cent, but had it been in the region of 50 per cent, the quality of the data was such that we could not have been confident that Bristol was an outlier. We were sure, but only because the numbers were so different.'

  And did that conclusion mean there could be more places like Bristol with excess mortality, if only of 50 per cent, that it would be difficult to detect?

  'Undoubtedly.'

  It is disconcerting, to say the least, that attention to data quality can be so deficient that we still lack the simplest, most obvious, most desirable measure of healthcare quality – whether we live or die when treated – to an acceptable degree of accuracy. Why, for so long, has it been impossible to answer questions such as this? The answer, in part, is because the task is harder than expected. But it is also due to a lack of respect for data in the first place, for its intricacies and for the care needed to make sense of it. The data is often bad because the effort put into it is grudging, ill thought-through and derided as so much pen-pushing and bean-counting. It is bad, in essence, often because we make it so.

  To see why data collection is so prone to misbehave, take an example of a trivial glitch in the system, brought to us by Professor David Hand of Imperial College, London. An email survey of hospital doctors found that an unfeasible number of them were born on 11 November 1911. What was going on?

  It turned out that many could not be bothered to fill in all the boxes on the computer and had tried, where it said DoB, to hit 00 for the day, 00 for the month and 00 for the year. Wise to that possibility, the system was set up to reject it, and force them to enter something else. So they did, and hit the next available number six times: 11/11/11; hence the sobering discovery that the NHS was chock-full of doctors over the age of 90.
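
  The same glitch is easy to spot once someone goes looking for it. Here is a minimal sketch, in Python, of the kind of frequency check that would flag a default date masquerading as a birthday; the staff list and the one-per-cent threshold are invented purely for illustration.

```python
from collections import Counter
from datetime import date

def suspicious_birthdates(dobs, threshold=0.01):
    """Flag any single date of birth carried by more than `threshold`
    of all records - far more than any genuine birthday should be."""
    counts = Counter(dobs)
    n = len(dobs)
    return [(d, c) for d, c in counts.most_common() if c / n > threshold]

# Invented data: 488 plausible birthdays plus the 11/11/1911 default artefact.
genuine = [date(1958 + i % 30, 1 + i % 12, 1 + i % 28) for i in range(488)]
survey = genuine + [date(1911, 11, 11)] * 12
for d, c in suspicious_birthdates(survey):
    print(f"{d}: {c} of {len(survey)} records - an implausible spike")
```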

  Try to measure something laughably elementary about people – their date of birth – and you find they are a bolshie lot: tired, irritable, lazy, resentful of silly questions, convinced that 'they' – the askers of those questions – probably know the answers already or don't really need to; inclined, in fact, to any number of other plausible and entirely normal acts of human awkwardness, any of which can throw a spanner in the works. Awareness of the frailty of numbers begins with a ready acknowledgement of the erratic ways of people.

  Professor Hand said to us: 'The idealised perception of where numbers come from is that someone measures something, the figure's accurate and goes straight in the database. That is about as far from the truth as it's possible to get.'

  So when forms from the 2001 Census were found in bundles in the bin, or dumped on the doormat by enumerators at the end of a bad day of hard door-knocking and four-letter hospitality, or by residents who didn't see the point; when there was an attempted sabotage of questions on religious affiliation through an email campaign to encourage the answer 'Jedi Knights' (characters from the film Star Wars); when some saw the whole exercise as a big-brother conspiracy against the private citizen and kept as many details secret as they could; when all this and more was revealed as a litany of scandalous shortcomings in the numbers, what, really, did we expect, when numbers are so casually devalued?

  The mechanics of counting are anything but mechanical. To understand numbers in life, start with flesh and blood. It is people who count, one of whom is worried her dog needs the vet, another dreaming of his next date, and it is other people they are often counting. What numbers trip over, often as not, is the sheer, cussed awkwardness and fallibility of us all. These hazards are best known not through obscure statistical methodology, but sensibility to human nature. We should begin by tackling our own complications and frailties, and ask ourselves these simple questions: 'Who counted?' 'How did they count?' 'What am I like?'

  In 2006 about £65bn of government grant and business rates was distributed to local government – a large number, about £21 per person per week – overwhelmingly on the basis of population figures from the Census, for everything from social services to youth clubs, schools to refuse collection. A lot rides on an accurate Census. In 2006, preparation for the 2011 Census was already well underway (five years ahead was a trice in the planning for that Herculean task, requiring 10,000 enumerators and expected to cost about £500m, or roughly £8 a head, for every adult and child in the country). Some of the statisticians responsible for making it work held a conference about risks.
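
  Those per-head figures are just the headline sums spread over a population of roughly 60 million; the arithmetic fits in a few lines (the population figure is our working assumption).

```python
population = 60_000_000      # rough UK population (our working assumption)
grant = 65_000_000_000       # £65bn distributed to local government in 2006
census_cost = 500_000_000    # expected cost of the 2011 Census

print(f"grant per person per week: £{grant / population / 52:.0f}")   # about £21
print(f"census cost per head:      £{census_cost / population:.0f}")  # about £8
```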

  By 'risks', they meant all of us: the counters, the counted, the politicians occasionally tempted to encourage non-cooperation, the statisticians who failed – being human – to foresee every human difficulty; we are all of us just such a risk. There are technical risks too, and what we might call acts of God, like the foot and mouth outbreak in the middle of the last Census, but the human ones, in our judgement, are greatest. The whole exercise would be improved if people were less tempted to disparage data.

  One of the less noticed features of the numbers flying about in public life is how many critical ones are missing, and how few are well known. One of the most important lessons for those who live in terror of numbers, fearing they know nothing, is how much they often share with those who purport to know a lot.

  In the case of patient records, the flow of data creates huge scope for human problems to creep in. Each time anyone goes to hospital, there's a note of what happens. This note is translated into a code for every type of procedure. But the patient episode may not fit neatly into the available codes – people's illnesses can be messy after all: they arrive with one thing, have a complication, or arrive with many things and a choice has to be made of which goes on the form. Making sure that the forms are clear and thorough is not always a hospital priority. There are often gaps. Some clinicians help the coders decipher their notes, some don't. Some clinicians are actually hostile to the whole system. Some coders are well trained, some aren't. Although all hospitals are supposed to work to the same codes, variations creep in: essentially, they count differently. The coded data is then sent through about three layers of NHS bureaucracy before being published. It is not unusual for hospitals looking at their own data once it has been through the bureaucratic mill to say they don't recognise it.

  Since Bristol, there are now more systems in place to detect wayward performance in the NHS. But are they good enough to rule out there still being centres with excess mortality that we fail to detect? No, says Audrey Lawrence, they are not.

  And that is not the limit of the consequences of this difficulty with data collection in the health service. The NHS in England and Wales is introducing a system of choice for patients. In consultation with our GP, we are promised the right to decide where we want to be treated, initially from among five hospitals, and in time from any part of the health system.

  Politicians have been confident that the right data can make it easier to choose where to go for the best treatment. Alan Milburn, then Secretary of State for Health, said, 'I believe open publication will not just make sure we have a more open health service, but it will help to raise standards in all parts of the NHS.' John Reid, also when Secretary of State for Health, said, 'The working people of this country will have choice. They will have quality information. They will have power over their future and their health whether you or I like it or not.'

  The crunch comes with the phrase 'quality information'. Without quality information, meaningful choice is impossible. How do we know which is the best place to be treated? How do we know how long the wait will be? Only with comprehensive data comparing the success of one doctor or hospital with another, one waiting list with another. More often than not, this data is quantified.

  At the time of writing, several years on from Mr Milburn, and after Mr Reid too has moved to another job, the best that Patient Choice can offer by way of quality information is to compare hospital car parking and canteens; on surgical performance, nothing useful. There is one exception, though this is not routinely part of Patient Choice. In heart surgery, surgeons have set up their own web site with a search facility covering every cardiac surgeon in the country and listing, next to a photograph, success and failure rates for all the procedures for which they are responsible (it is shortly to be amended to show success rates for the procedures they have actually carried out). But even this is unadjusted for the seriousness of the cases they treat, so we don't know if the surgeon with the worst results is actually the best, taking on the most difficult cases. Otherwise, mortality data is available for individual hospitals, but not routinely to the public through Patient Choice. It can, however, be found with persistence, and is sometimes published in the newspapers.

  In Wales, it seems not to be available to the public at all. For more than a year, in conjunction with BBC Wales and the Centre for Health Economics at York, we tried to persuade various parts of the Welsh Health Service either to disclose how many people die in each hospital or to allow access to the hospital episode statistics to allow the calculation to be made independently. The degree of resistance to disclosing this elementary piece of information is baffling and illuminating. The Welsh Health Service argues that the data might compromise patient confidentiality, but refuses to produce even the total mortality figures for the whole country, from which the risk of any individual patient being identified is nil. So we do not know how well the whole system has been performing in even this modest respect, let alone individual hospitals. It is quite true that the data would need interpreting with care because of the high likelihood that some unique local circumstances will affect some local mortality rates, but this is not a sufficient excuse for making a state secret of them. In England, academics and the media have had access to this kind of information for twelve years. Patients in England do not seem to have suffered gross abuses of their confidentiality as a result. The Welsh authorities tell us that they are now beginning to do their own analysis of the data. Whether the public will be allowed to see it is another matter.

  Not that this data would get us all that far: 'It's all right, you'll live,' is a poor measure of most treatments, with not much relevance, it is to be hoped, to a hip replacement, for example. Most people want a better guide to the quality of care they will receive than whether they are likely to survive it, but it would at least be a start and might, should there be serious problems, at least alert us to them.

  A culture that respected data, that put proper effort into collecting and interpreting statistical information with care and honesty, that valued statistics as a route to understanding, and took pains to find out what was said by the numbers we have already got, that regarded them as something more than a political plaything, a culture like this would, in our view, be the most valuable improvement to the conduct of government and setting of policy Britain could achieve.

  What can we do on our own?

  There are times when we are all whistling in the dark. And sometimes, in a modest way, it works: you know more than you think you do. We have talked about cutting numbers down to size by making them personal, and checking that they seem to make human sense. A similar approach works when you need to know a number, and feel you haven't a clue.

  Here is one unlikely example we have used with live audiences around the UK and on Radio 4. How many petrol stations are there in the UK?

  Not many people know the answer and the temptation is to feel stumped. But making the number personal can get us remarkably close. Think of the area you live in, and in particular of an area where you know the population. For most of us that is the town or city we live in. Now think about how many petrol stations there are in that area. It is hard if you have only just moved in, but for most adults this is a fairly straightforward task, and people seem very good at it. Now divide the population by the number of petrol stations. This gives you the number of people per petrol station in your area. For us, the answer was about one petrol station for every 10,000 people. Most people give answers that lie between one for every 5,000 and one for every 15,000.

  We know the total population of the UK is about 60,000,000. So we just need to divide the population by the number of people we estimate for each petrol station. With one petrol station for every 10,000 people, the answer is 6,000 petrol stations. With one in 5,000, the answer is 12,000 petrol stations. The correct answer is about 8,000. The important point is that almost everyone, just by breaking things down like this, can get an answer that is roughly right. Using the same ideas would produce roughly accurate numbers for how many schools there are, or hospitals, or doctors, or dentists, or out-of-town supermarkets.
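
  The whole estimate can be written out in a few lines. Below is a sketch using our own guesses (a town of 400,000 people with about forty petrol stations, and a UK population of 60 million); swap in your own local figures and the method is the same.

```python
def estimate_uk_stations(local_population, local_stations, uk_population=60_000_000):
    """Scale a local observation up to the whole country."""
    people_per_station = local_population / local_stations
    return uk_population / people_per_station

# One station per 10,000 people -> about 6,000 nationally.
print(round(estimate_uk_stations(400_000, 40)))   # 6000
# A guess of one per 5,000 people -> about 12,000: the same order of magnitude.
print(round(estimate_uk_stations(200_000, 40)))   # 12000
```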

  All that is happening is that rather than being beaten by not knowing the precise answer, we can use the information we do have to get to an answer that is roughly right, which is often all we need. As long as we know something that is relevant to the question, we should be able to have a stab at an answer. Only if the question asks about something where we have absolutely no relevant experience will we be completely stumped.

  The best example of this we could come up with was: 'How many penguins are there in Antarctica?' Here, it really did seem that unless you knew, you could bring very little that helped to bear on the question. Apart from penguins, you will be surprised by how much you know.

  10

  Shock Figures: Wayward Tee Shots

  Shock figures demand amazement or alarm. A number comes along that looks bad, awe-inspiringly bad, much worse than we had thought; big, too, bigger than guessed; or radically different from all we thought we knew.

  Beware. When a number appears out of line with others, it tells us one of three things: (a) this is an amazing story, (b) the number is duff, (c) it has been misinterpreted. Two out of three waste your time, because the easiest way to say something shocking with figures is to be wrong. Outliers – numbers that don't fit the mould – need especial caution: their claims are large, the stakes are high, and so the proper reaction is neither blanket scepticism, nor slack-jawed credulousness, but demand for a higher standard of proof.

  Greenhouse gases could cause global temperatures to rise by more than double the maximum warming so far considered likely by the Intergovernmental Panel on Climate Change (IPCC), according to results from the world's largest climate prediction experiment, published in the journal Nature this week.

  These were the words of the press release that led to alarmist headlines in the British broadsheets in 2005. It continued:

  The first results from climateprediction.net, a global experiment using computing time donated by the general public, show that average temperatures could eventually rise by up to 11°C, even if carbon dioxide levels in the atmosphere are limited to twice those found before the industrial revolution. Such levels are expected to be reached around the middle of this century unless deep cuts are made in greenhouse gas emissions.

  Chief Scientist for climateprediction.net, David Stainforth, from Oxford University, said: 'Our experiment shows that increased levels of greenhouse gases could have a much greater impact on climate than previously thought.'

  There you have it: 11°C and apocalypse. No other figure was mentioned.

  The experiment was designed to show climate sensitivity to a doubling of atmospheric carbon dioxide. Of the 2,000 results, each based on slightly different assumptions, about 1,000 were close to or at 3°C. Only one result was 11°C. Some results showed a fall in future temperatures. These were not reported. A BBC colleague described what happened as akin to a golfing experiment: you see where 2,000 balls land, all hit slightly differently, and arrive at a sense of what is most likely or typical; except that climateprediction.net chose to publicise a shot that landed in the car park. Of course, it is possible. So are many things. It is possible your daughter might make Pope, but we would not heed the claim, at least not until she made Cardinal. As numbers go, this was the kind that screamed to be labelled an outlier, and to have a red-flag warning attached, not an excitable press release or broadsheet headlines.
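
  The golfing analogy is easy to make concrete. The sketch below uses an invented ensemble standing in for the 2,000 runs (it is not the climateprediction.net data) to show how different the story looks when you report the middle of the distribution rather than its most extreme member.

```python
import random
import statistics

random.seed(0)
# Invented stand-in for ~2,000 model runs clustered near 3 degrees C,
# with a thin tail of extreme results.
runs = sorted(random.lognormvariate(1.1, 0.35) for _ in range(2000))

print(f"median of the ensemble:  {statistics.median(runs):.1f} °C")
print(f"middle 90% of runs:      {runs[100]:.1f}–{runs[1899]:.1f} °C")
print(f"single most extreme run: {runs[-1]:.1f} °C  <- the shot in the car park")
```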

  In January 2007, this time in association with the BBC, climateprediction.net ran a new series of numbers through various models and reported the results as follows: 'The UK should expect a 4°C rise in temperature by 2080 according to the most likely results of the experiment.'

 
