“Management of patients on NHS waiting lists”
The first substantive item is Audit Scotland’s section 23 report “Management of patients on NHS waiting lists”. We welcome Caroline Gardner, the Auditor General for Scotland, and, from the performance audit group in Audit Scotland, Barbara Hurst, director; Angela Canning, assistant director; Tricia Meldrum, portfolio manager; and Jillian Matthew, project manager. I invite the Auditor General to present her report to the committee.
How the national health service manages waiting lists is very important to patients and the public, who rightly want to know that people are being treated fairly. Reducing waiting times has been a key policy initiative for successive Governments. However, public trust was put at risk following evidence that NHS Lothian manipulated waiting lists and disadvantaged patients in 2011 to avoid reporting that it was failing to meet waiting time targets.
Thank you very much. There are clearly complex issues to do with information technology and the way in which patient journeys were monitored and recorded, but the core question that the report tries to address is whether the waiting times statistics that were published in recent years could be considered reliable and accurate. Were those figures reliable and accurate?
We know from events over the past 18 months that the waiting times figures that were published for NHS Lothian were not reliable and accurate. We now know from the audit work that we have carried out that it is simply not possible to verify whether all the use of social unavailability codes was in line with the guidance and reflected a true period of unavailability that was discussed with the patient or their GP.
Can we have confidence in the figures that were published over recent years?
The problems with the waiting list management systems and the information that has been recorded in them mean that it is not possible for the NHS boards, the Government or anybody else to verify absolutely that that is the case.
The December 2012 figures have been published today. They show that the 18-week waiting time target has been met in 90.9 per cent of patient journeys. Given the changes that have been made and the new guidance that has been issued, can we have confidence that today’s figures are reliable and accurate?
My colleague Jillian Matthew has spent the past 24 hours analysing the recently published figures. One of the challenges that we face is that we are now in a transitional phase and the figures are the first ones that reflect the new ways of measuring and managing waiting times that were introduced last autumn. I will ask Jillian to give you a quick picture of what we know so far about the figures, with the caveat that, as you said, they are freshly released.
We had a look through the figures yesterday when they were published. The picture is quite a complex one, as Caroline Gardner said, because of the transition. The treatment time guarantee came in on 1 October last year. It took quite a while to work through the statistics. They are presented slightly differently from how they were presented in previous publications. In-patients and day cases are separated according to whether people were added to the list before or after the treatment time guarantee came in. We had to do a bit of work to join those together and to see what was happening.
Are we at the point at which we can have absolute confidence in the reliability and accuracy of the figures?
The most recent publication says that there is limited data available and that the boards are still updating their systems to meet the new guidance. They are saying that that might be done by the summer. At the moment, they have just presented summarised data to ISD. ISD usually gets much more detailed, patient-level information, but it does not have that for the latest quarter, so it cannot monitor what is happening on a patient level as closely as it normally can.
Is it fair to say that there is still some dubiety about the figures that have been presented?
The way in which the figures have been presented is such that what we can tell from them is limited.
My final question is about the failure to recognise that there might be a problem here. In paragraph 64 of your report, you say:
As is the case with all other aspects of the health service, managing waiting times is the responsibility of NHS boards and the Scottish Government rather than of auditors. The point that I was making in my report was that we think that the focus of attention of the Scottish Government and NHS boards during 2011 was on whether the 18-week treatment time target was being achieved rather than on how it was being achieved. If NHS boards and the Government had looked at the other information that was available, such as the information on the increasing use of social unavailability codes, that should have raised some warning signs that would have merited further investigation. It is very important that wider use is made of the information around any target, and that seems not to have happened in this case.
So it is your view that information was available, which the NHS boards and the Scottish Government either turned a blind eye to or failed to notice, and that that was not information that would naturally have emerged as a result of the regular audits of the boards.
No. As you would expect, the management of clinical services is not the main focus of the annual audit work that is carried out. As we make clear in part 3 of our report, it is true that information was available on the increase in use of social unavailability codes during that period and, for some boards, a high number of retrospective adjustments to the number of patients who had been recorded as being socially unavailable. That information should have rung warning bells for the health boards and the Scottish Government. It was not acted on, but it could have helped to avoid some of the concerns that have been raised since then.
That takes me nicely to my first question, which is about the warning bells and the fact that they did not ring.
We reported on the matter in 2010—three years ago—when the new ways guidance was first introduced. At that point, we recommended that there should be greater clarity about the use of social unavailability codes and, in particular, their use in relation to patient choice and we said that NHS boards had an important role to play in the scrutiny and management of waiting times in their area. We also produced guidance for NHS board members and a checklist for them to use in doing that. We feel that, if our recommendations had been implemented at that time, the system would have been tighter and clearer for patients to understand and for the NHS and the Scottish Government to manage.
Basically, you are saying that the warnings that you gave three years ago were ignored.
The recommendations were focused on ensuring that the purpose of social unavailability codes was clearer and that NHS boards were carrying out their role effectively in the scrutiny of waiting times more generally.
The fact is that NHS boards did not carry out their role effectively, and that social unavailability codes have not been made clearer. We have a system in which internal auditors, ISD, health boards, the Scottish Government and so on can all ignore what you say. That seems to me to be the system that we have. Audit Scotland made recommendations in good faith, but the result is an even more complex set of figures that even the Auditor General, with respect, cannot understand.
This committee is an important part of the system, Mrs Scanlon. We make a report. We do not have powers to direct anybody to do things. We have powers to bring what is happening to the Parliament’s and the public’s attention, and you have the power to hold people to account for that. That is the way in which the system is intended to work.
My next question is on the management culture and staff being scared to report bad news, which was the case in NHS Lothian. When the waiting time target was reduced from 18 weeks to 12 weeks, health boards were basically expected to carry out the same number of procedures in 65 per cent of the time and with no additional resources. Did that ring any alarm bells? Was it an impossible task for health boards? Why did they not ask for more resources? Why did they not say that it could not be done? Was it the pressure of the waiting time targets that forced them to muddle and manipulate the figures?
We highlight the fact that the boards with the highest use of social unavailability codes in some specialties appear to have had capacity pressures in those specialties. We also report that the Scottish Government was working with some health boards to develop capacity to tackle the problems. I ask Barbara Hurst to give a bit more detail on that.
Waiting time targets in themselves can be a good thing. Obviously, they help people to focus on the issues that matter to patients, but they are also a really good barometer of when there might be capacity pressures. If a service is failing to meet the target, there is something going on in the system. In a sense, a failure to meet a target is not necessarily something to get beaten up about. It is an alert about what is happening in the system.
That is the point that I am getting at. Rather than muddling or fiddling the figures, why did the boards not say, “Look, we just can’t do this in the available time”? There are references throughout the report, but paragraph 60 states that evidence
As Barbara Hurst said, the target in itself is not a bad thing. Where it becomes damaging is if there is a focus on the target without people looking at the wider picture and how it is being achieved. We know, because of the investigation that was carried out in NHS Lothian, that there was found to be a bullying culture. Bullying is a difficult issue for auditors to get to grips with, as you will understand.
If I may correct you, it is not only in Lothian—you mention
That is exactly what our report demonstrates.
Did the codes appear in order to manipulate the figures, because boards were frightened to say that they did not have the capacity to treat the patients within the waiting times?
I absolutely understand your point. As auditors, we have to focus on the evidence that is available to us. Because of shortcomings in the way in which waiting lists have been managed, we have very limited evidence, on the one hand, of clear, inappropriate use of the codes. On the other hand, there is a pattern that is very hard to explain involving a significant increase in the use of unavailability codes—which fell off after the problems in NHS Lothian became apparent—at the same time that the number of people waiting 12 weeks and more started to increase.
We have been very critical of NHS Lothian. Is it not the case that you were able to find manipulation and falsifying of figures at NHS Lothian because it was the only health board in Scotland that had accurate figures, and that the figures from the rest of the boards were in such a muddle that you could not find any fiddling of their figures? We should be grateful to NHS Lothian, in fact, because it was the only health board in Scotland with efficiently compiled figures, which proved what we set out to prove. Is it a concern to you that the figures for the 13 other health boards were in such a muddle that you could not find a fiddle?
It is indeed a matter of significant concern that the waiting time systems and information are not good enough to verify that they are being used properly. It is not true to say, however, that NHS Lothian’s figures were the only accurate ones. It was clear that the board had been manipulating the number of patients who were recorded as socially unavailable in order to appear to meet the waiting time targets, whereas patients were in fact waiting longer than they should have been waiting.
I am having a bit of déjà vu because quite a number of reports that we have considered have highlighted the difficulties of extracting information from legacy systems. Here, again, we are debating statistics.
We looked at 3 million transactions. I will ask Jillian Matthew to talk you through the methodology and give you a sense of how we went about that work.
As Mr Beattie suggested, waiting list systems are very complex and hold massive amounts of data. We commissioned specialists with experience of that type of data to extract the data. As for the 3 million transactions that we looked at, any change to, say, unavailability, a start or end date or whatever in a patient record counts as one transaction. Of course, one patient record might have a few transactions, but that was the volume on which we based the data extraction.
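The transaction-level approach described in that answer can be sketched in outline. The minimal Python sketch below is illustrative only: the field names (patient_id, field, new_value, changed_on) and the flat CSV extract are assumptions for the purposes of the example, and the actual extraction was commissioned from specialists working directly on NHS patient management systems. It counts every recorded change to a patient record as one transaction, tallies how many transactions each record holds, and totals the monthly applications of the social unavailability code.

```python
# Illustrative sketch only; column names and file layout are hypothetical.
import csv
from collections import Counter
from datetime import datetime

def count_transactions(path):
    """Count record changes ("transactions") and tally monthly use of the
    social unavailability code from a flat extract of patient record changes."""
    transactions = 0
    per_patient = Counter()               # one patient record can hold many transactions
    social_by_month = Counter()           # monthly applications of the social unavailability code
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            transactions += 1             # every recorded change counts as one transaction
            per_patient[row["patient_id"]] += 1
            if row["field"] == "unavailability_code" and row["new_value"] == "social":
                month = datetime.fromisoformat(row["changed_on"]).strftime("%Y-%m")
                social_by_month[month] += 1
    return transactions, per_patient, social_by_month
```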
So you carried out a trend analysis of the patient records first.
Yes.
Then you did some individual sampling within that.
Yes.
What percentage did you sample?
We looked at a number in each board. I should note that in exhibit C at the end of the appendix that we published with the report, we break down for each board the number of records that we sampled. The sample size was based not on a percentage but on what emerged from the data analysis, and we then looked at a number of records within each board. We looked at a total of 310 patient records, but that was based on large numbers of transactions or patterns that we saw.
So you looked at 310 across the boards that you were examining.
We did more detailed field work on those records.
Within those 310 records, you found a small number of what you called errors. How many are we talking about?
We found one or two instances of errors in records from all the boards that we looked at, but sometimes it was not possible to tell whether a code change was appropriate or whether there had been an error, because there was a lack of evidence and no notes in the records.
So, equally, you could not tell whether it was inappropriate.
Yes. The example from NHS Grampian, which appears in the report at paragraph 43, involved the medical unavailability code being applied in error. There were high numbers and the same end date, and then we saw that the social unavailability code was applied straight afterwards, but that code should have been applied in the first place.
It is obvious that you looked at the 310 records that you mentioned for a reason: because they looked suspicious in some way. How many of those records came up with errors?
In around 20 records we could tell that there was an actual error, but, as I said, there was a lack of evidence. The common issue that kept coming up related to social unavailability. NHS Forth Valley had good notes in the records—particularly for in-patients—on why the unavailability codes had been applied and on discussions with the patients, but the other boards had either no notes at all or very limited information about why those codes had been applied, and we could not reach a conclusion about whether the coding had been appropriate.
So, across the board, 20 errors were found out of 3 million patient transactions.
That was from the small sample that we looked at, based on all the data analysis that we did for all the transactions.
In all my years in audit, this has been the most data-rich, data-intensive exercise that we have done. I remember sitting in the office poring over the patterns that looked very unusual; in those 3 million transactions, there were a lot of unusual patterns.
I realise that, given the volume of transactions, you can do only trend analysis, as you are clearly not going to check 3 million transactions. I suppose the problem is that you have no benchmarking against which to compare the exercise; you are just making a judgment about what constitutes an anomaly. When you look at the trend analysis and see what looks like a spike, you need to decide whether, logically, there seems to be a problem. Were you able to do any benchmarking against other areas of the United Kingdom?
As far as I am aware, nowhere else in the UK has done such detailed analysis. We say in the report that we felt that the boards were giving reasonable explanations for some of those spikes—for example, there might have been batch transactions. However, we also say that there are other areas in which there was no explanation or evidence. We are trying to get that complexity across in the report.
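In the absence of an external benchmark, one crude way to flag the kind of spikes discussed in that exchange is to compare each month's use of a code with the average of the preceding months and mark anything well above that baseline for closer inspection. The sketch below is a hypothetical illustration of that idea, not Audit Scotland's method; the window and threshold are arbitrary assumptions, and a flag is a prompt for investigation rather than evidence of inappropriate use.

```python
# Illustrative sketch only; window and factor are arbitrary assumptions.
from statistics import mean

def flag_spikes(monthly_counts, window=6, factor=2.0):
    """Return months whose count exceeds `factor` times the mean of the
    previous `window` months. Expects a dict keyed by "YYYY-MM" strings."""
    months = sorted(monthly_counts)       # "YYYY-MM" keys sort chronologically
    flagged = []
    for i, month in enumerate(months):
        if i < window:
            continue                      # not enough history to form a baseline
        baseline = mean(monthly_counts[m] for m in months[i - window:i])
        if baseline and monthly_counts[month] > factor * baseline:
            flagged.append((month, monthly_counts[month], round(baseline, 1)))
    return flagged
```

Any months flagged in that way would then be followed up against individual patient records, which is broadly the relationship between the trend analysis and the sample of 310 records described in the evidence.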
Ms Gardner, you said that we must use all the information available and that perhaps not everyone has been as good at that as they should have been. Your report has been interpreted in quite an interesting fashion by some—for example, it has been said, not by Audit Scotland but by some politicians, that it shows that one in three or one in four people are on hidden waiting lists. For the period that you looked at, for people with a wait of more than nine weeks, the figures were 3 per cent reported but 23 per cent actual waits. If we look at the same period using ISD figures, which include everyone who was medically or socially unavailable, they show that only 5.7 per cent of people did not get treatment in under 18 weeks. In other words, ISD has reported that 94.3 per cent of all patients were seen in under 18 weeks. Do you accept that?
I do not recognise those specific figures or the time period that you are talking about. We would be happy to investigate that, if you want to give us more detail. I am happy to say that it is clear that waiting times have shortened markedly over recent years and waiting time performance has improved—there is no question about that. The concern is the extent to which the information that is available to the NHS, patients and the public is reliable enough, given the concerns about the way that it is being managed, and transparent enough.
I completely agree. Just for clarity, for the time period that you looked at—you did an excellent job in relation to that—did you cross-reference your figures with ISD’s reported figures?
Yes.
It included information on waits of 18 weeks or less. There were two outturns, one of which was when people were removed when social or medical unavailability was applied. However, when those people are included—they are not hidden away somewhere but are in full public glare in the ISD’s figures—it is shown that 94.3 per cent of all patients were seen in under 18 weeks. Is that a figure that you recognise? Did someone look at ISD figures for that time period?
Yes. If you look at page 29 of our report, you will see that we have a section that talks about the reported waiting times, the adjusted ones and the actual unadjusted waiting times. It is worth saying that, as far as I understand, Scotland is the only part of the UK that reports both parts of the equation, which is a good thing.
Sorry to interrupt, but that would refer to the 3 per cent in the case that we have been talking about.
I am reluctant to talk about numbers without being clear what timescale we are talking about, but it refers to the smaller number of patients waiting longer than 18 weeks. The number of patients with unadjusted waits of longer than 18 weeks will be higher, as you would expect, but it is quite hard to find that information through the performance reporting that goes on.
I suppose the reason for asking the question is that I find it concerning when I see figures such as 23 per cent or 31 per cent of patients with social unavailability. What I am trying to get at is that when you include those patients in the overall figures for the waiting time for treatment of 18 weeks, 94.3 per cent are still being treated within 18 weeks. That provides a context. However, I totally agree that the cases in which social unavailability was, perhaps, wrongly applied are completely unsatisfactory.
We had a number of challenges in auditing what was happening at NHS Greater Glasgow and Clyde, starting, as Jillian Matthew said, with the fact that the IT system was not able to give us the large-scale data that we asked all boards for to enable us to analyse the patterns.
When we looked at a sample of those records, we found that little evidence was recorded of the reason why the patient had been coded as unavailable or to confirm that there had been a discussion with the patient or their GP before the code was applied. Again, we did not have evidence in the records to confirm what had happened.
NHS Greater Glasgow and Clyde has moved from 11 IT systems to three and we hope that it will move to one in the near future. Did that create a significant issue for the board’s ability to record and audit?
The board told us that that was the reason why it could not give us the data that we asked for. I do not know whether we can say anything more about the effect of the IT systems in Glasgow than that.
There were certainly some challenges in the board’s ability to record information on the systems that it had. Although one of the systems could record only a small number of characters, there still was some capacity to record information; it was just not always being done.
The Auditor General mentioned that waiting targets can be positive and that, when they throw up additional demand issues, health boards should move to resource the demand. In Glasgow, the figures for social unavailability have come down because of that. Is that how the Auditor General would expect health boards to act when such figures become available? Is it an appropriate use of their resources?
It is an appropriate response to any target. As Barbara Hurst said, targets can be helpful in focusing public services’ attention on something that matters to the people who use them, but they can become dangerous when they are used in a narrow way that drives behaviour without thought given to the wider consequences. As Mary Scanlon said, the response should always be not to try to hit the target but to ensure that a system is in place that can deliver what the service looks to achieve.
Thank you very much.
Did health boards prioritise targets over patient care?
I do not think that there is any way for us to answer that, Mr Scott—
So how do we find out whether or not that is true?
The information that we have in the report demonstrates that there was a focus on the target in ways that were too narrow during the period that we have reviewed. We have demonstrated that there was other information available that could have highlighted those warning signs. I think that it might be entirely appropriate for the committee to explore with the health boards concerned and with the Government the way in which that information was used.
We might be better asking the whistleblowers—the brave people in the NHS who were prepared to stand up and say what was really going on.
For much of this, it is very difficult to see any alternative to members of staff at all levels of the health service and Government being prepared to talk openly about the challenges that people are facing in achieving targets.
I think that you said earlier—do correct me if I am wrong—that focusing on the target itself without looking at the wider picture is dangerous. Is that what was going on?
It appears to have been the case during 2011, which was the period that we looked at, that there was a very strong focus on whether patients were waiting 18 weeks or less. There clearly was information available at health board level and at the national level that would have identified emerging pressures in terms of both the increase in the use of the social unavailability code and the number of retrospective changes to that in some boards. That information was available but was not being used, either by NHS boards to manage their own local performance or by the Scottish Government to take a picture of the NHS as a whole.
When you say that the information was available, do you mean that it was available to the chief executives of health boards or to the boards themselves? To whom was the information available?
That appears to have varied significantly. We report in part 3 of the report that the roles and responsibilities between the Scottish Government and ISD, for example, were not clear. ISD had very clear information about the number of retrospective changes to social unavailability codes that were being made in each board. It was not clear to ISD how it should raise those concerns with the Scottish Government—
Did ISD raise the issue with the Scottish Government?
Apparently not. The responsibilities for monitoring the information and for acting upon it seem not to have been clear enough all the way through.
So, in so far as you were able to ascertain, health board chief executives knew that there were problems. What did they then do with that information?
We know that the information was available, but we do not know who was looking at it and acting upon it, either within the Scottish Government or within health boards. It appears that that varied.
Did you ask health board chief executives what they did with the information?
That was a focus of the internal audit reports last year, which found very significant variation in practice. Equally, the report that we published in 2010 included a checklist for NHS board members on what they should focus on. It is very clear that the information highlighted pressures that were worth further investigation.
You said earlier that your recommendations in that 2010 report were not acted upon by health boards or the Government. That was your clear evidence. We should ask the respective bodies why that was the case.
Our recommendations were not implemented in full, and we think that they would have helped to avoid these problems.
Yes, they could have avoided the problems, and they were not implemented in full.
What we can say is that the information was available. I do not know who was using it and how they were interpreting it, but the information was available. We know that there were discussions between the Scottish Government and a number of health boards about pressures in particular specialties, but the wider picture of not just the target itself but the trend in the use of social unavailability codes and the extent to which they were being adjusted retrospectively was not part of the dialogue that was being had about the way that waiting times were being managed. That scrutiny was not good enough.
Indeed. The very warning that you gave in your earlier evidence that the wider context is important was ignored by the NHS system in total.
The recommendations that we made at that point were not fully implemented.
Therefore, patient care was put secondary to the target.
I cannot speculate on the reasons for that lack of implementation. That is something that you would have to pursue with the Government.
I take that point. It is very fair of you to say so.
I will ask Tricia Meldrum to give you the detail on that, as I was not here at that point.
Sorry, that is very unfair of me.
At that time, we highlighted the risks around the social unavailability code being used in different ways, such as for physical unavailability—people being on holiday. We also noted that the code was being used in some boards—to a far lesser extent at that time—to reflect patient choice to wait to be seen locally. There was no way of separately identifying that patient choice. We recommended that a new code should be introduced so that any capacity issues could be identified, as we have discussed.
That is very fair. We can obviously ask about that.
We focused on the information that was being used by the boards, by ISD and by the Government to monitor waiting times.
But in order to understand the wider context point that you have very fairly made a number of times this morning, we need to interrogate what was discussed at those meetings to see whether that wider context was considered at any stage.
You might want to follow up that point with the Scottish Government and with health boards.
I was delighted to hear you acknowledge the work on waiting lists that has been done by this Scottish Government since it came in—going from 104,000 down to 65,000 is quite a drop in the numbers.
We were conscious that the new use of codes was introduced in 2008 and when we were agreeing the factual accuracy of our report with the Government, it raised that as a possible explanation for the pattern that we had seen. We have done some further work to analyse it and it is clear that the levels of use of the old availability status codes, which translated across into the new social unavailability codes, were pretty similar at the time of the transition. There may have been some variation in how quickly health boards made the move, but the levels were not markedly different at that stage.
If you look at exhibit 7 on page 25, you will see that we were interested in the different patterns of social unavailability. As Tricia Meldrum said earlier, when we did our previous report, the social unavailability figure in 2008, which is the reddish line on the second graph, was pretty similar to what it had been in the previous system.
Would the transfer of the codes include the 35,000 people who were taken off waiting lists?
I ask Tricia Meldrum whether we can answer that question.
I do not know. Sorry, but which 35,000 are we talking about?
There were people on waiting lists who were removed from them when they missed their appointment. Would that have included that number of people? Could part of the problem be that those people were not on the lists and then started to be fed into them?
This is in 2008.
Yes—sorry. I am talking about in 2008 when the new system started.
I think that, at that stage, there was an exercise to quality assure waiting lists as people moved to the new system. There are various reasons why people can be removed from waiting lists, such as if they miss appointments or if it becomes medically unsuitable for them to be on the list. That is all part of the mix, but it was part of the mix across the period that we looked at. Therefore, we do not feel that that helps to explain the trend between 2008 and 2011. Although that trend is interesting, we are more interested in the reduction in the use of social unavailability codes after June 2011, when the problems in Lothian came to light.
I have just one other question. You have talked about your recommendations to the Scottish Government. Is it not the case that, in the vast majority of cases, the recommendations from internal audit and from Audit Scotland have been taken up by the Scottish Government?
Absolutely. Back in 2010, we identified two particular recommendations that we think could have made a difference. The first was on the use of a separate code for patients who are unavailable because of their choice—because they would prefer to wait longer so that they can be seen by a particular specialist or at a particular hospital. The second recommendation was on the need for patients who have special support needs to be identified and treated appropriately throughout their waiting time period. As Tricia Meldrum explained, neither of those recommendations was implemented fully, and we think that they would have helped to make the figures more transparent and to give us a clearer picture of why the use of codes was as high as it was in some boards and for some specialties.
Do you accept that the new patient unavailability code and the whistleblower phone line should help to alleviate some of the problems? Sorry—I said that I had just one more question, but that takes me on to the real last one. This does not apply to Audit Scotland, but there seems to be a culture of guilty until proven innocent on the bullying. For example, in Tayside it was shown that nothing went wrong, but it is still being used to make it look as if health boards are behaving inappropriately. Do you agree that people should perhaps step back a wee bit, work on the facts and then make their judgments after that, rather than on the basis of supposition, which many people are doing?
You asked two questions there. The first was about whether the new guidance will address all the problems. We certainly think that, when the guidance is fully implemented, it will help. There is still a risk with outpatients, where there is no requirement to confirm periods of unavailability in writing, and questions still need to be clarified about the definition of a reasonable offer. However, those measures will certainly help if they are implemented fully.
We are here because of the reported and confirmed manipulation of waiting list figures at NHS Lothian and we are trying to find out whether that was widespread across the country. The key message that is repeated in the reports and other papers is the rise from 2008 in the use of social unavailability codes. To my mind, the most interesting trend is that, after the abuse of the system in Lothian was reported, all of a sudden, across Scotland, the use of social unavailability codes dropped.
You are absolutely right: that is unclear. That is due to problems with the systems. The audit trails that would let us examine what changes were made are not in place in all the systems that are in use; the fields for recording information about the use of patient unavailability codes, for example, are generally not being completed, except in NHS Forth Valley, which was a real example of good practice; and the staff to whom we spoke did not raise concerns with us. We cannot speculate on the reasons for that: all that we can do is tell members what evidence we have found and report it as clearly as we can.
I clarify that we did not speak to 400 staff as part of this audit. That figure includes all the internal audits.
If you did not speak to 400 members of staff, were you given access to appropriate members of staff—those who made the changes to patient records and made patients socially unavailable? Were you able to speak to those members of staff at the front line, who could have indicated why the use of the codes dropped all of a sudden after what was found to be happening in NHS Lothian?
I do not think that we had any concerns about the staff to whom we had access. However, Tricia Meldrum and Jillian Matthew carried out the work, and I will let them answer your question, if I may.
It is fair to say that the boards were all very helpful to us and there were no difficulties with our having access to any staff to whom we wanted to speak.
I want to return to the theme of guilty until proved innocent, which James Dornan mentioned. There seems to be a sense of that running through the whole debate.
As we have said in the reports and as I have said today, we have not found any evidence of manipulation at all. The wording on the aim of the work is very clear. It was about looking for an indication of
I fully understand that, and I fully expected that you would answer in that way.
I completely agree that there is an issue with the completeness of data. The data that is needed to manage this very important NHS target needs to improve. However, we also report the fact that the data that is available was not being used to identify where there might be problems, where pressures were building up in the system and so on. There is a real question about the management and scrutiny of this area of work.
We must bear it in mind that we are the Public Audit Committee, not the Health and Sport Committee. The lesson that I am taking from the issue concerns data and the need for an accurate and consistent approach to gathering data to help us to deliver the kind of service that we want. We must listen to advice from the Auditor General about getting systems and processes consistently applied across Scotland. I hope that we are doing that with the new ways tracking system that is in place. We need to embrace that and the recommendations that the Auditor General and her predecessors have made about data gathering and collection.
Clearly, there was an issue around data and data collection. Was that simply an IT problem, or did Audit Scotland identify other problems around the way in which the data was recorded and collected? We often conflate IT systems and data collection and I think that they are not always exactly the same thing.
That is right. It is clear that the IT systems need to improve, and the move across Scotland to the use of the TrakCare system should improve things.
I think that you paid tribute to NHS Forth Valley for the accuracy of its data, which you were able to interpret. Did it add in more than it perhaps should have, given the system that was in place? It seemed as though you were quite happy with looking at its data and the conclusions that you came to about its data. What was it doing that was particularly better than anyone else? We should certainly want to learn that lesson.
The key thing is that NHS Forth Valley was using the facility in the patient management system—which is in every system that is in use—to record the reason for the use of the social unavailability code, so that we could verify that it was being used properly and that its use was in line with the guidance and reflected a conversation with the patient or their GP, which means that the longer waiting time was a result of the patient’s unavailability, not a decision that was taken by the health board.
The deputy convener has a supplementary question, which she promises me is short.
The words, “guilty” and “not guilty” have been used. We know that NHS Lothian was guilty. Is it the case that the verdict on the other health boards would be “not proven”, due to a lack of evidence?
I am sorry, but I cannot answer that question. I can report the evidence that I have found. The committee can speculate on the reasons that underpin that.
I thank the committee for the opportunity to come to today’s meeting and I thank Audit Scotland for doing a complex piece of work that I am sure engaged its staff for many hours.
I will need to ask colleagues to answer, as I was not in Scotland at that point.
Yes.
I would like to explore some of the relationships that you will have had some dialogue about, principally the one between ISD and the Scottish Government. I find it inconceivable that there was not discussion between ISD and Scottish Government civil servants or the director of workforce and performance, who has responsibility for waiting times. Did you find evidence of any such discussions, formal or informal? I find it equally inconceivable that, on such an important area of Government policy, no audit or monitoring reports were routinely presented to ministers.
We cover that issue on pages 36 and 37 of the report, in paragraphs 68 to 71. As you say, it is clear that the Information Services Division had more information available than the Scottish Government was publishing about performance on waiting times. It is also clear that the roles and responsibilities were not as clear as they needed to be on such an important issue. In paragraph 68, we highlight that ISD has a role to play in
You picked your words very carefully. I think that you suggested that ISD had more data than was published. Was that additional data shared with the Scottish Government?
That is an issue that the committee would need to explore with the Government. Jillian Matthew touched on this earlier, when she talked about the waiting times figures that were published yesterday. ISD has a huge amount of data. That is one of the massive strengths of the NHS in Scotland. ISD does not just have aggregate data on the performance of health boards on waiting times; it also has patient-level data, which allows a great deal of analysis to happen.
I think that you said earlier that the new 12-week waiting time guarantee was introduced at the time when the use of social unavailability was probably at its highest or was becoming quite high. What would you say was the Government’s reason for not scrutinising the data that would have been available, which would have acted as a warning bell for what was about to happen?
Again, I cannot speculate on the Government’s motives for that. That is something that the committee would need to explore with the Government.
Would it be fair to say that its eye was off the ball?
I cannot speculate on that.
Okay. Thank you.
Exhibit 6 of the report highlights the trend for Scotland as a whole and for Scotland excluding Lothian, and it shows a high of about 31 per cent falling to around 15 per cent in September, which were the latest figures that were available until yesterday.
Is it reasonable to assume that the dramatic fall that we have seen since June 2011 is down to changes in IT systems?
There has not been a significant change in IT systems over that period. A number of systems are in place across Scotland and either the report or the appendix contains a lot more detail about them.
The graphs in exhibits 8 and 9 on pages 28 and 29 are quite instructive because they suggest that those health boards and specialties with the highest volumes were the ones in which staff were using the codes the most. That suggests to me that the problem is not with IT but one of capacity and pressure in particular areas that had wide variation across health boards. Is that a fair assumption from my examination of those exhibits?
It is a stretch to make that assumption across those two tables but, for example, on page 26 we highlight the challenges that Greater Glasgow and Clyde Health Board faced with two specific specialties—orthopaedics and ophthalmology—that had high levels of the use of social unavailability codes. The reason that the board gave for that was that patients were choosing to be treated only in their local hospital—the question that that raises for us is about the capacity of those hospitals to meet local demand. However, because there was no separate patient choice code at that point, it is not possible for us to verify that that was the case. That is the sort of interplay between pressure and capacity that we see, with the social unavailability code as the overall umbrella.
Is it reasonable or acceptable that 70 per cent of the 900 patients in Glasgow who were waiting for orthopaedic in-patient treatment received that code? I understand that you did not interview patients, so you have no way of verifying whether what you were told about their choice of consultant is true.
It is certainly a high level compared to what we have seen across the piece for other specialties, although those tend to be high-pressure specialties. Beyond that, I need to come back to where I started. The systems that are available and the information that is recorded in them do not let us verify whether the reasons that the board gave to explain the pattern can be demonstrated in practice.
I have one absolutely final question, convener.
We have highlighted across the piece that it is not possible to verify that social unavailability codes have been used in line with the guidance, which previously contained some ambiguities. This issue matters to patients and its management really needs to be improved. We reported on it in 2010; it is a matter of significant importance that needs to be got right now.
I call Mr Keir, to whom I should apologise—I did not see him indicate earlier—and then I will bring in Dr Simpson for a brief question.
I am glad that it has been accepted that the waiting time has come down substantially over the years.
As Tricia Meldrum made clear in her earlier response, you are quoting from a report on the impact of the 2010 audit that we produced for internal purposes but which is available on our website as part of our general commitment to transparency. We do that for every piece of work that we carry out to varying degrees of intensity.
My real question is about IT. Over the years, there have been different forms of patient administration systems; indeed, I believe that, at one time, Glasgow had 11. How satisfied are you with the speed with which what we might call an improved TrakCare system is being put in place by boards around the country? Are we moving at a rate that is acceptable and which provides assurances to audit with regard to a far more robust system of performance management?
We understand from the Scottish Government that all NHS boards are likely to be using TrakCare by the end of this calendar year. It is certainly one of the better systems available; one of the appendices that we have produced, which sets the features of each of the systems against the good practice that we would expect, shows that TrakCare covers most but not all areas. The committee might want to explore the finer details with the Government if it decides to take this work forward, but I repeat that although IT systems are important they are not the only part of this. Even with TrakCare, information about the use of different codes, the confirmation of unavailability with patients and so on will still need to be recorded and health boards will still need to provide a clear definition of a reasonable offer of treatment that patients can understand and which lets them know what they can expect. That all needs to happen if we are to overcome past problems.
Given the relatively recent information that we have, are there any concerns about the future management audit system that will be put in place? I know that you have made recommendations in the report, but where will the real difficulties arise in providing a robust audit on the waiting time numbers?
I cannot provide any clear answer to that question. However, I can say that my report’s recommendations need to be implemented to ensure that every board has and uses an IT system with appropriate controls and audit trails, that they are fully used to record all the necessary information and that NHS boards and the Government use that information to scrutinise the wider picture of performance.
I have one more question, which relates to the first issue that I raised. I assume that the views in the 12-month internal impact report stand, and what is in the report was correct at the time. Why is there such a massive change between what you found then and what you are seeing now? I am still confused about the difference between the conclusion in your internal report and where you are now.
The distinction is about what that impact report is. The internal report is not an audit report. The report that you have before you is an audit report, in which we have gone through a significant amount of work to look at the 3 million transactions on patients’ records and to drill down to understand what is happening across that as far as possible with the information that is available.
The fact is that the report recommended that there was no requirement for a follow-up.
If we roll back a bit, the impact report is our fourth report on waiting lists over the past 10 or 11 years. Clearly, waiting lists are an important topic. We would have followed up the 2010 report at some point. We decided not to in 2010 because the accountable officer for the health service wrote to all the boards instructing them to improve their recording of people with particular special needs. We thought that that was a good response to our report and that it should have generated improvements. As it turns out, it did not generate improvements.
I raised the increase in the number of people who were listed under the social unavailability code with the then cabinet secretary back in early 2010, which was before Audit Scotland’s report. I was given that explanation for the increase, which I found unlikely.
We made a recommendation on people’s needs back in 2010. While we were carrying out the impact work, we looked into what was happening in that area. We were disappointed to find that people’s special needs were not being flagged well enough for those people to be supported through an incredibly complex system.
Is it clear from the work that you have done that the 393,161 transactions of offers of an appointment within three days, which do not constitute a reasonable offer—the definition of that is 21 days—were not used in any way to indicate that someone had refused an offer?
Mr Simpson, you are testing my definition of brief, although it is a good question.
I am sorry.
We have reported that there are problems with the definition of a reasonable offer. For example, there is no reason not to offer the patient treatment outside the local area and outside the terms of what a reasonable offer looks like, if it becomes possible to offer treatment sooner than the patient would otherwise get it. The system goes wrong if the patient does not take up that offer, but is then treated as having turned down an offer under the guidance.
No, that is very clear.
The limitations in the systems themselves and the limitations in the information that is recorded in patient records mean that it is not possible for us to verify that patients are not being treated as having turned down an offer in the way that I described. We found some examples in which that is happening. For example, the internal audit report on Tayside identified that patients were being told that it was unlikely that they would be treated within 18 weeks and that if they recognised that, they were being coded as unavailable. That clearly is not what the codes were intended to achieve. However, that is another reflection of the broad problem that we have identified: the IT systems are not good enough and are not being used well enough, and the information is not being used as part of that to manage and scrutinise something that is important to patients.
I thank our witnesses. We will consider later how we will take the report forward. We are well over time, but clearly it was an important report to discuss.
“Commissioning social care”
Agenda item 3 is the section 23 report “Commissioning social care”. We have a response from the Scottish Government to our submission to its consultation on the integration of adult health and social care in Scotland. This item is on our agenda to enable us to consider and decide what we want to do with the response. We could note it, or we could refer it to the Health and Sport Committee, which is the lead committee on the matter. Alternatively, if we wish, we can write back and ask further questions or ask for further clarification.
Having been on the health committees in the first two sessions of Parliament, I find it quite sad that the Scottish Government’s response states:
I second Mary Scanlon’s comments. It is disappointing that the Scottish Government is having to legislate, but it is appropriate that it does so. There is evidence that the will might be there at some levels for health and social care departments to work together, but it does not exist at all levels. It is important that legislation is introduced.
I should perhaps put on record that I am the deputy convener of the Health and Sport Committee, so I suspect that we will be passing this issue to the other committee on which I sit.
You can take it with you.
If we saved a postage stamp, would that be an efficiency saving? If it was reinvested, I suppose that it would be.
On the point that Mary Scanlon made, the focus of the legislation will be on delivering the nationally agreed outcomes rather than just on working together. Quite clearly, the emphasis will be on the national outcomes and it is not about legislating to work together.
The Government’s response says that the aim of the legislation is “to work together”.
If you read the whole sentence, you can see quite clearly that it says:
The broad consensus is that we should submit the response to the Health and Sport Committee in the full confidence that Mr Doris and his colleagues will do a significant job of scrutiny on the new legislation. Is that agreed?