
Enterprise and Lifelong Learning Committee

Meeting date: Wednesday, May 2, 2001


Teaching and Research Funding (Scottish Higher Education Funding Council Review)

The Deputy Convener:

The first item on today's agenda is our inquiry into the Scottish Higher Education Funding Council and teaching and research funding. I once again declare an interest as a member of the court of the University of Strathclyde. Would anyone else like to declare an interest?

I previously worked at a college of further and higher education and I am still involved with the Adam Smith Foundation, which is a charitable organisation attached to Fife College of Further and Higher Education.

I previously worked at Glasgow Caledonian University and I was a member of the court of the University of Glasgow.

The Deputy Convener:

That makes us sound the best-informed committee in the Parliament on this topic.

As members know, we started the inquiry by looking into the SHEFC proposals for teaching grant and research work. It became clear that the real issues lay on the research side, so the committee decided to pursue that matter. The SHEFC allocations depend on the results of the research assessment exercise. Understanding what lies behind that is an important prerequisite for the inquiry.

I am grateful to John Rogers, who joins us this morning. Welcome, Mr Rogers. We appreciate your making yourself available at such short notice. Your views, as manager of the UK funding councils' research assessment exercise, will be absolutely invaluable to our better understanding of the issues. On behalf of the committee, I thank you for being here—we are indebted to you. I will shortly ask you to give a brief statement, after which the committee will ask questions. I propose to deal with items 1 and 2 and then take a break, after which the rest of the meeting will be in private.

Without further ado, I ask Mr Rogers to make a brief statement. We have received your briefing material, which was most helpful. We are grateful to you for that. If you have any elucidatory remarks to make, they would be much appreciated.

John Rogers (Research Assessment Exercise):

Thank you, convener, and good morning to you all. I am the manager of the research assessment exercise. I am based at the offices of the Higher Education Funding Council for England in Bristol, but my remit covers Scotland, Wales and Northern Ireland as well. My post and the posts of all of my team are co-funded by the four higher education funding bodies.

The research assessment exercise, which, in its 2001 guise, got under way yesterday when we began the assessment phase, is the fifth RAE to take place in the United Kingdom. The first was in 1986 and the most recent was in 1996. The period between assessments is now five years.

The process will be familiar to members from the briefing material. We invite all publicly funded institutions of higher education to make submissions describing their research activity in any or all of 68 subject-based units of assessment. Panels of experts then assess the submissions of research. There is usually one panel for each unit of assessment, but in some circumstances there are joint panels. We have a total of 60 panels. Typically, a panel comprises 12 senior staff—mainly academic staff from the institutions. As appropriate to the discipline, it may also include senior representatives from industry, the health service and the voluntary and public sectors who use academic research.

In the current process, we have just begun the assessment phase. Institutions were required to make their submissions of research activity by the close of 30 April. Our panels will spend the rest of the summer assessing the submissions and the results will be published in the middle of December.

The results of the exercise principally take the form of a grade. The grade is expressed on a seven-point scale, which runs from 1 to 5*—1 being the lowest and 5* being the highest. To achieve a 5* grade, a department in an institution must have more than half the research activity that it presents assessed as being of a standard of international excellence. A rating of 1 is where there is little or no evidence of research activity at all. The intervening grades are determined with regard to different proportions of research activity that reach standards of national and international excellence.

The submissions look back over five years, or seven years in the case of subjects in the arts and humanities. The principal piece of evidence is a listing of publications and other research outputs by the academic staff of the university. The institution may list up to four research outputs for each member of staff. We have a broad definition of research and of the type of research outputs that may be submitted for the exercise. Anything that embodies the outcome of original research undertaken by a member or members of staff in a university can be presented. In addition to traditional academic publications, we receive policy reports, patents, devices, designs, works of art, sculptures, performances and exhibitions—a wide range of types of output is presented.

The most common form of output is articles in academic journals, followed by books and chapters in books. The proportions of the types of work that are submitted vary between subjects. In medicine and science, journal articles are almost the only form of output that is presented. Books and book chapters become more common in the social sciences, arts and humanities. The performing and visual arts put forward a bewildering array of different types of outputs. In some disciplines, such as engineering, proceedings of conferences are common.

The funding councils all use the results of the RAE to inform the allocation of the largest part of their research funding to institutions. The formulae that the funding councils use differ slightly, but in principle they are the same: the departments that attract the highest grades are those that attract for their institutions the largest share of the available funding.

In every case, the formula that is used has three elements. It has a quality element in which the rating from the RAE is translated into a parameter of the funding formula. For example, SHEFC attaches no funding to grades 1 or 2. Grade 3b attracts a funding element of 1, and there is a 55 per cent increment between each step up to grade 5. Grades 5 and 5* are funded by SHEFC on the same basis at the top of the scale.

The other two elements in the funding formula are the volume of research activity that is presented, which is principally determined by the number of research-active academic staff who are submitted in each subject, and a cost factor, which takes into account the differences in cost of conducting research in broad subject areas. For example, the cost factor that SHEFC uses in medicine and other intensive laboratory subjects is 1.6, the baseline being 1, which is set for classroom and library-based subjects.
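
[Note for readers: to make the three elements concrete, the following is a minimal illustrative sketch in Python. The quality weights and cost factors come from the figures quoted in the evidence above; the staff numbers, subject labels and function name are hypothetical, not SHEFC's own.]

```python
# Illustrative sketch of the three-element funding formula described above.
# Grades 1 and 2 attract no funding; grade 3b has a quality weight of 1;
# each step up to grade 5 adds 55 per cent; grades 5 and 5* are funded on
# the same basis. Staff numbers and subject labels are invented.

QUALITY_WEIGHT = {
    "1": 0.0,
    "2": 0.0,
    "3b": 1.0,
    "3a": 1.55,
    "4": 1.55 ** 2,   # about 2.40
    "5": 1.55 ** 3,   # about 3.72
    "5*": 1.55 ** 3,  # same rate as grade 5
}

COST_FACTOR = {
    "classroom": 1.0,   # baseline: classroom and library-based subjects
    "laboratory": 1.6,  # medicine and other intensive laboratory subjects
}

def weighted_units(grade: str, research_active_staff: int, cost_band: str) -> float:
    """Quality element x volume element x cost element."""
    return QUALITY_WEIGHT[grade] * research_active_staff * COST_FACTOR[cost_band]

# A grade 5 medical department submitting 30 research-active staff earns
# about 3.72 * 30 * 1.6 = 178.7 weighted units towards its institution's
# block grant.
print(round(weighted_units("5", 30, "laboratory"), 1))
```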

That is probably a sufficient overview. The briefing document that I sent, which is about to be printed—I apologise that it is not in printed form; it is literally at the printers—contains further information. I will respond to members' questions.

The Deputy Convener:

Thank you, Mr Rogers. That was most helpful. I will open the question session with one or two general questions. I assume that you are familiar with the SHEFC consultation paper "Research and the Knowledge Age", which concludes that the policy of selective funding for research has

"contributed to improvements in the … quality of research"

in the UK. Are we entering an age where, if we practise selectivity, there will be institutions that either do not get research funding or find it extremely difficult to enter the echelons where funding could be made available?

John Rogers:

That is certainly possible. Currently, there are some institutions that receive no research funding. They tend to be small, specialist institutions with a tradition of teaching. There are others that receive funding only in selected areas. Because of the scale of selective funding, there is a considerable difference between the institutions that derive most of their funding from selective funding—the research-intensive institutions—and those that are funded in only some areas. For example, in England, the average annual funding for research is about £6 million per institution. The top institutions receive around £60 million, so there is a difference of a factor of 10, although there are only two institutions in that top bracket. The majority of institutions fall below the average, I think.

One of the positive features of the RAE and the funding methods that are attached to it is that it enables excellence in research to be identified wherever it exists. Each of the funding councils is at a slightly different stage of its policy review. England and Wales are conducting parallel reviews to the Scottish review, as you know. The English review, which is slightly further advanced, has again found that important pockets of excellence are being identified and funded through the RAE and associated funding methodology, even in institutions that are not otherwise particularly engaged in research. That is in part why the English funding council has decided that it intends, if at all possible, to continue to fund departments that are graded 3b and 3a.

It is becoming more difficult to remain competitive in research in this country, but it is also becoming more difficult for this country to remain internationally competitive in research. We have examined the performance of the UK research sector in terms of its international impact and have found, encouragingly, that we are doing tremendously well, especially given the relatively low public expenditure on research in the UK in comparison with some of our major competitors. However, maintaining that performance is tough. Institutions have to work constantly to maintain a competitive position. Certainly, without that sustained effort, some institutions could move to a position where they do not receive public funding from this part of the whole picture.

Proceeding on that general line of inquiry, I am not clear where institutions on a rating of 1, 2 or 3 on the scale from 1 to 5* are placed. How do they have a research future? Where does the funding come from for them?

John Rogers:

The grant for research for the institution is calculated on the basis of amalgamating the performance in each of the units of assessment to which it submits. A university that was strong in one field of engineering might obtain funding based on its performance in that area but in no other. The grant that is passed to institutions is passed as block grant funding. Institutions have discretion to spend the money as they see fit. One reason why the RAE and the selective funding method have tended to enjoy the support of institutions is that they allow that degree of flexibility. All the other research funding that universities attract is tied to specific projects. The money that is provided by the funding councils in each part of the UK represents about a third of the total research funding that is won by universities; two thirds is tied to specific projects and one third is for them to use at their discretion to maintain and develop high-quality research.

Typically, a university will have a variety of different performances in different units of assessment. It will receive a grant that is based on the aggregate performance. It will spend that in pursuit of its own determined strategies and research ambitions. An institution that had only departments rated 1 and 2 would receive no funding from the funding councils, although it would still be eligible to compete for all other sources of funding. Some institutions have a track record of attracting industrial funding, even where they have relatively low RAE grades.

Des McNulty:

What progress has been made to identify more clearly to those people who submit to research assessment exercises the criteria that will be applied in specific disciplines? A problem in a peer review process is that sometimes there is a lack of knowledge about what is to be done. There is also a fair amount of evidence of significant variation between disciplines. That issue has been picked up in the higher education press and elsewhere. What steps have you taken to deal with those issues?

John Rogers:

Those are critical questions for the exercise. One of the stated principles by which we are running the 2001 RAE is transparency. When the RAE began in the 1980s, it was regarded as a secretive process. The deliberations of panels took place in private, as they still do. That is important, as it enables panels to have completely open and frank discussion about the work of people who are their colleagues and sometimes their friends. There is confidentiality in the process, but that is not an excuse for making the process less transparent than it needs to be.

We have sought to give more open information about the way in which the process is conducted. Written statements of the criteria that panels apply to the assessment were developed for the first time in the 1996 RAE. For 2001, we have taken that a stage further. We have much more fully developed statements of criteria and working methods. Those statements were developed through a process of consultation—panels put out draft criteria to the relevant subject communities and other stakeholders for comments, which were taken on board when the final criteria were framed.

At the end of 1999, we published the criteria in full form and distributed them to all institutions. As with all the documentation about the RAE, the criteria are available via our website to everyone in the research community and to the public. The position of those criteria was strengthened when, at the time of their publication, we gave a binding undertaking that panels would conduct their assessments in accordance with their stated criteria. Therefore, panels are now required to proceed in accordance with the criteria that they have developed in consultation with their subject community and that have been sent to institutions more than a year in advance of the completion of the submissions. I would not claim that we have got things perfect yet, but we have made an important step forward in improving the transparency of the process.

Linked to those criteria, we also intend to publish written feedback for institutions on the reasons for the grade that they have been awarded. We intend that that will happen for the first time in the 2001 exercise. One of the legitimate criticisms of the RAE in the past was that feedback came only in the form of a number, which was unhelpful. Additional feedback tended to depend on how well people knew the chair of the panel and on how much they could lean on him or her to say something about what had gone on. That is obviously unsatisfactory, so we have given an undertaking that we will give every institution a written statement on the reasons for the grades that have been awarded.

The written feedback statements are being developed by panels as part of the process of assessment. They will take the form of a statement set in the context of the panel's criteria. The statement will show how the criteria have been applied to the evidence that the institution has presented. For 2001, the feedback reports will be sent only to the head of the institution that is making the submission. As with all new features of the exercise, we take care to introduce things cautiously, without doing anything that might jeopardise an institution's development. Just as the criteria have developed and become more explicit, the feedback arrangements must become more explicit, more transparent and more open.

We have been concerned about consistency between panels, which was the second point that Des McNulty raised. We checked the system by using independent evidence and we know that some panels score rather harder or rather softer than their near neighbours do. That is to nobody's advantage, because it means that disciplines may receive less funding than they might have received and that, in some instances, they may receive more funding than their performance merits. Translated to institutions, that means that the subject in each institution would be overfunded or underfunded to the extent that the panel had marked up or down.

We know that the performance in cognate areas is not as variable as the RAE grades in some areas might suggest. For example, in medical and biological sciences, the performance of institutions in the UK as a whole is generally much better in international terms, as measured by bibliometric impact, than the RAE grades might suggest. As another example, we know that the performance between the three areas of mathematical sciences—pure maths, applied maths and statistics—is broadly similar, yet the 1996 RAE grades suggest that statistics is rather weaker than the other two areas. That was a function of different scoring practices among the panels.

We have introduced some important mechanisms to address that problem for 2001. Again, we have made consistency one of the stated key principles by which the exercise is run. For the first time, we have introduced umbrella groups—meetings of the chairs of each of the panels in broad subject groupings. One of their key concerns is to ensure that there is a reasonable degree of consistency in the panels' approach to the assessment and in the setting and application of standards of excellence. During the criteria setting, the umbrella groups were helpful in identifying areas of good practice and in sharing ideas about approaches to the assessment. This summer, they will meet again to examine the provisional grade profiles from all panels in the broad areas to ensure that variations in grade profiles for the subject reflect genuine differences in performance rather than variations in marking practice.

It is also important that we have consistency against the international excellence benchmark standard. For 2001, we have introduced a requirement that all our panels consult a number of advisers who are experts in their discipline and are based outside the UK. Once a panel has developed a provisional grading, it will consult about five of those advisers. The key question that the advisers are being asked is, "On the basis of the profile of grades and the submissions that you have examined, do you believe that the panel has set and applied the benchmark standard of international excellence correctly?" That is an important additional perspective for the process.

As part of the process, we have given institutions the right to request that their work be looked at by more than one panel. That is particularly important where an institution's work is interdisciplinary. Those practical measures are designed to improve the consistency of the process, particularly in cognate subject areas.

Des McNulty:

Thank you for that helpful answer. However, there is a danger that the design failures in the assessment system are being addressed through bureaucratic restructuring that serves as a safeguard against, rather than a solution to, the more fundamental problems.

One of the criticisms of the 1992 and 1996 RAEs is that it was difficult for institutions and departments that were putting forward submissions to assess in advance how assessment outcomes would link to funding outcomes. Institutions expressed uncertainty about whether it was better for them to aim for a higher grading, on the basis that that would lead to a better funding outcome, or to put in more of their staff's research because, although the latter might achieve a lower grading, a similar funding outcome would be achieved. There is an added dimension to the situation in Scotland, because SHEFC and the Higher Education Funding Council for England have tended to point in different directions. As a result, institutions have felt a bit uncertain as to how they should pitch for funding. Have any mechanisms been put in place to reduce that uncertainty?

At the other end of the exercise, when funding is being awarded, should institutions be more transparent in converting the funding that they are awarded to supporting research activity in the disciplines for which the funding was awarded? There seems to be wide variation in the way that academic institutions respond to the money that they receive from the funding councils. I am not sure that that would be tolerated in other areas.

John Rogers:

Those are interesting questions. Before I express a view, I should perhaps make it clear what my position is with respect to those matters. My concern is with conducting the research assessment exercise throughout the UK on behalf of all the funding bodies. That means that I am quite deliberately not involved in setting funding policy for any of the funding councils. Funding policy is a question for each of the councils. Having said that, nobody would be happier than I would be if the funding rules were made explicit in advance. The question that I am asked more often than any other, when I talk to institutions about the assessment process, is the question about the link to funding.

It is fair to say that all the funding councils are acutely aware of institutions' concerns. They have tried to be more explicit at an early stage about their funding intentions. SHEFC and HEFCE, in their reviews of research policy, have set out their preferred approach to funding in advance of the submission date for the 2001 RAE. However, funding intentions and preferred funding options are never the same as actual cash grants. There is an important reason why actual cash allocations or more detailed statements of financial intention cannot be made in advance.

Each council is working with a fixed pot of money and, without knowing the performance of the sector in terms of grades, it is impossible to be specific about units of resource for each set of performance grades. If you set a particular unit of resource and the sector then performs differently from your expectation, you may blow the bank—if the sector performs better—or you may be left with a pot of unallocated money. That is a technical difficulty, but councils have tried to give better early impressions of their preferred funding approaches through the process of review and consultation.
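
[Note for readers: the fixed-pot difficulty that Mr Rogers describes can be sketched as follows. Only the logic comes from the evidence; the pot size and volume figures are invented for illustration.]

```python
# Why a unit of resource cannot be fixed before the sector's grades are
# known: the fixed pot is divided by total weighted volume, which depends
# on how every institution performs. All figures are invented.

POT = 100_000_000.0  # the council's fixed research allocation, in pounds

def unit_of_resource(pot: float, total_weighted_volume: float) -> float:
    """Value of one weighted unit once the whole sector has been graded."""
    return pot / total_weighted_volume

planned = unit_of_resource(POT, 40_000.0)  # planning assumption for the sector
actual = unit_of_resource(POT, 50_000.0)   # sector performs better than expected

# Promising the planned rate in advance would "blow the bank" if the sector
# improved; promising a lower rate could leave money unallocated.
print(f"planned: £{planned:,.0f} per unit; affordable: £{actual:,.0f} per unit")
```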

On the transparency of the practices of universities and other institutions, I should probably let the committee know that, before I moved to my present position, I was a member of the administrative staff at the University of Aberdeen and was latterly in charge of strategic planning for the university. The question of internal resource allocation models was close—I would not say dear—to my heart and is something that I have also looked at more widely in the sector.

A few years ago, there was a study of internal resource allocation practice. It was found that most institutions reported remarkably similar internal processes for resource allocation. None of the funding councils requires institutions to follow its funding models when distributing money to their departments. When practice throughout the UK was examined, it was found that virtually all institutions took significant account of a department's contribution to their overall funding council grants in making their internal allocations. Internal resource allocation models in universities typically involve having regard to performance in terms of recruiting students and in terms of research grades.

Nearly all institutions took some strategic top slice before distributing resources among their departments. That top slice, which is usually in the form of a percentage taken from the top, is designed to provide a pot of money for the institution at central level, to pump-prime important strategic developments or to give additional assistance to areas either of strength or of weakness. For example, after the 1992 research exercise in the University of Aberdeen, we found that the performance in our law faculty was well below that to which we aspired, so we invested heavily in restructuring the faculty, specifically to improve its research and teaching performance. We were able to do that because we had a strategic reserve that could help us in that work.

What also tends to happen is that, at faculty or school level, there is a similar, but smaller, degree of strategic top-slicing from departmental allocations. However, all the money ultimately feeds back into academic activity, although by slightly different routes. As for whether institutions should be more transparent in their internal mechanisms, that is probably a question for them.

Finally—

The Deputy Convener:

Mr McNulty, I am quite anxious to let other members have a chance, and you have had a fair old chunk of the cake so far. Do you have a final, pertinent question?

Des McNulty:

Yes.

You say that the variation between institutions might be quite considerable and that you have no specific view on whether that variation should be included in scaling. Do you feel that institutions that do not get a significant amount of research funding are thereby disadvantaged in terms of their capacity to invest strategically in research activities, as you suggest was possible at the University of Aberdeen?

John Rogers:

On the question of transparency of internal processes, the funding councils do not have any remit or locus to determine how institutions behave in pursuit of their objectives, other than to safeguard the use of public funds. We cannot plan for institutions—we do not want to plan for them and we cannot determine their internal processes or academic development.

On the advantage or disadvantage to institutions that do not have significant RAE performance, it is true that from their research income, they would have less cash available to top-slice and therefore less to invest in research. However, the research and teaching grants that are calculated by the funding councils are amalgamated and passed to institutions as a single block. It is within the rights of institutions to take an element from the total teaching and research allocation to invest strategically in the development of research or teaching activity, should they choose to do so. It is not determined that an institution must spend the research element of the calculation on research or the teaching element on teaching.

We have found that institutions that have been able to win relatively small sums of RAE funding—particularly the ex-polytechnic sector, after the 1992 abolition of the binary line—have often been able, through selective investment, to increase their performance significantly. In 1996, we saw some departments from new universities overtake their old competitors. I expect that pattern to increase in 2001.

Mr Kenneth Macintosh (Eastwood) (Lab):

How much disruption and continuity is there between each assessment exercise? Is it a very disruptive process for the institutions? Can you give a rough idea of how much the exercise varies?

In Scotland, if research funding were removed at 3b or 3a level, what percentage of institutions would be affected? What is the coverage? How many departments attract ratings of 5*, 5 and 4?

Are you asking that question in a Scottish or UK context?

I am asking in a purely Scottish context, but perhaps Mr Rogers will not be able to answer and it will be a matter for SHEFC.

John Rogers:

I do not have the data here that would let me answer that question. SHEFC would be able to provide those figures. SHEFC publishes a detailed breakdown of the allocation of funds to all institutions. I could provide that information for the other funding councils, if members would find that helpful. However, it would be better for me to provide those figures in a factual statement outside the meeting, rather than taking a guess now and getting it wrong.

Continuity is an important principle governing the conduct of the exercise—it is one of the exercise's stated principles. We want to give institutions as stable a platform for assessment and funding over time as we can. We have kept much of the exercise for 2001 the same as it was in 1996. The rating scale and the core methodology are the same. Indeed, the core methodology of expert peer review is the same as it has been from the beginning of the process. The data requirements for 2001 are virtually identical to those for 1996. The only significant addition for 2001 is that there must be a gender flag for each researcher. That information is not passed to our assessment panels, but is collected for monitoring and evaluation purposes. There is no significant disruption in terms of the formal requirements of the exercise.

We always try to assess any proposed change very carefully. We review and evaluate the exercise after each iteration, with a view to continual improvement. We will introduce a change only if we believe that its benefits and the case made for it by those whom we consult will be greater than the cost—principally the cost to institutions—of its introduction. One example of a change for 2001 is the introduction of a new staff category in order to avoid any disproportionate detriment to institutions that lose key members of staff to competitor institutions close to the closing date for submissions. There is now a category in which the former employer institutions can declare those staff and receive full credit for the quality of their work. We evaluated that change carefully, having been asked to do something about that concern.

That has been one of the most interesting features of working with institutions in preparing their submissions and helping them to understand and implement a rule change of that type. It has been disruptive to the institutions, because it is a new feature, but it is a limited change against the background of broad continuity.

Institutions have to invest heavily in preparing RAE submissions or, more specifically, in developing the research that underpins those submissions. There is no doubt that the management of research in institutions, and especially the management of the research infrastructure and environment, is much more strategic than it was 15 years ago, when selective funding and the RAE were introduced. That is reported and confirmed almost universally by institutions. That is what is responsible for the international success of the UK sector as a whole. The RAE does not create excellence; the researchers and the management of the environment for researchers in institutions have led to that success. While that creates disruption in activities, it is now seen very much as an embedded part of on-going research quality improvement, just as on-going teaching quality improvement is now embedded in institutions' systems and is no longer as disruptive as it might initially have been.

We considered carefully the cost to institutions of participation in a research assessment exercise. A study of the burden of accountability on institutions was conducted recently for HEFCE by independent consultants. As part of that, we conducted a detailed study in two universities of the different types of direct and opportunity costs to institutions of participation in the RAE. If we add the cost of participation to the direct cost to the funding councils, the RAE costs 0.8 per cent of the total public funds that we distribute using the results of the exercise. Compared to other assessment methods, for example those used by the UK research councils, it is a very efficient process. The research councils typically spend 5 per cent to 6 per cent of the money that they distribute on their assessment mechanisms.

The process is undoubtedly a major undertaking for institutions. It is of tremendous importance to them in financial terms, but even more so for their academic reputation and for achieving the quality mark that they aspire to. It requires a tremendous amount of work, and many people up and down the country breathed huge sighs of relief when they sent their submissions to me at the end of last week. I talk regularly to people in university administration, because many of them are—or at least used to be—my friends. The submission represents the culmination of much work, but the process is no more disruptive or burdensome than it need be to meet the purpose for which it exists.

Tavish Scott (Shetland) (LD):

You said in your opening remarks that there are a number of representatives from industry on your panels. You also said that the assessment process uses conference proceedings to measure performance, for example in the engineering discipline. I do not see anything else in the guide to the RAE on what might loosely be described as the commercialisation side of the equation. Is that part of the process? Does the process measure output through the applicability of the final result to the commercial sector?

John Rogers:

The application of research has been a live question throughout the RAE during the 1990s and we have taken some positive steps on that for 2001. Commercialisation of research and dissemination of scientific findings are not, in themselves, what the RAE measures. The RAE seeks explicitly to measure quality, and panels seek quality in all its forms. We are interested in quality, but we do not prescribe the forms in which it may be represented. As I said, we have a broad definition of the research outputs that can be submitted. We have introduced a new category of research output for 2001 to permit confidential research reports that are produced for companies or policy-making bodies to be entered. That recognises that an important part of applied research work was previously invisible to the exercise.

More broadly, we have tried to strengthen the message about the equal treatment of applied research work within the process. We have significantly more non-academic users of research from industry, the health service, the public sector and other places on our panels, to ensure that a perspective is available within a panel's assessment process that lets it treat applied research equally. That translates into the assessment process in the requirement for panels to set out criteria that ensure that all forms of research receive equal treatment and that they are assessed on the same basis. No assumptions are made about excellence being associated with a particular form of output. Panels cannot say that an academic journal article is better than a patent or a report for a policy-making organisation. We must treat all research on its own merits.

To do that practically, panels must find appropriate criteria for different research outputs. When a significant body of applied work is likely, many panels will say that one of the measures of quality will be the impact on the discipline, policy or practice, or the impact on commercial success. The commercial impact is never an end in itself, just as the impact on the academic discipline is not an end in itself. It is important that such criteria are considered to be different, but equivalent, criteria for measuring different types of research. If research has had a significant impact on policy development—for example, by a group of social science or education researchers—it is important that that impact is assessed for what it tells the panel about the quality of the work. However, as in the academic field, appalling research can have a dramatic impact, so impact can never be a determinant on its own.

Nevertheless, it is important that panels examine the equivalent impacts, so that they have practical measures for considering different work. Patents will be considered in the context of the science that underlies them, just as academic articles would be. We have tried to treat work in equivalent and practical ways in the assessment process, rather than make grandiose statements.

We have also tried to get the message across strongly about the equivalence of treatment. We have worked for some time with the Department of Trade and Industry and the Confederation of British Industry to get that message across and to encourage academic staff and industrialists to engage more broadly. As part of the funding councils' and each Government's stated aims of improving and developing the commercialisation of research, we have tried to build that engagement into the assessment process, and we have tried to get the message to academics who are deciding to which work they will give priority. That task is the hardest part of the equation.

Tavish Scott:

I have a supplementary question that relates to a question that the convener asked. Mr Rogers said that some higher education institutions did not receive funding because they fell below a cut-off point in the rating system, and that industry sometimes funded the research applications that they had developed. Is it not a little embarrassing that industry in some circumstances thinks that those issues are important and is putting money into them, but that the Government, through its various funding organisations—through the research assessment exercise—shows that it does not consider them to be relevant or worthy of investment?

John Rogers:

That depends very much on local circumstances. Some industrial funding is attracted by universities on the basis of well-established local partnerships. We have found that, where there is considerable local success of that type, industrial funding tends—if we consider the lower grades—to be won more by departments that are achieving research gradings 3 or 4. They do a lot of work that is of national excellence and which is important because of that. Local industry, and sometimes national industry, recognises that.

Sometimes, the research funding is tied to important training programmes. There is no reason why a department must be excellent in research to be excellent in training—particularly post-qualification professional training. There is sometimes an element of research investment in developing such a training relationship.

A range of different circumstances therefore apply. There is a close correlation, if we consider the top level, between the absolute sums of money that are attracted from industry, and the absolute sums of money that are attracted through the RAE and, indeed, from the research councils. Research-intensive institutions still, on average, do better in industrial funding, charity funding and other types of funding.

Local practice is important. The pockets of excellence do well. It is relatively rare to find an institution whose departments are rated no higher than research grading 1 attracting significant sums of industrial funding for research. Such an institution might attract significant industrial funding for other activities, but not for research.

We could, I presume, obtain those figures from the funding council. Is that correct?

John Rogers:

The funding council will have data on that.

The Deputy Convener:

I will ask the clerk to ensure that those data are made available.

Mr Kenny MacAskill (Lothians) (SNP):

As an aside, I wonder what effect—if any—the reduction in expenditure on research and development as a percentage of gross domestic product has had on the funding councils. I note that the United Kingdom's expenditure on R and D has gone from 2.2 per cent of GDP down to 1.9 per cent, while Finland's has gone from 2.2 per cent to 2.9 per cent. Does that have any effect on the pressures on what are, after all, limited resources?

John Rogers:

The reduction means that researchers in our universities must work much harder to maintain their international competitiveness. One of the studies that was undertaken as part of the HEFCE's review of research policy and funding—which is applicable to the UK and is referred to in the SHEFC review documentation—examined the performance of the UK sector. In terms of the citations made of work by UK researchers—that is, the number of times that work is referred to and used by the academic community in those disciplines internationally—the UK is first in the world. Our research has a greater impact on its subject communities than that of any other country. We produce more research papers per dollar spent than any other country in the world—we are tremendously efficient.

That, in part, is the answer to Kenny MacAskill's question. The effect has been that we work extremely hard. We do not feature well in global terms with regard to the total amount of GDP that we invest in research in our universities and we do not do particularly well in comparative terms on industrial investment in R and D. We spend less than other countries, but we are doing tremendously well with the money that we spend. That undoubtedly must be to the credit of the people who do the research—they work very hard to achieve that.

Mr MacAskill:

On a separate issue, what cognisance is taken of the economic future for Scotland and the UK? I can understand why a department might receive a research grading 4 or 5, but might not be viewed—given that we are talking about limited resources—as integral to the economic well-being or future of the country. I understand that, in the Republic of Ireland and in Finland, funding councils make fairly hard choices. What cognisance is taken in the RAE of future economic interest for the nation as a whole?

John Rogers:

We are careful to ensure that work that is of national importance—whether that be national UK, national Scottish or whatever—is properly taken account of, and that work that may be of local, regional or national application is regarded as capable of demonstrating international quality.

It is important to realise that the international excellence and national excellence benchmarks are just that. That does not mean that work that is focused on Scotland, or on Aberdeen, or on health care in Grampian or Strathclyde is necessarily downgraded in the process. A principle of the exercise is that such work must be regarded as capable of demonstrating international quality.

Mr MacAskill's real question is about priority within funding allocations. All the funding councils have had the facility to introduce a priority element in deciding their funding allocations, but none of them has done so. SHEFC, in its current model, has removed the priority option. The real reason for not invoking the priority element in the funding model is that it is not supported by the research community. The reason behind that, as I said earlier, is that the money that is provided by the funding councils for research is designed to provide and develop an international-quality research infrastructure with permanent academic staff working in good research facilities, with a properly founded environment for their research. That accounts for about one third of the total research funding; the rest of it is tied to specific projects.

The other major part of public funding for research, which flows through the research councils from the Office of Science and Technology, has thematic priorities attached to at least part of it, whether that attachment is through reactive grant schemes or research programmes within which people are invited to apply.

The combination of priority or targeted research activity and the more general, open research funding that is made available as a result of the RAE process is seen as an important strength in the way that the UK funds research. The UK is unusual in having that dual-support system.

If a Government wanted to promote a specific area of its economy, in which research was poor or non-existent, how could it ensure, through the current mechanisms, that R and D funding was granted?

John Rogers:

All the public funding for research, other than the funding that is given as block grant through the RAE process, is tied to specific purposes. Any of those measures would be available to promote specific priorities.

Do the funding councils try to influence the size of the pot?

John Rogers:

The size of the pot is influenced mainly by the volume of work that is submitted in any particular research area. In Scotland, reference is also made to the average performance of that research area against the performance of the whole sector. The pot is determined by the number of people who are active in that subject throughout Scotland and throughout the UK. In Scotland, the average performance of that group of researchers, as set against the whole research community, is also recognised.

Does that mean that the funding councils do not seek to enlarge the whole pot? Are they interested simply in how it is allocated?

John Rogers:

The allocation is determined formulaically. The total amount of money that is available is divided up, principally on the basis of the volume of work that is undertaken in each area.
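
[Note for readers: a minimal sketch of that formulaic division, in the same illustrative spirit as the earlier sketches. The volume-based division and the Scottish relative-performance element are described in the evidence above; all the figures and subject labels are invented.]

```python
# Dividing a fixed pot among subject areas principally by volume of work,
# with the relative-performance weighting described for Scotland.
# All figures are invented for illustration.

POT = 100_000_000.0

# Weighted volume of research activity submitted in each subject area.
VOLUME = {"medicine": 20_000.0, "engineering": 12_000.0, "history": 4_000.0}

# Average performance of each subject's researchers relative to the whole
# sector (1.0 = sector average); applied in the Scottish variant only.
RELATIVE_PERFORMANCE = {"medicine": 1.05, "engineering": 0.95, "history": 1.00}

def subject_pots(pot, volume, performance=None):
    """Share the pot in proportion to volume, optionally scaled by each
    subject's performance relative to the sector average."""
    weights = {s: v * (performance[s] if performance else 1.0)
               for s, v in volume.items()}
    total = sum(weights.values())
    return {s: pot * w / total for s, w in weights.items()}

for subject, share in subject_pots(POT, VOLUME, RELATIVE_PERFORMANCE).items():
    print(f"{subject}: £{share:,.0f}")
```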

Mr Davidson:

The volume of applications in research ratings 4 and 5 is rising and I presume that that will suck out a lot of resources. Will that minimise the amount of money that is available for those who are seeking to attain national levels of excellence?

John Rogers:

That is certainly the case; all the funding councils are wrestling with that core question. The funding councils in England and Scotland, as part of their review process, have stated that their preferred priority, if they are forced to make a choice, is to protect the top rated departments—the 4s, 5s and 5*s. Unless more money is made available, that will mean an inevitable dilution of resource further down the ratings. The two funding councils have estimated the additional money that will be needed in the pot, if all grades that are currently funded are to be protected at current funding levels.

That brings me back to my first question. As soon as that work is done, will a submission be made to Government?

John Rogers:

The English funding council's submission, which is part of the comprehensive spending review, quotes specific figures that the council expects to need if the sector's performance improves—as it has done between previous exercises—and if there is not to be a dilution of resource for at least some units within the grade.

Mr Davidson:

When the Finance Committee met in Aberdeen, the Minister for Finance and Local Government came along. Earlier that day, we had presentations from the university sector about funding. The minister said that it was not up to him to decide how money was allocated and that that was entirely up to the funding councils. Given that, and considering the demand from the higher education institutions in Scotland for a review of the ratings and the value that is attached to them—because of the need to stimulate R and D—do the funding councils take the view that a review is needed?

John Rogers:

That question is principally for the funding councils. The review of funding policy that is being undertaken by all the councils is designed to address precisely such questions. The funding relationship is such that each of the councils is given a global allocation and a degree of policy steer about how that funding should be implemented. As agencies of the Government, the councils translate that policy steer and share out the pot of money. To that extent, the councils determine the allocation of resources, but they are allocating a fixed pot of money that is given to them by the Government.

The Deputy Convener:

I have a final question, which follows on from Mr Davidson's line of questioning. Some commentators have suggested that there may be a grade-creep issue in the 2001 exercise. It has been said that that is why SHEFC is flagging up the possibility of no funding for grade 3 departments. Do you have a view on that?

John Rogers:

There has been discussion about grade creep. I believe that what we are seeing, and what we have seen between each exercise, is an improvement in the sector's average performance. There is independent evidence of that. Our research is having greater impact than the research of any other country and that is one important demonstration of the fact that we are not kidding ourselves and scoring ourselves higher than we really deserve. As part of the improved management and performance of research in universities, we have found that we are undertaking more and better research. That is reflected in the RAE grades.

One of the reasons for our introducing non-UK-based international advisers for 2001 was to perform an additional check on the application of the benchmark standard of international excellence. We always check that the grades that we are awarding are correct and that we are not deluding ourselves. We have found that in some areas, such as medicine, we have been hard on ourselves. There is genuine improvement in the sector. The situation is similar to the classic debate about school grades: when people do better, the question arises whether the standards are getting easier. We have reasonable confidence in the RAE and can say that standards are not getting easier.

If anything, it is getting more difficult to be internationally excellent, but performance in the sector is improving, exercise after exercise. If that happens again—as we expect it to in 2001—SHEFC, HEFCE and the Higher Education Funding Council for Wales will all have precisely the problem that you describe. There will not be enough money to protect that excellence and to fund the nationally excellent work at the current levels.

The Deputy Convener:

Thank you very much. I appreciate the fullness of your answers. The session has been immensely helpful to the committee.