
Education, Children and Young People Committee

Meeting date: Wednesday, December 13, 2023


Contents

  • Attendance
  • Artificial Intelligence and Education

Artificial Intelligence and Education

The Convener (Sue Webber)

Good morning, and welcome to the 31st meeting in 2023 of the Education, Children and Young People Committee. Our first agenda item is an evidence session on artificial intelligence and education. I welcome Ollie Bray, strategic director, Education Scotland; Helena Good, director, Daydream Believers; Chris Ranson, physics teacher and lead for AI integration, Dunblane high school; and Professor Judy Robertson, chair in digital learning, University of Edinburgh. This is our first evidence session on the subject. Although the topic of AI has been raised in other sessions on education reform, we were keen to hear a bit more about this fast-moving area.

I invite Ollie Bray to make some opening remarks, after which we will move to questions from members.

Ollie Bray (Education Scotland)

Thank you, convener. We had a quick chat beforehand and we thought that it would be useful if I made a few opening remarks. I know that that is quite unusual for this type of evidence session, but we thought that it might be useful, given that AI and the use of AI in education are often misunderstood. An issue that we often come across in our work is the fact that although we all use the same words—for example, the words “artificial intelligence”—we are all talking about completely different things, so it is important that we try to get a common definition for such key terms.

It is important for the committee to know that there are a few reasons for that, one of which is that there is no universal definition of what we mean by artificial intelligence. The field is constantly being redefined, and some tools that were once described as AI are no longer classed as such. I will mention Scotland’s definition of AI in a minute, for complete clarity.

The second reason that there is sometimes confusion about AI in education relates to the influence of science fiction, where we see humanoid-type robots walking around and we assume that they can do such things. Such behaviours are sometimes overemphasised by the media.

The third reason, which is perhaps the most misunderstood but probably the most important, is that what is easy for us as humans is very hard for artificial intelligence, and what seems hard for us as humans can often be very easy for artificial intelligence.

I will give a quick example of that. Some of you will remember that, back in 1997, the world chess champion at the time was beaten by an IBM system and, more recently—in 2017—Google’s DeepMind beat a top player at Go, the game in which you place stones on a board. Although both of those games are incredibly complex to play, they involve just a series of algorithms. What is interesting is that, at the time, the technology to enable artificial intelligence to pick up the chess pieces and move them around was not available. What is easy for us to do in terms of balance, movement and interpersonal skills is very difficult for artificial intelligence. What is easy to compute and run through algorithms is easier for machines. That is one of the most misunderstood things about AI.

For complete clarity, in Scotland we define artificial intelligence as:

“Technologies used to allow computers to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation.”

It is also important for the committee to know that there are several subfields of artificial intelligence, some of the names for which are used interchangeably. You will have heard terms such as “deep learning”, “neural networks”, “machine learning”, “symbolic logic” and “knowledge graphs”. All of those are sub-themes of artificial intelligence.

We will not get into the details of this today, but I imagine that, when we get into the discussion, there will be an opportunity for us to talk about the generative AI tools that have been produced for commercial reasons or for social reasons that are now being applied in education. I hope that we will also be able to get into the debate about specific tools for education that use generative AI and other forms that are built for education. Those are two separate things.

We are very happy to discuss the ethics of the issue and the professional learning—or, indeed, the learning more generally—that is required here. I hope that that is a useful introductory statement to set the scene.

The Convener

Thank you. A lot of the terms that you mentioned bring fear to me, because I do not understand this sphere at all. I hope that, by the end of the session, everything will be a bit calmer in my head.

You spoke about generative AI. How does that affect how teachers and educators assess learning and understanding, particularly the outputs of unsupervised study? Everyone has fears about that. Does the use of generative AI have implications for certification practices and policies? I suppose that I am referring to cheating. Perhaps we can pick up that issue to start with. Who would like to go first?

Ollie Bray

I am happy to go first. I am sure that others will come in with opinions.

The first question that arises is, what is cheating when we use generative AI? I think that that is an important question for us to consider. If I am writing an essay and I ask generative AI or a Google search, which is based on an AI algorithm, a clarifying question, is that cheating? If I write a paragraph and ask ChatGPT to rewrite that paragraph for me and to check my spelling and punctuation or to rephrase it, is that cheating? If I use a simple prompt to ask ChatGPT to write the whole of the essay, claim it as my own and hand it in, is that cheating?

The point that I am trying to make is that there is a bit of a spectrum of practice around some of these things. It is more a question of thinking about how we use technology tools, including tools such as generative AI, to help us to improve the learning experience and to make that work.

Professor Judy Robertson (University of Edinburgh)

I think that to start to think about AI by thinking about cheating might be a slight distraction, because generative AI, in particular, will transform education so much that we should be thinking about what we need people to learn about and what our educated citizens will need to know in five or 10 years’ time. Maybe the kind of things that people need to know can no longer be assessed by the sorts of exams that we currently have. The spectrum that Ollie Bray mentioned is quite useful. Most of us would say that misrepresenting work that has been done by somebody else as your own work is cheating by most standards, but it is by making use of the grey areas in between that we will change the things that humans do and the things that machines do for us in education.

Helena Good (Daydream Believers)

Hello, everybody. I am a daydream believer—that is nothing to do with the Monkees and everything to do with taking the what and the why and creating a how. We are a not-for-profit organisation that is currently working with 35 high schools across Scotland on the creative thinking qualification. That is the equivalent of a national 5 and a higher—level 5 and level 6 in the Scottish credit and qualifications framework. The qualification is credit rated by Edinburgh Napier University. We anticipate that, next year, that number will double and there will be more than 75 high schools working on a creative thinking qualification.

The reason that I am speaking about assessment in this context is that the qualification moves us away from acquisition to application. There is no formal exam. Our young people work through a process, and it is the process that is recognised in that. The conversation that we are having at the minute about gen AI is very much around the question, what if we get it right? Judy Robertson and Ollie Bray are right: it is important not to fixate on what could go wrong, but to ask what we could achieve if we get it right.

If we break that down in the work that we are doing, the what-ifs are the questions—they are daily questions, as this area moves at a speed that none of us anticipated. The “we” is a really important part of that collaborative movement. We are working with industries, particularly the creative industries, on which AI is having a massive impact. We are empowering our teachers and our young people to be part of the conversation around what feels fundamentally right in terms of this experience.

I can tell you a little bit more about the qualification, but maybe that is for later.

The Convener

That might come out later in our discussion. Chris Ranson, as someone who is actively working as a teacher right now, what are your thoughts on unsupervised study and the outputs from that?

Chris Ranson (Dunblane High School)

I have mixed feelings about that. I am aware of the risks that exist within the current ecosystem of how school works. If we can shift the way that school works, AI will work really well. AI could enhance the things that came out of the Muir report and the Hayward review. The issue at the moment is that we are trying to fit into that ecosystem a tool that is much more than a tool—the use of AI represents an expansive paradigm shift. I do not think that I am overstating it when I say that.

I have a list of four things that I think we should be concerned about in bringing AI into our current ecosystem, but cheating is not specifically one of them. I think that that will iron itself out over time in that we will have to adjust how we assess. That is helpful, because it addresses the issue that Scottish education has with the examination approach. I have been thinking a lot about Goodhart’s law recently—the idea that when a measure becomes a target, it ceases to be a good measure.

What does every pupil I teach in my higher physics class want? They want an A, so that they can go on to the next step of education, rather than thinking, “Let’s be expansive in our thinking—let’s explore some of these topics.” AI can come in and work with us in that respect, but I am a little less sure about whether there is the time and the capacity to do that within schools in the current system.

The Convener

That takes me on to my second question. If AI can produce work of similar quality for the learner, does that limit their intellectual curiosity and their desire to learn skills? What are the implications of that? Have you considered that at all?

Helena Good

I can bring context to the discussion. We are working with teachers who are delivering the creative thinking qualification. We have said to our teachers, “We want you to use gen AI when you think it’s appropriate.” We are seeing how that works in the classroom. A young person who is researching an idea will sometimes use AI to come up with directions and ways in which they can create a survey to find out about it and to generate feedback. That becomes particularly interesting with creative thinking. For some reason, many of us rule ourselves out as creatives. Somewhere along the line in our education, someone has told us that, because we cannot draw a bowl of fruit, we are not artistic or creative.

The Convener

That is me.

Helena Good

We have a whole swathe of young people who do not see that as a skill. If the World Economic Forum is predicting that as one of the skills of the future, we all need to see ourselves as creative. What happens in the qualification is that the distance between having an idea and seeing that come to life can sometimes seem quite vast for people because they feel that they cannot draw.

We are seeing our young people putting in prompts. For example, one of the challenges is for them to create a wellbeing space in their school and activate it. One of the learners had an idea about origami and getting people to come in and make something in the space. She could make one origami model, but to create that and make it look as though it was in the space would have taken a huge amount of time, so a teacher worked with her to put in a prompt to Bing—one of the gen AI tools—and, within seconds, she had visuals that she could take away, begin to analyse, talk about and move back and forward between. It is a case of young people having the ability to use it, to filter it, to look at it in the context of, “Is this right? Is this what I think I need to move?” and to then go back and trust their own human intelligence and their own operating systems.

Professor Robertson

I have some examples of how my university students have been using generative AI in my course, which goes to the point about what sort of skills they are using it for. They are using ChatGPT as a brainstorming partner. They are using it to help them to work out how to tailor text for different audiences. They are using it to summarise articles to help them to understand things that were difficult for them at first. They are using it in order to get an idea of what the literature on a topic looks like. They are using generated images to illustrate their teaching materials—they are computer science students, not artists, so it is fine if they want to use Bing, as far as I am concerned. Sometimes, they are doing computer science-specific tasks.

09:45  

I am really pleased not only with how the students have started to use such tools to extend what they are capable of, but with how critical they have been in working out whether and how their performance is better when they are using AI than if they were simply using their normal brain. They are aware of where their brain starts and ends and where the AI can extend it, and I think that that is an example of where we will end up. Obviously, how that looks for my group of learners will be quite different from how it looks for Helena Good’s or Chris Ranson’s. I think that it will be part of an educator’s job to work out what we expect AI to do with a young person, depending on their age and stage and the tasks that they are working on.

The Convener

Does anyone have anything to add?

Ollie Bray

I have one thing to add. The first time that I was in this room was many years ago, when I was here as a geography teacher looking at technology in education. I bring up that story because, as a geography teacher at the time, I knew whether something that was handed in was the pupil’s own work, because learning is a process, not an output. When something is handed in, because you know pretty much all the young people in the class and you have been with them on the journey over the year or the two years of the course, you will have a rough idea of where all those different young people are.

Again, it is a case of thinking about how we assess and measure that process and work with young people to develop those skills rather than—to go back to Chris Ranson’s point—focusing so much on the output.

Chris Ranson

I think that, alongside that, we will need to change the way that we do assessments. Before the summer, I had pupils telling me, “I’ve been writing essays for English. I typed the question into ChatGPT and it spat something out. I handed it in and got it marked and it was fine.” Quite a number of pupils told me that. In our current system, that is cheating; in that scenario, there has been no decent learning. Perhaps there is an opportunity to bring in things such as vivas, like they have at university. We could use questioning techniques to see how the pupil is getting on.

At the moment, if I have a group of pupils and one of them decides to start using this new technology, I will notice that, for various reasons. As a physics teacher, I will probably notice because one of the dangerous aspects of the current iteration of the technology is that, although it can give you answers for various maths and physics questions, it will also hallucinate—it will come up with imaginary answers. Worse than that, it will explain how it got to the imaginary answer. If I had a random answer machine and I knew that it was 50:50 whether it was giving me the right answer, I could work with that. A machine that gives a really credible explanation of how it came to the wrong answer—I have seen that hundreds of times—is more insidious, but it is still easy to spot, as it is when something has been picked up randomly off the internet. Similarly, there will be ways of spotting that in different subjects.

However, issues will arise when pupils enter into a new year with a new teacher. Let us say that, at the start of the year, they build their own chatbot. That sounds complicated, but it is a very simple task, and it will become simpler. Apps to do that will come out in the next six to 12 months. I built a chatbot. It took me about 20 minutes, and I am not a tech guru; I do not have an information technology degree or anything like that. I went through a simple process whereby I made a chatbot that can mark first-year science investigations. I could also have made a chatbot that represented the understanding of a 15-year-old in Scottish education. If I made that at the start of the year and used it consistently throughout the year, it would be trickier to spot that pupil.

There will always be bad actors and people who are trying to get around the system. We need to come up with new assessment tools that are aware of the full picture and of how much nuance there is in this new landscape.

Liam Kerr (North East Scotland) (Con)

Good morning, panel. Chris Ranson, I want to ask about exactly the same point but look at it from the other end. Presumably generative AI reflects its source material. How will educators as a general category—and Professor Robertson might wish to come in after this, too—ensure that learners understand that the source material might be skewed and, therefore, treat the outputs with sufficient caution?

Chris Ranson

I have run four focus sessions at Dunblane high school, the very first of which was on this topic of reliability, not being able to trust these things and what I called the black box of large language models. Each of these tools is built on a dataset. A few are open source, so, technically, people can see where they are getting their data from, but with many of them it is a bit of a black box. Perhaps someone with more technical expertise should talk about that.

At this point, I do not feel comfortable recommending pupil use of generative AI in the classroom; indeed, I think that the companies have imposed age restrictions on these tools. I have not been telling teachers to get their pupils to do this, that or the other or to come up with new ways of doing assessments with generative AI specifically for pupil use. I have not said anything about that.

I am sorry—I have rambled a bit and I have forgotten your question.

Liam Kerr

It was about questioning the source material.

Chris Ranson

Yes, the source material. There are tools in that respect. The last time that I was here, I met Professor Ken Muir, who has been working with a company that is looking to mitigate the dangers arising from hallucination and from drawing on bad source material.

I think that I will just leave it there. Currently, I do not feel comfortable with it—I do not trust the source material. However, I will pass that over to Judy Robertson.

Professor Robertson

It is a great question. People should absolutely not trust what generative AI comes up with, because it is inherently unreliable. That is a feature of the technology behind it. Because it is statistically based and not transparent, you cannot ever rely on its producing something that is true all of the time. That is exactly why we need to teach AI literacy in schools; we have to make people very aware of AI’s limitations and why they should not trust it.

It is possible that there will be technological advances and that AI will become more reliable, but there are also human bad actors who might be deliberately using AI to generate misinformation. I think that we all know the dangers of misinformation and disinformation and know why our learners need to be aware of these things and have strategies against them. In a way, that is what education is moving towards—giving learners the critical skills to always be thinking, “Is this information true? How can I tell?”

Helena Good

In the creative thinking qualification, the fifth learning outcome is evaluation. In that part, we work with our young people on the work that they have produced and ask them to communicate their product by saying to them, “Tell us a story—make us care.” Very often, the evaluation is that AI enables you to tell a story, but it is your human intelligence and empathy skills that enable us to care. That is a really important part of the experience for young people; they are able to look back and see that if you put the same things in, you get the same things out, but it is your human context, your human intelligence and your understanding of your own operating system that enable you to communicate with empathy and context.

Liam Kerr

I am very grateful to you all.

I will now move to Ollie Bray. Obviously, you can answer that question if you wish to, Ollie. However, on a slightly different issue, the ability to put into practice the skills that the others have talked about will be impacted by access to IT. Is there not a risk of a digital divide emerging, with certain groups having access and gaining practice and other groups not? If I am right about that, how do we ensure that the use of AI does not, perhaps inadvertently, widen education gaps?

Ollie Bray

You have asked two great questions. I will pick up the first one—I promise that I will be very brief—and then pick up your other question, on devices.

Judy Robertson has just described one of the solutions with regard to AI literacy. You could call that critical literacy, data literacy or internet literacy; we have had different paradigms for these types of literacies over time. We need to double down on that because, as the tools get more and more sophisticated, the volume of information will grow and misinformation will get added in, which will cause difficulties and confusion, particularly for children and young people.

As the famous games designer would say, the solution is in the problem; if technology is the problem, the solution might be there, too. In an interesting exercise that we try to encourage with the teachers we work with, you create a piece of content with generative AI and the young people analyse it and look at where the misinformation is, using a variety of different sources. It is not a case of saying, “This is inaccurate,” or, “This is accurate”; we get the young people to examine it critically, as we would have done with a newspaper article in the past or as we would do with a web page that has just been generated.

A slight twist to this, which might come up in questions later, is that we have to be wary of very young children who are growing up in an AI-driven world, who speak to smart speakers such as Alexa and Google Home all the time and who, we know, start to build up trust in AI from early on. They do not have the wisdom to apply critical literacy to start with, and we need to think about that important point as we take our programmes forward.

As for your other question, Mr Kerr, you are absolutely right. If you want to teach responsible use, you need the technology to do so. Getting kids to imagine the technology and telling them to do or not to do something with it will not result in an authentic learning experience. I am reminded of the early days of internet safety and responsible use programmes, when we would do a lot of work on privacy settings with children and young people. Of course, social media was blocked in schools, so you had teachers standing up in front of classes, describing things or showing screenshots of privacy settings. It was completely abstract to the children; you need the technology to make that sort of thing work and get young people to understand the tools.

I think that you are right about the digital divide in this respect. We are seeing across Scotland big differences in the roll-out of technology in education. There is a variety of different reasons for that, some of which are complex, but I think that, if we want equity, we need to try to level the technology playing field.

If no one else wants to come in, I will hand back to the convener.

The Convener

Thank you very much. I call Michelle Thomson.

Michelle Thomson (Falkirk East) (SNP)

Good morning, everybody. I have a couple of areas that I want to look at, but, first of all, I want to ask quite an open, exploratory question.

To what extent is the issue here the requirement for skills that enable young people to use AI effectively—you have started to allude to some of that with regard to the different ages and stages of learning—and to what extent is it knowledge? My personal view is that acquisitive skills and curiosity will be utterly vital, because it is only through curiosity that young people will be able to learn the skills to interrogate and question things. However, you are the experts and I would appreciate hearing the views of Ollie Bray and perhaps Judy Robertson on that question, first of all, although I know that you will all want to come in.

Ollie Bray

I pretty much agree with you. I think that this is about skills, and curiosity is incredibly important in that respect. It is also about knowledge, though, and we cannot separate the two things. My worry is that, when we think about AI education, we think about the wrong types of knowledge; we sometimes become fixated on the workings and technicalities of the AI system. That is not to say that that is not important, but sometimes young people do not need all the detail.

What young people need is a knowledge of ethics and what is right and appropriate in society. I do not think that it is a simple question of knowledge or skills, but you probably know that anyway. Curiosity is important, but we must ensure that young people are engaging with the right types of knowledge.

Michelle Thomson

I am aware that our Scottish AI Alliance is underpinned by ethics. It is a key part of the framework, although one can then argue, “Well—whose ethics?”, which is, of course, an entirely different discussion.

10:00  

Professor Robertson

I absolutely agree that curiosity is very important; indeed, it is something that we should value in the human education system. I kind of agree with Ollie Bray and kind of do not. Skills are important, and knowledge is important, too, but where I disagree is that I think that you need to know enough about how some of the AI works to be able to understand the ethics of it. I say that partly because I am a computer scientist and think that everyone should know these things.

More seriously, I would point out that we have recently done some research, asking children what they understand about artificial intelligence through their interactions with Alexa. Because Alexa seems like a person—indeed, it is designed to be like a person in the way that it gives you answers—the children tend to overattribute intelligence to it. They are worried about trust issues and so on, but because they do not have any technical understanding, some of their fears are misplaced. However, there are other trust issues that they should be worried about but that they are simply not aware of.

We need to work out an age-appropriate level of understanding of how AI works for the different ages and stages and the best way of putting that across. That is the knowledge that is important. However, it needs to be kept updated and, because of that, teachers’ knowledge needs to be kept updated, too.

Michelle Thomson

I know that Helena Good and Chris Ranson want to come in, too, but on that point about age-appropriate use of generative AI, in particular, I would appreciate your thoughts on key roles in that respect. It was mentioned earlier that the applications themselves have some controls in place, but I am interested in how we enable youngsters at different ages and stages to develop some of that thinking and trust. Indeed, how trust develops in young people is a critical issue. I would appreciate your thoughts on that, Professor Robertson, before I bring in everyone else.

Professor Robertson

This is a huge area because of child protection issues. The limit for a lot of the tools is 13 years old, mostly because of US legislation. There is a real danger with generative AI, because it generates images, which is why I was pleased to hear what Chris Ranson had to say about that. I do not think that children should be using these things unsupervised. Certainly you could do it with a teacher, but there is quite a large child protection risk with regard to the generation of images.

I am not sure what the technical solution to that would be. It is good for children to have practice in a guided way. As Ollie Bray has suggested, I think that we need to help and guide children through this in an age-appropriate way, in the same way as you teach children how to cross roads. We cannot stop them crossing busy roads, so we need to teach them how to do it, and it is the same with these things. However, there is a role for regulation in that respect. That might well be beyond the remit of this particular committee, but it is something that the Government needs to look at.

Michelle Thomson

Helena and Chris, would you like to come in here?

Helena Good

When we look at what the World Economic Forum was predicting 10 years ago in terms of future skills, we see that it was talking about critical thinking, problem solving and creativity. That showed that this was already part of the landscape—we could see it coming. The fact is that these sorts of skills are uniquely human, and the issue is how we develop them and how our young people recognise them.

The knowledge part assumes that a teacher is a sage on the stage. Particularly with lots of stuff around project-based learning and situations where AI can help with personalised learning, we are asking our teachers to move to becoming coaches and mentors. That is the work that we are seeing in the creative thinking qualification, where there is no fixed outcome. The young people are curious, so they are set a challenge, and, as they undertake that challenge, their curiosity drives their desire to find out more about the ecosystem of a forest so that they can build a theme park connected with it.

I do not know whether any of you have teenagers, but when you talk to young people about knowledge, they will hold up their phone and say, “I have it all here, Mum—I know it.” That is a challenge. At that point, you have to ask, “How do we excite you about education? How do we give you something that makes you interested and curious enough to go and speak to people, find things out and come up with a solution?” AI presents an opportunity to support that sort of thing. A fundamental shift is going to have to happen, but there are ways of doing that.

Michelle Thomson

Chris, does what Helena Good has just described play into your earlier points about assessment, measurement and how things are going to radically shift?

Chris Ranson

I agree 100 per cent with what has been said. I will just add that there is a danger here, but it all depends on how we approach it. There is an opportunity here, but if we approach it in the wrong way, we might end up with a tail-wagging-the-dog situation, in which we prepare young people in schools for an AI world. Perhaps there is something to be said about rekindling the love of mind expansion and fulfilment of knowledge, and this might be an opportunity to do that.

In the briefing notes and various articles that I have been reading, there has been a focus on concerns about jobs and about fitting into an AI world. In that sort of dystopian model, we make children cogs in an AI machine instead of fitting AI to us as humans. However, some of the aspects of how the technology has been trained, which Judy Robertson will be able to talk about, suggest that it has been moulded to the way in which we think and talk—for example, natural language processing and such things—and we do have an opportunity to go down that route.

As I said earlier, I do not want to focus too much on fears and concerns about AI. However, bad actors have been mentioned, and I would just note that, at the last Goodison group meeting that I was at, one of the young people who was there said, “Wouldn’t it be great if we had AI sitting there in the classroom to chat to if we didn’t have time to ask the teacher?” There might be space for that at some point, but—and this bounces off what has been said before—because of what is currently possible with the technology, that is not something that I would want necessarily to bring into the classroom.

I did a session at my school on negative-use cases to try to prepare teachers for what a pupil could do with this in a worst-case scenario. I am not trying to be alarmist when I say this, but with some basic prompting—the way you interact with the AI tool is that you write a prompt—you can, depending on how you word the prompt, manipulate the AI into disregarding its current safeguarding procedures. For example, I was able to give both the Microsoft and Google AI tools—that is, the mainstream ones that are available—a list of chemicals from the chemistry department, and they were able to tell me how to make a weapon. I also got them to tell me inappropriate jokes. All of the things that you do not want pupils engaging with are currently possible without any real effort.

I do not want to sidetrack things, but I would also highlight the issue of mobile phones in schools. In Spain, there was an issue with boys taking photos of girls in the school and then using various apps and pieces of software that remove somebody’s clothing from a photo realistically.

I feel as if I am walking a tightrope here. I want to present this as something that will have massive opportunities if we adjust the way in which we do education, but there are also significant threats that we should be aware of if we simply try to slot it into what we do at the moment.

Michelle Thomson

I agree with you, and you have neatly led on to my final question. How on earth do we begin to tackle the challenge? I am mindful that, as parliamentarians, we need to support the education sector to keep up with the pace of change, which is startling and almost unfathomable at this point.

Professor Robertson

I spend quite a lot of time thinking about that issue. Teacher professional learning is really important. Particularly with AI, it is not a one-shot thing. It is not like when we learned how to use Teams during the pandemic—this is on-going and it keeps changing all the time.

We therefore need forms of teacher professional learning that respect practitioners’ ability to be creative and to innovate pedagogy, because they will have to keep doing that as the technology keeps changing. However, the teachers need support and it needs to be regular.

With data literacy, which is related, we have knowledge-creating communities through which university staff and teachers across local authorities work together. The teachers come to the university and learn about activities that they could do, then go away to try them in the classroom and come back for professional reflection with each other. What is great about that is that I have learned so much from what the teachers have tried and shared. They also seem, particularly in primary schools, to naturally share it with their colleagues, not necessarily just in their school but across clusters.

There is something important there about respect and equality between the teachers, the university and whoever else is involved. It is also about giving teachers enough time to be able to engage with the issue. We know that time for professional learning is always a problem in education. There are models for doing it, and we can get there, but it will be a lot of work.

Helena Good

First, we cannot do this alone and we are not on our own. Every industry is trying to grapple with the issue and the implications for the future workforce. More than ever, this is a call for education to be as collaborative as it possibly can be. We are working with the national health service and with some of the creative industries to look at the implications of AI for their work and to learn from the challenges and questions that come from that.

In the work that we are doing with Education Scotland’s digital skills team, we are looking at creating a digital landscape—it is a simple web page—which would enable us to maintain contact, provide updates and offer opportunities for employers across industries to share where they are and the insights that they are gathering. Sometimes, we go into our own little swim lane and think that the issue is our unique problem, but it very much is not, as we all know.

Ollie Bray

Interesting work is coming out from the US from an organisation called Common Sense Media, which works internationally and which is looking at matching AI apps to the appropriate age and stage, as we do in the video games industry with the PEGI—pan-European game information—ratings. That is an interesting area to explore and keep an eye on.

A linked point is that teachers are always in a very tricky situation with the issue. We know that some things in life are age-restricted—such as video games, films or artificial intelligence tools—and therefore we know that some children should not be exposed to those technologies but, frankly, they are. Teachers are therefore in a very difficult situation in trying to manage that. A lot of it comes down to speaking to children about the issue.

We have been through several paradigm shifts in that regard. For example, we now speak to children about content that they have seen on television, and we know that a lot of teachers now speak to children about content that they have played in video games. That did not happen 10 years ago, but the paradigm has caught up. We now need to get into the mindset of speaking to children about content that they have created through AI tools. Those are simple things that we do. We know that one thing that teachers do well in Scotland is knowing their learners and speaking to children. It is about making the link to different technologies.

I support what Judy Robertson and Helena Good said about the importance of teacher professional development. We need to make time for that. To be frank, we have to use curriculum reform as a springboard to try to get some of this important stuff into the curriculum. When we developed curriculum for excellence a number of years ago, some things were mentioned as things that young people need to know, but those things are probably more important than ever. The use of technology is one, and AI would be in that bucket. Creativity is another, and it is more important than ever—all the evidence points to that. Learning for sustainability has been in the curriculum, but we need to double down on it. Then there are things such as equalities—we know that we need to continue to get better at that—and the UN Convention on the Rights of the Child.

When we think about curriculum reform and education in Scotland, we need to make sure that we have those cross-cutting themes in there, with appropriate professional learning for teachers to ensure that they remain upskilled on these really important issues for society, so that they can support children and young people.

Chris Ranson

When I approached the leadership team at Dunblane high school, to their credit, they gave me free rein on how to approach the problem. It is outwith a lot of people’s understanding and area of expertise. The approach that I came to with the school was to try to move everyone through a four-stage process, involving awareness, understanding, utilisation and synthesis—synthesising that into policy. Obviously, that is all at a local level within the school. I have built a website on generative AI for educators, which follows that four-stage approach. I am slowly populating the website, primarily to help my school but also to help whoever else might be helped by it.

10:15  

I have found that you can get a lot done by focusing on awareness and understanding. I believe that the utilisation of the technology has not come to fruition yet and that there are still massive amounts of work to do to make it something that will be useful in schools. In the meantime, to deal with the immediate threat and to grasp the immediate opportunities, just making teachers aware is a massive and important challenge. I did surveys with the staff at the start of the year on how they felt about it, and the results reflect a similar mindset to that of pupils—a mixture of caution and curiosity.

As I see it, my role is to try to shift people so that they keep their curiosity, which is brilliant, and think about how to use AI positively. The more that teachers see what is possible with AI, the more attractive it will become so that, when the continuing professional development sessions and training come up, they do not think, “Oh, I’ve got to learn another tool like Microsoft or Google.” Teachers have minimal time for that, and I would guess that the appetite for learning about new software or a new tool is at an all-time low since the pandemic. However, if teachers see something that is truly expansive and can truly change the way that we do education, it will be so attractive that they will want to use it. My intention, in the approach that I have been taking, is to make it look like that.

Michelle Thomson

Thank you.

The Convener

That is a great link as we move on to ask about a more positive approach to AI.

Stephanie Callaghan (Uddingston and Bellshill) (SNP)

I thank the witnesses for being here. The discussion has been incredibly interesting so far, and I am sure that we will have many more sessions on the issue as time moves on.

I will stay with what you were talking about, Chris, but switch the question round a little. Have you been thinking about how AI might help to reduce the workload on teachers, in lesson planning, monitoring pupil progress and so on?

Chris Ranson

I am happy to submit information on my most recent session, which was on 22 November and was all about positive use cases for AI. Again, I tried to go through the process clearly, to give the group of teachers an understanding of what is going on underneath the bonnet of the technology, and then jump into thinking, “Right, now that we understand it, how can we use it?”

There are massive opportunities in various ways, such as in administration, although you have to be cautious. I will not go through everything that I did in that session, but I submitted anonymised test scores and asked one of the tools to come up with some suggestions. When you do that, it is helpful to first ask the tool, “Do you understand what I have given you?” You can upload an image or document and ask it, “Do you understand what you are looking at?” I did that, and it gave me three incorrect statements about what it was looking at. There is a word of caution in that you cannot just jump into the deep end. There is the big thing about trust—you cannot just trust the tools when you are doing these things.

As we have talked about, it comes back to ethics. AI will write pupil reports. You could give it some very brief prompts and some ideas about a pupil. You could even have a table of all your pupils with comments for each one and perhaps their scores—I have done this. You would have to submit that anonymously—I have been stressing the general data protection regulation concerns—but you could get it to write a report based on an Excel sheet of data. Obviously, you need to read it, and there are issues with hallucinations and it coming up with things that do not exist.

I could go on and on about what we could do with it. Another example is lesson planning. There is a massive opportunity for cross-curricular education and to get out of our silos. I picked two random things and submitted the national 5 learning outcomes for physical education and for geography. The tool produced a lesson plan about learning about the world around us through the sports that people play in different parts of the world. It was fascinating. I am not a geography or PE teacher, so maybe that is not a unique thought and someone has done that, but it is so easy to come up with new and exciting ways to do education using this tool.

Stephanie Callaghan

It is interesting how AI can spark that interest that makes you think beyond what you imagine initially.

I am interested in what the other witnesses have to say on that issue, on the point about getting the ball rolling initially with teachers, and on the point about things developing naturally as AI develops and becomes part of life.

Professor Robertson

Chris Ranson summed it up nicely, so I will pass over to somebody else.

Ollie Bray

There is no doubt that AI tools will naturally start to fall into life. We have mentioned things such as smart speakers. I am sure that pretty much everybody here will have typed an email this morning, and maybe as you were doing that, you pressed tab, because it was suggesting the next words for you as you were going along. It is AI doing that—we see it in the systems that are already there in giving prompts. In some areas, the technology will naturally drip feed in.

Chris Ranson’s examples are interesting. We need to be careful that we do not enslave new technologies to old ways of working. The question is whether it is still appropriate to send a written report home once a year for parents when we now have the technology to support real-time reporting and AI can support us in doing that. How do we get behind the creativity of these things and make them work?

We do a similar thing in the programme for headteachers that we have been developing and that we will pilot in the new year. We use generative AI to help people work through the process of a risk assessment for a school trip. To be clear, this is not writing the risk assessment; it is getting the AI to come up with different prompts for the venue, to get people thinking about the issues, and then to generate the letter that will go with that. I just give that as an example—it is not about replacing things, but it can be part of the process.

I said recently in a podcast that I worry that, with some AI tools, we could end up in a situation where we are given a document, we ask the technology to summarise the document and we pass that summary on to somebody else, but that person does not read the summary—they analyse it with a tool and, as a result, we lose the nuance. We could probably all relate to that a little, because lots of documents that come out at the moment are summarised for us. At the moment, it is probably a person doing the summary rather than technology, but, if we are trying to move things forward, it is important that we get into the nuance and understand the background.

However, there is huge potential to not just reduce teacher workload but change the way that we work. That is the important thing, but we have to be imaginative around that.

Helena Good

We are not the only ones looking at the issue. You can imagine that, in the NHS, being able to reduce the amount of admin is incredibly attractive.

There is a real opportunity for us to use AI to encourage teachers to go back to what many of them came into the profession to do, which is about the joy of learning and not about ticking boxes. AI should, and hopefully will, help with those admin tasks.

To go back to the opportunity around personalised learning, a huge amount of data exists in schools around learners. There is an opportunity to use data-driven insights to create personalised learning experiences, whether that is just within a classroom or generally looking at the ethos and aims of the overall school in the context of its learners.

To come back to Stephanie Callaghan’s point about teachers learning about the issue, we are all learners at the minute, and with that comes huge vulnerability. We submitted to the committee a short film that we ran as CPD with Glasgow School of Art. We brought together teachers from across the country and placed them alongside experts—if that is the correct word—on AI in the creative industries. I sat beside a design and technology teacher who said, “First, I recognise what it feels like to be a learner for the first time in a very long time and I’m absolutely terrified.”

There is a danger that we become immobilised, which is what had happened to that teacher. We know what it feels like when we do not know the answers—we can decide to remove ourselves. However, we need to take up that call to action, which is what that teacher did. With the support of a mentor, she learned techniques to produce a campaign and to stand up and tell a story. When we all move into that real ethos of thinking that we are learners, we can move from being immobilised to taking up the call to action to get this right.

Stephanie Callaghan

That is interesting. My next question is on how AI can support individualised learning. I am interested in that and the fact that it is almost about children and teachers learning together and learning from each other and valuing that. Teachers will perhaps become more like facilitators and will have an oversight role that is about keeping things respectful and keeping the ethics on board.

Professor Robertson

I was going to pick up on the point that Helena Good made about personalised learning. So far, we have talked mostly about general purpose generative AI tools that anyone can use. In the education space, there are also lots of commercial AI systems that are purpose built as tutoring systems. Their claims and promises are that it is personalised learning—every learner can learn at their own pace, as the system will adapt to how they are getting on if they need extra hints or more difficult or easier questions. That all sounds great, but there are a lot of dangers—actually I do not know that I want to say “dangers”, but there are issues that we need to consider, one of which is about privacy.

In this country, we respect children’s rights, and particularly the right to privacy. There is a problem with the idea of collecting data on children’s learning that could be shared with teachers or parents in the form of reports, for example. We need to ensure that we get children’s consent and participation on what sort of data is shared. There is therefore some promise from personalised learning, but there are also issues about how the learners feel about it. The learner should have autonomy to work with the machine and with the teacher, maybe in the role of facilitator that Stephanie Callaghan mentioned. However, it is the humans in the room who should be using the system—the learning should not be driven by the machine. That is one of the things that I am concerned about.

Helena Good

AI is a tool. Human intelligence should always override that and be celebrated and developed. On personalised learning, the Hayward review and the work that has been done on project-based learning provide a real opportunity for personalised learning experiences to be opened up in the classroom. Certainly, that has been the evidence and experience that I have seen in the work that we do with project-based learning in the 35 schools that we work with. Our young people choose the topics related to that challenge. It is that curiosity that has driven them to acquire a qualification and get accredited and assessed in that format.

Chris Ranson

I agree whole-heartedly about project-based learning. However, from a practical point of view in a school, it again comes back to the difficulty of instigating project-based learning when we need to get through the curriculum to do an exam and to do this, that and the other. It is about the bigger picture.

Stephanie Callaghan

Thank you.

Pam Duncan-Glancy (Glasgow) (Lab)

Good morning. It has been fascinating so far. I am enthusiastic about the role that AI could play, but I understand the risks. We have touched on some of them and colleagues might wish to drill down a bit more. I am keen to know your thinking about how we can ensure that AI supports equality and addresses inequality rather than exacerbates it.

10:30  

Ollie Bray

I know that Judy Robertson will pick up on this, as well. There is a lot in that question. Part of it is around access to technology. On the previous question, we did not touch on the power of AI-driven tools in schools to help children with additional support needs. For example, there are technologies now that can read screens to young people, provide feedback on work and provide support with spelling, grammar and punctuation—all those basic but important skills. All those tools are important.

The other part of that, at the other end of the spectrum, as we have already touched on, is the bias that exists within AI at the moment. ChatGPT, for instance, is trained on internet data from before 2022. We have already touched on the misinformation that would be in there. If we think about the internet and how human beings are represented on it, we see that most articles tend to be in English, a lot are generated from Europe or North America and it tends to be a male-driven environment. Therefore, there is a bias in the responses that come back from AI.

It is a big and complex area. I think that we need a strategic plan to cover the whole spectrum of these things, from tools that support personalised learning to making young people aware of the ethics, as well as the other aspects of support that we have already discussed.

Pam Duncan-Glancy

I have not heard the point about bias put like that before, but I have heard about closing the bubble of information around people. That is an interesting angle.

Professor Robertson

I will extend what Ollie Bray has said and return to the idea of commercial tutoring systems that use AI. There has been research about systems that are deliberately designed to present information in a different way or to guide a student’s learning differently depending on particular characteristics, including their socioeconomic status, because it is known that, statistically, at a large scale, particular groups might not perform so well in, say, maths. We really need to take a view on using personal information about each student to tailor information to them in a particular way. My concern is that, although it might be possible for local authorities to make IT procurement decisions about such tutoring systems—which seem good and perhaps help with learning—those tools are not evaluated. If they have features that are systematically designed to mitigate bias, they might do something that we do not want them to do. We need a lot of scrutiny about such systems.

Helena Good

I can give some context to that with an example from one of the schools and some of the learners that we are working with at the minute. Very often, you have students in a class who, for whatever reason, look at a new project and immediately think, “I can’t—I can’t write and I can’t think.” However, we have seen interesting work with such young people, whereby they are supported not to think about it being a big chunk of work but to start with a prompt. The ability to generate the correct prompts for AI will be a key skill for our young people. That is about being succinct and accurate in the questions that they ask, and asking them from an informed point of view. When our young people, who traditionally might have struggled with English in certain formats, narrow their question down to a prompt and work with the teacher to put that in, we see evidence of something that moves them from an immobilised stance of “I can’t” to possibly “I can”. That is where there is power and equity in the learning experience.

Chris Ranson

I do not have a lot to add to that. AI is a massive tool for differentiation, because we can ask it to explain such-and-such as though we are five, in high school, in primary school, in university or whatever. It is very good at taking complex topics and breaking them down. It is still a bit like typing back and forth with a chatbot at the moment, but, when the technology advances, I hope that it will be more integrated into our new way of doing education.

At this point, it might be appropriate to point out the survey that I did with school pupils on their interaction with AI. The vast majority, 70 per cent, of those who have used it say that their interaction is primarily with Snapchat—an app on a phone. From a pupil’s perspective, most of their interaction is chatting to what is called “My AI”, a friend on a phone, instead of using ChatGPT, which came significantly lower in the rankings.

Pam Duncan-Glancy

I think that Ollie Bray started with the point about how young people trust such systems because some of them use them all the time. That shows the leap that needs to be taken to allow us to use AI in the way that we have heard. Are you aware of any work going on in Education Scotland or elsewhere on inequalities and mitigating some of what you described earlier?

Ollie Bray

As far as I am aware, nothing is going on specifically around AI and equalities in Education Scotland at the moment. There is work, which you will be aware of, around equalities more widely that has a technology lens to it. At the moment, we are not doing any specific work around AI and equality. On the follow-up question of whether we should do some work on that, the answer is yes.

Thank you. Does anyone else have any understanding of work that is going on elsewhere in the system that could support development in this area?

Professor Robertson

Modern studies teachers and other teachers who work with children in other areas of equalities are in a good position because, if you learn to spot the biases that humans have, you can apply that skill to the output of AI. It is one of those areas in which working across the curriculum would be useful.

Ben Macpherson (Edinburgh Northern and Leith) (SNP)

Good morning, all. Thank you for your time and fascinating insights so far. Your points around the necessity of judging AI on its data source are so important. Recently, I heard somebody say that, even if the technology is perfect, AI will never be perfect because it is reliant on the data within. It will be so important to apply critical thinking to AI and the dataset that it relies on.

You have rightly offered us words of caution. I have seen some reporting from the United States of America, where, in Silicon Valley, for example, there are schools that do not use computers or tablets because they want people to learn wider creative skills with a pen and paper. However, it is also true that this technology is here and that it will be a big part of the future. You have made those points.

I want to ask some questions about utilisation. Chris Ranson spoke wisely about how AI is something to be used, and about how we should think less about learning about AI and more about how to train young people to fit into the AI economy of the future, because it will be a big part of the economy. How do we get the balance right? What skills are required to use generative AI, and when should we bring them into the curriculum? Perhaps Professor Robertson would like to go first. You talked about how, as a computer scientist, you think that everyone should learn those skills. When should they learn them, and what should they learn?

Professor Robertson

I do not know the answer to that, but I can speculate. At the moment, in the technologies curriculum we have strands about understanding the world through computational thinking. Those skills can start in the early years and apply all the way through education. They are about knowing how computers work and how large statistical models work, for which a bit more mathematics needs to be learned. That is knowledge that children can learn and that we can knit into what is already taught in maths, computing and so on.

I think that there is also a set of skills to learn in relation to how to use AI appropriately and wisely. Responsible use of technology is already being taught to a certain extent in digital literacy. It all comes back to the core things that we want to teach: literacy and how to understand and evaluate information and sources will always be important. The character values that we have here in Scotland, such as being respectful of one another, are also important. That is something that teachers are already great at teaching.

In the case of misuse in Spain that Chris Ranson mentioned, technology provided another way to bully people, but the way to address the root cause of that is through talking to people about responsibility and respect. We are doing a lot of what we need to do already—we just need to find ways to adapt it to the AI context.

Helena Good

In the Organisation for Economic Co-operation and Development there has been a lot of work done on the subject, as we all grapple for answers. I will read a paragraph about skills. The skills that are needed are for humans to be able to

“collaborate with machines in a way that both amplifies their intelligence and celebrates their humanity.”

I feel that that is an interesting way to summarise the situation.

From work that has been done with Skills Development Scotland, we know the importance of meta skills. Our curriculum needs a fundamental shift so that those skills do not just make a guest appearance but take centre stage in the development of the curriculum.

It comes back to our recognising and celebrating what we are born with—our human operating system. AI is a tool that will enable us to amplify what makes us uniquely human, but it should not be the thing that leads the way. The World Economic Forum predicted this for our future workforce: it saw what was coming and said that these skills need to be in our curriculum right now.

Ben Macpherson

So, we need creativity across the curriculum but with ICT skills. I remember that, when I was at school, we learned the basic operation of Mac and Windows systems. Do we need to teach ChatGPT in secondary 1, for example? I do not mean to pin you down, but where and when should we bring this teaching in?

Helena Good

First of all, as soon as you tell any teenager that they cannot do something, or not to do something, that thing becomes incredibly attractive. This is already happening, whether we want to believe it or not. Our young people are using AI creatively, so we have to look at where it sits. It goes back to the question, “What if we get it right? Where is the right place for AI to sit in the creative experience?” We need to ask our young people whether it helps them to tell a story better and whether it helps them to iterate and innovate better. That relates to the uniquely human parts. Instead of having a fixed guideline that could become redundant in the next 48 hours, we have to go back to what it is that makes us human. Where does AI fit in that, and how does it support us to develop those skills?

Ollie Bray

We need to change the narrative a little bit. I do not mean that we should do that in a tokenistic way. At the moment we talk a lot about AI in education, but we should talk more about “education in the era of AI”—a term that I have shamelessly stolen from the Massachusetts Institute of Technology’s media lab.

In answer to your second question, since 2019 the media lab has developed what I consider to be a brilliant kindergarten to age 12 AI for education curriculum. It starts from early primary with hands-on experiences, so that young people, in developing creativity skills, interact with different types of technology. They learn about different types of AI that exist in the products and services that they would typically come across—for example, YouTube search algorithms and smart speakers, which we have already talked about. Although I have not checked recently, I understand that the media lab has updated that curriculum for when young people get older to include things such as generative AI and PromptCraft. All those things can fit in at various ages and stages.

10:45  

One of the dangers that we need to be careful of in Scotland is that sometimes we implement something and assume that it is being done, but, as the technologies become more and more prevalent across society, we need to update the curriculum all the time, because young people move on in their experiences and their thoughts. The curriculum can quickly become outdated, particularly when it comes to the basics.

I presume that, alongside updating the curriculum, we need to make sure that there is continuing professional development for staff in that space.

Ollie Bray

Yes—absolutely.

Thank you.

Mr Ranson, do you want to add anything?

Chris Ranson

I was chatting earlier to Judy Robertson about this. One of my biggest concerns is the potential to create what I describe as a cognitive crutch early on in people’s education. If we give young people totally free access to a machine that can do a lot of their thinking for them, will they still learn the basic processes that everyone needs to learn? In America, things are being taken out of the classroom; I can see where that is coming from. Equally—although psychology is well outside my area of knowledge—we should be thinking about how humans learn and how AI can help that process, rather than flipping it on its head.

I have a friend who works for a major AI provider, whom I bounce a lot of ideas off. That person said to me that AI has proved once and for all that rote memorisation is pointless. That is the kind of potentially dangerous thinking that can come from such a powerful technology when we start from the technology and work back to the human, rather than starting from how humans learn—how we teach a toddler to say the alphabet, for example—and asking which basic elements AI could do better. Certainly, Bill Gates thinks that AI can teach the alphabet better than teachers can. There are ways in which it can be used, but I am just offering a word of caution—it should be AI that fits in with us. I think that that goes along with what Ollie Bray said: I like the phrase, “education in the era of AI”.

Ben Macpherson

On education in the era of AI, what are the implications—not just for schools, but for colleges and universities? Is there, in your view, enough cross-thinking on Government policy on an AI strategy through connections between employers, Education Scotland and other Government departments? Are we collaborating enough in considering the next stages for our young people?

Ollie Bray

I will summarise, if I may. Read into this what you will. I think that the connections are there, but the collaboration could be greater.

Helena, do you want to comment on that? You were nodding in agreement with Ollie Bray.

Helena Good

Yes. I go back to the statement that AI will not take your job, but a person who uses AI will. I am not sure that I necessarily agree with that statement. It is important to recognise the job insecurity that AI brings for teachers and anyone else who is working in education. There is a real opportunity for AI to enable educators and to elevate them, because it will become more and more important that teachers can empower young people and that they can be educators, be curious and be innovators.

The insecurity that AI brings for all of us in the room, when we look at what it can do, has to be recognised: it goes across industries. There has never been a better opportunity to have us all on a level playing field. Let us come together and look for the answers that will have an impact on the future workforce.

Ben Macpherson

By the time every student who is in S1 at the moment leaves school, whatever age they are and whatever jobs they go into, they will need creative thinking, and AI will be in almost every, if not every, work setting. Are we moving quickly enough to equip our young people with an understanding of AI, or do we need to step up the pace? Judy Robertson is nodding.

Professor Robertson

Yes. Step up the pace. We are behind where we should be.

Bill Kidd (Glasgow Anniesland) (SNP)

It has been interesting to hear about the different angles and directions that things are taking and will take, but forget about all that. This is the Parliament and we are politicians—[Laughter.]—and we have to think about making policy on AI.

Curriculum reform has been mentioned, which gives us an opportunity to stick our oar in the water while we still have the right to do so, before AI overtakes us. How good is current guidance for educators and researchers on how to use AI ethically and effectively? You have covered very broadly how AI exists already in our society, in particular from the education perspective. Is there enough guidance from the Government on how educators and researchers should use AI? Should the Government provide that, or should it be developed outside Government?

Professor Robertson

The ethics of use of AI in research at universities will be covered by institutions’ ethics committees. I am not sure whether that is so much the case for industry. As far as I am aware there are no guidelines for how to use AI ethically within education, so I would welcome Government guidelines on that, at the very least. The Government needs to take a position on the matter quite quickly. We, in the room, are the partners who could say something about that. The universities would be keen to offer advice, but I think that there is a vacuum at the moment, which is slightly worrying because things are moving so fast, as Mr Macpherson said.

Do you believe that there is something there, but not very much, and that it needs to be improved and expanded?

Professor Robertson

I am not aware that there is any guidance on the ethics of AI in education.

There is nothing at all. Okay. Helena Good is nodding.

Helena Good

We were lucky enough to be invited to Helsinki last year. We went to a school to look particularly at how project-based learning is embedded in the curriculum, and how that is made to work. It was fascinating. In the staff room we were introduced to the ethics teacher—every school has an ethics teacher, which I think is interesting. In the training of new teachers coming through, AI must be part of their experience and curriculum. Every teacher needs to understand it and have awareness of it to bring into the classroom to support our young people in the ethical questions. That will become more and more relevant.

That is very interesting.

Ollie Bray

I agree with both the previous witnesses. Scotland’s AI Strategy’s website uses the phrase “Trustworthy, Ethical and Inclusive”. It was published in March 2001 and it is a more generic policy. A lot has changed since 2001. Is there anything specific for education at the moment? There is a lot of stuff out there for education, but it is not driven from the centre and it is quite sporadic, although much of it is very good. I feel that we need to draw things together, and I agree with Judy Robertson that it would be sensible to do that sooner rather than later to make it work.

Chris Ranson

When I started thinking about this before the summer, I thought that, by the time we got to Christmas, the Government would have released clear guidance and I would be adapting everything to fit in with that. I am not trying to be flippant; it is just that it is not clear that there is anything for me, as someone who is looking for stuff to grapple with. If it was down to me, I would have a school in-service training day or something like that. It would include mandatory information that teachers need to know about the technology that exists now and about what is happening, so that teachers would be aware. You will find loads of teachers now who just have no idea.

Bill Kidd

I suppose that that begs questions about curriculum reform and how AI will impact on it. Some people are aware and are working with AI—even so, as Chris Ranson said, there are a lot of elements that must be worked through—but some people have no awareness or depth of knowledge, and are supposed to be leading young people. That needs to be impacted on, does it not?

Ollie Bray

I will make a point on that. I know that everybody here will be aware of this, but I think that it is important to say it. In the Hayward review, there is a recommendation on just what you are describing. It says:

“Establish a cross sector commission on education on Artificial Intelligence. As a matter of urgency, Scottish Government should convene and lead a cross sector commission to develop a shared value position on the future of AI in education and a set of guiding principles for the use of AI.”

Of course, we need principles before legislation; that is important. My worry about Hayward is that a lot of the discussion is around the Scottish diploma of achievement. That is important and there should be discussion about it. A lot of the discussion about AI is to do with assessment and cheating in exams, so that is a very clear recommendation that we should be taking seriously.

That is really helpful. Thank you all.

Just for clarification, Mr Bray, you stated that the strategy was published in 2001, but it was in 2021.

Ollie Bray

Sorry.

The Convener

There is quite a big difference in those 20 years. A lot has happened even in the past two years in the world of artificial intelligence.

We will move on to questions from Willie Rennie.

Willie Rennie (North East Fife) (LD)

I will follow up on what has been said about the Hayward review, which recommends a move away from exams towards other forms of assessment. It is not clearly defined exactly how far that would go, but many people in the education community are concerned that, given the rise of AI, that would be a retrograde step, and they think that we should stick with exams being conducted in sanitised conditions, isolated from technology. Do you have views on whether that is right?

Professor Robertson

I disagree with the view that it would be a retrograde step. On assessment reform, we are moving in the right direction to be able to assess and value the sort of skills that Helena Good has been telling us about. A move to more invigilated exams would almost be a panicked step and is not what we want to do educationally. It might be convenient, but it will not take us to where we need to be.

Ollie Bray

There is no simple answer, because all of this is really complex. I agree 100 per cent with what Judy Robertson said. We need to understand that young people can complete what might be considered to be a traditional exam on a computer with no assistance from generative AI or the internet. That option exists already, so the notion of having completely handwritten exams is not palatable in 2023 and beyond. We have to be a lot more forward thinking.

There are ways in which technology can help us to develop approaches to assessment. That point is reinforced in the Hayward review. For example, there is a place for multiple-choice questions, which are already used for some of our national qualifications, and for computer-aided on-going assessment, which would reduce workload. There is also a place for completely reimagining what assessment might be, using some of the technology tools that we have been talking about.

11:00  

Helena Good

I, too, think that it would be a backward step. We have to consider what employers are looking for, which is very much about the application of knowledge. When a young person recently went for a job with one of the bigger tech companies, the first thing that the interviewer said was, “I don’t care what you’ve got; tell me what you can do with what you’ve got.” We need to prepare young people for those kinds of conversations. Sitting in an exam hall regurgitating a set of facts on a certain date of the year does not give them those kinds of skills. We should enable them to create situations in which their voices can be heard so that they can develop a story that enables them, when they stand in front of an employer, to communicate effectively and to talk about being able to deal with failure, being resilient and being collaborative. Those are the skills of the future workforce.

Willie Rennie

In her statement yesterday, the education secretary set out a move towards greater emphasis on knowledge in maths. Surely, that foundation of knowledge needs to be assessed independently. I completely accept your point about skills, but surely knowledge should not be undervalued in all this.

Helena Good

I do not think that it is being undervalued. I think that we are looking for a different way to apply knowledge.

Chris Ranson

I go back to what I said earlier about our examination system. Examinations are there to measure how well someone has attained whatever we are trying to achieve in education. Instead, they have become the target. AI is forcing us to look at the current system, but, aside from AI, there is another issue to consider. Is there an issue with people sitting in a hall to show some kind of knowledge? I do not see an issue with people being examined in that format if it is used appropriately.

Ollie Bray

I will pick up on the question about knowledge. There is no doubt that knowledge is important, and it is, of course, impossible to separate knowledge from skills. It is an argument that goes around, but the real question relates to what knowledge is important to young people, why it is important and what knowledge is being taught in schools.

I know from previous experience as a headteacher that the knowledge that young people learn is sometimes not appropriate and can be an inappropriate use of cognitive load. I remember going into a science classroom on many occasions and seeing children and young people labelling science apparatus. The argument for that is that young people need to know the names of science apparatus. I think that everybody here would know what a Bunsen burner was if I put one on the table, but you learned that not by drawing it and labelling its parts but by using that piece of equipment. We need to think about these things in a sensible way.

One of the challenges in our education and examination systems is that we quite often take a simplistic view of knowledge because it is easier for children and young people to understand. I will give another example. I have mentioned that I was a geography teacher. I have taught hundreds of children how a waterfall is formed, but what is taught in the system is not actually how a waterfall is formed. It is far more complex than that, but we quite often use a simplified version of that knowledge to make things easier to understand. We need to address that in the next stage of education reform.

Willie Rennie

My second question is about assessment. During the pandemic, there was a big debate about the role of teacher judgment, Scottish national standardised assessments and national testing, and about whether producing league tables creates the right dynamics. We have discussed how exams affect the system, too. Can we use artificial intelligence to assess independently and introduce accountability into the education system in a way that will give us confidence and will not create all the negative effects of SNSAs?

Professor Robertson

That is a very difficult question, but it is an interesting one. That might be possible if you had a human in the loop—or you might call it having AI in the loop—but you definitely would not want to trust AI to do that by itself. If you could ensure that the AI would not be subject to human biases, it could be part of decision making. Designing such a system would require a lot of thought, but there are some possibilities in that regard.

Ollie Bray

I agree. I cannot see a technical reason why it would not be possible to do that. There are wider questions about how we use SNSAs. Everybody here will be aware of the OECD’s recommendations about going back to a sample-based approach rather than an individual-based approach. We have not completely bottomed out that argument. If we take a sample-based approach, it becomes far more possible to use technology, because we are working at a smaller scale.

Helena Good

It depends on what we are assessing. There are possibilities if we are assessing somebody simply by whether they have included the right number of facts or how many times they have mentioned a certain word, but, for me, that would be a step backwards. It comes back to why I think the teacher’s role will become even more important, because their ability to assess work, to put it in context and to know the individual and the story behind them will become more and more important. Yes, there is a role for AI, but I do not think that it is the way forward.

Willie Rennie

I take your point that the role of the teacher is incredibly important, but when a dynamic is created in which people compete through the SNSA system—I know that the Government does not like to say that league tables are produced, but they are produced on the back of that—the relationships that teachers want to have with their pupils are corrupted. Is there not a way to ensure that policy makers, the Government and education leaders have the confidence that things are working without creating all those negative dynamics?

Helena Good

I can talk only in the context of our work with schools on the creative thinking qualification, which has Universities and Colleges Admissions Service tariff points at level 6. We have been able to create an assessment model that assesses creative thinking. I have brought along details of it today. The learning outcomes, which teachers use as formative thinking, are on a simple stamp, and we have created an app that enables teachers to assess creative thinking in that way. That is important, because you do not want your 17-year-old child to come home and say, “I think I’m a red.” They want to know that they have an A, a B, a C or a D and that that grade has validity in relation to their future steps into university. We have been able to do that by making the system robust and straightforward, which enables teachers to get on with what is really important: the learning and teaching experience.

Ross Greer, do you have anything that you want to ask the witnesses this morning? I have put you on the spot a bit.

Ross Greer (West Scotland) (Green)

I am interested in a couple of issues, particularly how the ethical questions that we talked about earlier marry up with what Willie Rennie said about exams, assessments and how we measure things in schools. Chris Ranson gave an example. It is one thing to be able to tell whether a pupil has used something such as ChatGPT to help them with something in an essay for which there is a right and a wrong answer—for example, the name of a historical figure or a date is either right or wrong—but, on much more subjective issues, it can be harder for staff to drill down and tell whether a pupil has used an AI system, even if they know the pupil well.

When we get into territory that is incredibly subjective, how can we produce advice on distinguishing between what a pupil has produced and what AI might have produced? There might be no factually right or wrong answer against which to check for the hallucinations that have been mentioned.

Chris Ranson

Subject specialists are the people for that. The position will be different in each department in a school, university or whatever. However, the long and the short of it must be that we need to change our assessment methods. If we are trying to make a formal assessment of how well a pupil has learned something, certain assessment methods cannot be relied on now. Knowing that forces you to consider other things, such as a viva, or perhaps pupils should expect to be asked questions about their work when they hand it in. All sorts of things could be done. I am sure that Judy Robertson has more to say on that.

Professor Robertson

As you said, there could be questioning. There could be a viva—an oral discussion—with the pupil about why they chose to make a certain argument and whether they thought about a certain issue. When students write code, I ask, “Why did you choose that design rather than this design? What were you thinking at this bit here?” Asking comprehension questions helps you to assess what the person understands, rather than just using what is on the page. That is time intensive, but it is a really valid way of assessing learning.

Ross Greer

On the point about time, I am interested in the thoughts of Chris Ranson, as a teacher, and of Ollie Bray, given his experience in the classroom. Realistically, there will never be the capacity in the system for a teacher to do that one-on-one assessment with every pupil, but there probably is the capacity for pupils in group settings to, in essence, cross-examine one another while being observed by a teacher, and that might raise red flags if it is clear that a pupil does not comprehend what they have presented.

Is there a way to develop such a system in a group setting in order to address workload issues? We can all envisage a system in which there is limitless capacity and, therefore, staff can address all such issues directly, but that is not the system that we have, and it is not realistic to think that we will ever have it. Is there a role for cross-examination by pupils and students themselves?

Chris Ranson

I think so. I know that people are working on tools to help with basic assessment so that, as pupils go through the year, they can tell how they are getting on when they do not have access to a teacher. I do not have much else to say on that, but it sounds like a good point.

Ollie Bray

I think that there is a lot of scope for that. I do not want to speak for Judy Robertson, but I suspect that the set of skills relating to peer assessment and having such discussions is quite important when young people get to university. It is really important that we think about how we use pedagogies and pedagogical toolkits in schools to develop peer assessment skills as young people grow emotionally and develop.

Professor Robertson

It is a good question, which shows good insight into how we could do this. That kind of group inquiry is really useful for formative assessment, when you are trying to assess what a child knows so that you can help with the next step. There is something called process-oriented guided inquiry learning, which is almost exactly what you described. The learners in the group have prompts that they can ask one another, which helps to guide their thinking, and the teacher observes and listens to the discussion so that they know where to take the class next. We probably need more formal things for the higher-stakes assessments at the end of the year, but we do not have to do so much of that.

Helena Good

We need to recognise and reward the process and move away from a focus on an end product.

Thank you.

The Convener

That is super. I thank the witnesses for their time and evidence this morning. A number of you have said that you have things that you want to share with us, so if you have anything physical with you, you can leave that behind.

Before we move into private session, I note on the record that I have received apologies from our deputy convener. That concludes the public part of our proceedings.

11:13 Meeting continued in private until 12:05.  

