
ADCET
ILOTA Things - Episode 14: International Impacts of AI - Equity, Inclusion and Education for All
Welcome to ILOTA Things, the ADCET podcast where we explore Inclusive Learning Opportunities through AI. In this episode, titled International Impacts of AI - Equity, Inclusion and Education for All, we chat about AI and its impact on students and educators. We are joined by Samo Varsik and Lydia Vosberg from the OECD, who help us break down a working paper they authored, The Potential Impact of Artificial Intelligence on Equity and Inclusion in Education.
More information, including episode notes and links, is available on the ADCET website.
Announcer: Welcome to ILOTA Things, the ADCET podcast where we explore Inclusive Learning Opportunities through AI. In this series, we'll explore the exciting convergence of universal design for learning, UDL, artificial intelligence, AI, and accessibility, and examine ways in which we can utilise emerging technologies to enhance learning opportunities for educational designers, educators, and students. Now, here are your hosts, Darren, Elizabeth, and Joe.
Elizabeth: Hello, and welcome from whenever, wherever, and however you are joining us, and thank you for your time as we investigate a lot of things, that is, inclusive learning opportunities through AI. My name is Elizabeth Hitches, and joining me once again on our Artificial Intelligence, Universal Design and Accessibility Bungee Jump are my co-hosts, Darren Britten.
Darren: Hello from Melbourne.
Elizabeth: and Joe Houghton.
Joe: Hi from Dublin.
Elizabeth: Today's episode is titled International Impacts of AI - Equity, Inclusion, and Education for All, and it explores from a global perspective what the impact of AI might be, both the positives and the pitfalls. This particular episode came about after I was exploring some work by the OECD, the Organisation for Economic Co-operation and Development, an international organisation that works to build better policies for better lives.
Their core aim is to set international standards and support their implementation, and in so doing, help countries forge a pathway towards stronger, fairer, and cleaner societies. When I stumbled across the OECD working paper, The Potential Impact of Artificial Intelligence on Equity and Inclusion in Education, my co-hosts and I were incredibly interested. And I'm very excited and very grateful to have with us today as special guests the authors of that very working paper, Samo Varsik and Lydia Vosberg.
Firstly, we'd love to hear about your work at the OECD. So we'd like to start by asking what is your role and what do you enjoy most about your role?
Lydia: So first of all, hi to all of you. It's so lovely to meet you, and thanks again for having us here. So yeah, I'm Lydia. Hi from Paris once again. I actually currently work in a different division from the one I worked in when Samo and I wrote the paper. I moved to the Centre for Educational Research and Innovation, and basically my focus has shifted a bit towards teaching, and specifically the role of teachers in education and how that's potentially changing. So we work a lot on new professionalism, more diverse profiles in teaching, emergent forms of schooling, and how this all interlinks with how societies view education and how that might change in the future.
So I think it's quite interesting, and what I like most about it at the moment is that it's really interactive and inclusive, with different voices from all over the world. And it generates knowledge by ideating about the future. So I think it's interesting to look into approaches like strategic foresight and scenario planning, all these kinds of methods that try to figure out what's happening in the future. So, yeah, it's quite tickling for my brain, and I still think it's very, very nice to work in this field, because I really think that education is one of the best chances we have to build a better future.
Elizabeth: Thanks Lydia, how about you Samo?
Samo: Thank you very much, Elizabeth, for having me. It's a real pleasure. I work at the OECD as a policy analyst in the Directorate for Education and Skills, on a project that focuses specifically on equity and inclusion in education. And if I generalise our work quite a bit, it basically boils down to two strands. The first strand is about providing advice to countries on issues of equity and inclusion in education, and this means that we produce outputs either in the form of long reports or shorter pieces of writing, or we organise peer learning activities among representatives from different countries, academics, researchers and so on.
And the second strand is more research based, so that's where we try to keep up with the current developments in the knowledge around equity and inclusion so that we actually know what to advise countries once they start asking their questions, once they approach us with their challenges.
And what I enjoy most about the work, I think, is the diversity. We often see that the issues are very similar in many countries, particularly around equity and inclusion, but the responses to them differ. And I think it's very rewarding to see that, and then also to be able to point representatives of different education systems to the approaches being taken across the 38 OECD countries that we work with.
Darren: Fantastic. Look, it's lovely to have you both here, and as Elizabeth pointed out, when we saw your paper we went, this is exactly what we're talking about. This is where all of these things meet, and to see it put into a form that stresses the things we're talking about, and from a global perspective, we think is fantastic. So I'd just love to hear what inspired you to put together this OECD working paper exploring AI and its impact on equity and inclusion, and why do you think this is such an important area for consideration with respect to AI?
Lydia: Thanks for the question. Yeah, I think I would say there were a couple of different reasons for choosing this topic, at least to my mind. I don't know if you would agree, Samo, but I think when we started thinking about the paper, we had a feeling that across the OECD there was a sense of embracing AI, especially for education, because it really is a promising tool that makes a lot of things easier, and probably also because it brings a lot of fresh wind into a field that is quite often a little reluctant to change, especially in policymaking. But there were also, and rightfully so, plenty of voices, mainly from critical theory, but also, for instance, from Education International, the global teachers' union, so to say, that were warning about the unintended consequences of these technologies, and some even demanding a ban. So the discourse, especially on equity and inclusion in education, was pretty loaded, and it was also quite an urgent question, because AI was already on the rise everywhere and there were already schools implementing it without any systematic oversight or regulation. So basically we wanted to dig into that and organise the different arguments in a way that opens the minds of policymakers, rather than choosing one side and being a bit radical about it, and also perhaps balance out a little the OECD's very positive approach to it in the beginning. So, yeah, we tried not to push too much in one direction or the other and just really explore it.
Joe: It's fascinating to me. I mean, we're normally talking and thinking directly about students and teachers, and you're coming in at this macro level of governments and countries, which is a really interesting space that I don't think we've ever explored.
So, I mean, you've got policy and research at these macro levels. How do you see those two connecting down to the teachers? One of your tables shows digital resources by the socioeconomic profile of schools, split out by country, with the well-to-do schools on one side and the disadvantaged schools on the other, and how much access each has to digital resources. So the follow-up question, I suppose, as well as how you connect down to individual schools and learners, is how do you stay current? Because published research is two or three years old, but AI is moving in weeks and months, not years, and that's a different kind of cadence, isn't it? So I don't know who's going to pick that one up, Samo maybe?
Samo: Yeah, thanks for this question. It is a big challenge. I mean, you raise two issues. One is how we stay current with all the developments that we see today, regardless of what's actually published in the academic research. And there I would say we meet with representatives from countries regularly. There are both formal meetings at the OECD governing levels, where countries raise their questions and guide us on what their pressing issues are and where they need advice, and also informal meetings, where we discuss in less formal settings, in our own expert groups or working groups, the issues that are pertinent. And then we also reach out to researchers and academics. Many working papers, not this one in particular, but many, are actually written in cooperation with academics. We organise many meetings with academics and chaperone meetings between policymakers and academia and research, including non-governmental organisations as well. So that's how we try to keep up with what's happening.
The other issue you raised is that academic research is often outdated, let's say, and I don't mean that as a criticism; due to the lengthy publication process, it is simply what happens. And that is also, I'm afraid, to some extent the case with this working paper. It was drafted in 2023-24, and we all know that the field has moved quite a bit since. So maybe I want to put a little disclaimer at the very beginning of this podcast as well, for your listeners to keep this in mind.
We will try, of course, to raise concerns and mention things which we believe are still current and still relevant, but this field is moving incredibly, incredibly fast.
Joe: Yes, I mean, the stuff that is in the paper is still very relevant. When I read it through, I was making lots of ticks and nodding along. So even though it's a couple of years since the ideas were put together, they're still all very valid. I'm just conscious that what I'm seeing in my educational space is such a leap forward. But all the stuff in the paper is really, really good, and I would encourage our listeners to follow the link and have a read of it as well, because it's a very accessible paper, easy to read. It's not one of those dense academic papers that you just can't get through.
Samo: Maybe just one thing to follow up on. Well, thank you very much for this praise of the paper. We really did try to make it as accessible as possible; after all, our readership is not primarily academics but policymakers, who often do not have this technical background, let's say. But what I wanted to pick up on is whether the issues are still current. In a way, it depends on who you ask. Take, for instance, bias, which I imagine we will tap on quite a bit in the next hour or so. If you ask developers of AI tools, or at least some of them, they'll tell you this is a technical issue: we just need more data, and it will be solved quite quickly. It's not a fundamental issue, it's a technical issue. Now, if you ask other people, they'll tell you, no, there is something fundamentally wrong with the AI models and there is more to it. It's not just that we need more data.
So there are a lot of voices in the discussion as well. That's my point, and we tried to balance the discussion a bit with this working paper.
Elizabeth: Thank you so much for those insights, particularly thinking about bias, and I'm still hearing a lot of colleagues in the education sector who may not have even started to really dip their toe into AI. They know it's out there, but they're mainly hearing about students using it to cheat and not thinking about all of the other aspects that are involved and possible with AI for more positive benefits.
Now, for those who may not be fully aware of what AI is actually capable of, I'd love to bring their attention to something that you have in this working paper, and that is the distinction you make between three different types of AI tools in education: learner-centred tools, teacher-led tools, and institutional AI tools. Could you explain the difference between those three, and if you can, give us an example of what each might look like?
Lydia: So first of all, on the distinction: we didn't really come up with it ourselves. It was already something that had been done before, and it helps us understand who benefits from, or who can make the most use of, AI in the educational field. But it's a bit of a blurry distinction as well, because obviously, if teachers benefit from AI, that might also have benefits for students and for institutions.
But to give you a bit of an overview: the idea of learner-centred tools is mainly that they benefit students and shape the learning experience for students. An example would be intelligent tutoring systems that can individualise the learning content based on a student's needs.
Then teacher-led AI tools assist teachers in their instructional and administrative roles. For instance, AI assistants for assessment and classroom management can track students' engagement, although that's also something that can be tricky; we can talk about that later when we come to the difficulties of AI.
And then finally, institutional AI tools aim at addressing broader institutional objectives, such as improving operational efficiency and managing admissions. An example would be tools that identify at-risk students based on data; they are basically supposed to make schools, or institutions, run more smoothly.
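To make that last category concrete, here is a minimal sketch in Python of the kind of at-risk early-warning model being described. Everything in it, the features, the data, and the flagging threshold, is a hypothetical illustration and not taken from the paper or any real system discussed in the episode.

```python
# Minimal sketch of an "at-risk" early-warning model of the kind described
# above. Features, data and threshold are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-student features: attendance rate, average grade,
# share of late assignments (all scaled to 0..1).
X = rng.random((200, 3))
# Hypothetical label: 1 = student later left education early.
y = (X[:, 0] < 0.3).astype(int)  # toy rule standing in for real outcomes

model = LogisticRegression().fit(X, y)

# Flag students whose predicted risk exceeds a chosen threshold so that
# staff can follow up; the threshold itself is a policy decision.
risk = model.predict_proba(X)[:, 1]
flagged = np.flatnonzero(risk > 0.5)
print(f"{len(flagged)} students flagged for follow-up")
```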
Darren: We often talk about, and you touched on it there, what's in the name of our podcast: inclusive learning opportunities through AI, where we're trying to discover ways in which AI can be used for more personalised learning. You touched on that with the learner-centred aspect, and in your paper you also discuss adaptivity in learning, or personalised learning. So what are some of the benefits you found that AI can offer to improve equity and inclusion?
Samo: Yeah, thanks, Darren, for that question. Indeed, we do try to map some of the benefits, or potential benefits, I should say, of these AI tools, because at the moment they are often promised, not necessarily realised. And that's also a point we might touch on later: the need for more research in this area.
But in any case, I'll follow the framework which Lydia outlined in regard to the benefits. When we focus on learner-centred tools, some of the most commonly cited benefits are precisely in adaptive learning, often also labelled personalisation. Many of these AI tools are able to adjust content and difficulty in real time, responding to students' unique characteristics, backgrounds, needs and performance levels, and thus potentially provide them with very tailored help and advice. They can also enrich content: if you take the example of some AI-powered virtual reality or augmented reality tools, these can enhance the learning experience by giving students the opportunity to virtually visit historical sites, let's say.
And they can also assist learners with special education needs. The classic example often used here is tools that are able to caption and translate speech in real time, and therefore help students with hearing impairments, for instance.
And now in regard to the teacher-led tools: I imagine most teachers who have used ChatGPT or any other AI tools out there use them to curate learning materials, for instance. They are able to create new worksheets on a specific topic relatively quickly, without much additional effort from the teachers, let's say.
Many AI tools are also potentially able to assist with assessment, both at the teacher level and at the more systemic level. We already know that some education systems are using AI to mark standardised, nationwide assessments, which are first scored by an AI tool and then verified by human assessors.
Some AI tools also claim to be able to aid in the identification of some special education needs, for instance dyslexia or dysgraphia, and there are tools out there with remarkable identification potential in this regard.
And then just to mention briefly the third category, the institutional tools. There, for instance, some AI tools have improved their algorithms to more accurately identify students who are at risk of leaving education and training early, or dropping out. They can also potentially help with admissions to higher education institutions or schools. And finally, they might be able to assist school administrators with data-based decisions. We know that school staff are often inundated with data collected at the school level, and these individuals often lack the statistical training to gain insight from the vast amounts of data they collect. AI tools, honestly, are quite good at analysing vast amounts of data and drawing insights from it. So that's another potential benefit that AI tools offer.
Joe: Yeah, I mean, I was fascinated reading through the paper. I'm working in the AI and education space as well, trying to stay on top of apps and things, and I wrote a book about AI apps for education recently, but I still came across things I wasn't aware of. I mean, you mentioned Google virtual field trips and Arts and Culture, and things like Brain Power, some really good links, and we'll put those links in the show notes as well, because they're really useful.
I mean, just to follow up, Samo: since you put the paper together, we've seen the EU AI Act come into force in February, with its four levels of risk, and education sits quite high as a level of risk, doesn't it? So is that changing the game in terms of policy and the way you're engaging with countries around AI? Has that changed the discourse somewhat?
Samo: I think the OECD is in a different position. We have never been mandated to create a legal instrument at the OECD level and definitely not in the education sector. I would say our thinking as of today is not going in this direction.
But of course, I am well aware that other international organisations are building legal instruments. For instance, there is a working group, as far as I'm aware, as part of the Council of Europe, which precisely has a mandate to think about a legal instrument to regulate AI tools specifically in the education sector. So that's a slightly different approach to what the OECD is doing. We try instead to provide guidance, inform policymakers and balance the discussion. That's the approach we are taking at the moment.
Joe: Yeah, and that makes complete sense. So, a term that I haven't come across, which was mentioned in the paper, and I'll throw this over to Lydia maybe, is techno-ableism. Can you tell us a little bit more about that, and why it's something we should be worried or concerned about?
Lydia: Yeah, for sure, thank you for the question. That might be a bit of a long answer, because I think it's a concept that is debated quite a bit at the moment. Techno-ableism is generally a term that describes the ways in which technology can reinforce or perpetuate ableist assumptions that we already have in society, even if it's designed to help. It refers to a tendency to argue that technology is a, quote unquote, solution for disability, and that people with disabilities need to be, or can be, quote unquote, fixed through AI, rather than recognising disability simply as a form of human diversity, which it is. In that sense, technology can tend to ignore the social and environmental barriers that actually create inaccessibility and focus instead on the individual, and basically on how to help individuals.
So that's how we describe it in the paper. But there is also another form of techno-ableism that we maybe don't mention so much in the paper, but that's in the discussion quite a bit now, I think. And that is basically that AI assumes something like a, quote unquote, normal user, and that it's designed primarily for that user.
So, I mean, Samo already mentioned a couple of ways in which AI can really lead to better inclusion, for instance with speech-to-text recognition or other tools that help students with impairments follow the same content as their peers. But it depends a little on the form of disability, and it can also disadvantage some students. For instance, autistic students can be disadvantaged if you imagine an AI system, like the one I mentioned earlier, that tracks student engagement, usually based on things like eye contact, body movement or posture. The AI assumes that a, quote unquote, normal user sits still when they listen. But an autistic student might express attention and engagement in other ways, like fidgeting or moving around. The AI could then track that wrongly, rate that person differently, and give the teacher wrong information about that student. So this is a way that is a little less visible, maybe, when we think about AI, and it's just something to be wary of.
So I think it's still important to note that there are a lot of ways in which AI can actually benefit students, but it depends on the form of disability, and it also depends a bit on the level at which we look at it. As I said, it's great if we can help individual students, but if that means that as a society we see disability as something to overcome, then we get into trouble, especially if we apply this in educational settings, where we really shape the way young minds understand disability and topics around inclusion. So it's basically a warning we should keep in mind. I hope that was clear.
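To illustrate the failure mode Lydia describes, here is a deliberately naive engagement scorer, a hypothetical sketch only: the features, weights and numbers are invented for illustration and do not come from the working paper or any real product.

```python
# Hypothetical sketch of the naive engagement heuristic discussed above:
# a scorer that equates stillness and eye contact with attention.
from dataclasses import dataclass

@dataclass
class FrameFeatures:
    eye_contact: float  # 0..1, fraction of the frame with gaze on screen
    movement: float     # 0..1, amount of body movement detected

def naive_engagement_score(f: FrameFeatures) -> float:
    # Rewards sustained eye contact and penalises movement, exactly the
    # "normal user sits still" assumption that misreads autistic students
    # who express attention through fidgeting or averted gaze.
    return max(0.0, min(1.0, 0.7 * f.eye_contact + 0.3 * (1.0 - f.movement)))

# An attentive student who fidgets and averts their gaze scores "disengaged".
print(naive_engagement_score(FrameFeatures(eye_contact=0.2, movement=0.9)))  # ~0.17
print(naive_engagement_score(FrameFeatures(eye_contact=0.9, movement=0.1)))  # ~0.90
```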
Darren: I think it's a fantastic warning, and it's one we've been discovering as well. Even here in Australia, some of the exam proctoring tools assume there's a standard user, and somebody who looks away from their screen, or closes their eyes for too long, is flagged for academic integrity. And on the flip side, a lot of AI tools are not accessible to screen reader technologies, because the rush to develop them means many of these tools are not being developed with accessibility in mind or built to certain standards.
So there's a divide, I suppose, happening there as well. And I might just quickly follow up on that: you touch in the paper on this notion of a difference, or a parallel, between the digital divide and an AI divide. Can you elaborate on that for us? What is the difference between the two?
Samo: Yes. Thanks, Darren, for this question. I might step back a little and talk about access more broadly, because what we try to do in the working paper is actually to caution against drawing a strong parallel between a digital divide and an AI divide. So let me explain why we do that.
So what we know, what we have known for a long time before the widespread use of AI tools is that there is something that we call the digital divide, which means that students who come from socioeconomically disadvantaged backgrounds more often lack digital resources compared to their peers who come from socioeconomically advantaged backgrounds, and by digital resources, I mean mobile phones, computers, access to the Internet and so on. And this is something that we have observed for a very long time already before the widespread use of AI.
At the same time, we also observe that schools that serve disproportionately large proportions of socioeconomically disadvantaged students also lack digital resources to a larger extent compared to schools that do not serve such large proportions of disadvantaged students. So the divide stemming from the socioeconomic background of students is to some extent exacerbated by the lack of access to digital resources. This is what was labelled the digital divide, already before the widespread use of AI tools, and it is something which we at the OECD, as well as academia and many other international organisations, viewed as a problem. For instance, from a human rights perspective, where we have the right to freedom of opinion and expression through any media, regardless of frontiers, we might view access to the Internet as a net good, as something we should provide to all. And from this perspective, bridging the digital divide is something we all need to work towards.
Now, there are, of course, parallels with AI tools. For instance, some AI tools are not free, and even those that are free have add-on plans which are not, so not all students or individuals can have access to them. So you can see the parallel: there is a similarity between not having access to some digital tools and not having access to some AI tools. However, there is a significant difference, in my opinion, and that is that, as the discussion stands today, I do not think we can view AI tools as a net good. I am not sure our discussion is at the point where we can say we need to provide access to AI tools to all our children and all individuals, precisely because we do not actually know which of the potential benefits are realised and which of the potential challenges are realised. We know nothing about the long-term consequences of AI tools, for instance. And this is why we, to some extent, caution against drawing a strong parallel between the digital divide and the AI divide.
There are other aspects to this. For instance, there are new voices, particularly in research and academia, which caution against drawing these parallels and, moreover, draw attention to the possibility of the digital divide flipping. That means that students who have access to resources and who come from socioeconomically advantaged backgrounds will continue having access to both AI tutors and human tutors, whereas socioeconomically disadvantaged students will only have access to the cheaper of those two options, and these researchers warn that the cheaper option might be AI tutors. So what we might actually see in the long run is the digital divide flipping: advantaged students will continue having access to both AI and human tutors, while disadvantaged students will only have access to AI tutors. OK, so that's a really long segue and a slightly long disclaimer on the discussion of access.
Darren: That example you just gave is something I don't think many people have considered, and it's one of the bits of the paper we were keen to bring out, because it's often overlooked that there will be new inequalities brought about, some we haven't even thought of yet, and the paper certainly touches on some of those. So thank you for that answer. Elizabeth.
Elizabeth: Thanks so much, Darren. And thanks, Samo, for really clearly illustrating just how complex this whole situation currently is, and how many unknowns there are. We don't know what the future holds, and I think it's really interesting how you mentioned that we can't just assume that AI is a net benefit. We know that digital accessibility is a really positive thing for students in terms of inclusion and equity, but we don't yet know that about AI. That's a really interesting point to make; we're still at the very beginning of this journey, and there's still a lot of research that we need in this space.
So what I'd love to ask you now is about not just the access that students have to the technology, but also the access that staff have to training across countries. One of the things that the OECD does really well is provide that global picture. So I'd love it if you could share with us how varied staff access to training is, and what the impact might be for students in terms of equity and inclusion?
Samo: Thank you for this question. I think access to teacher training around artificial intelligence, and bringing teachers into the discussion on how, when and which AI tools to use, are absolutely crucial points, so I'm glad you asked. Since the working paper was published, new results from the Teaching and Learning International Survey, TALIS 2024, have come out.
TALIS administers questionnaires to teachers in lower secondary schools in 48 education systems around the world. As such, we now have a very good picture of the use of AI and of teachers' needs around it, based on representative samples from many education systems.
On average across OECD countries, 36% of lower secondary teachers, or about a third, reported using AI, but there is actually great variation among countries. In France, for instance, 14% of teachers reported doing so, while in New Zealand the share stood at 69%, more than two-thirds of teachers.
The questionnaires also asked whether AI for learning and teaching was part of teachers' professional development, and on average across OECD countries, 38% of teachers in lower secondary schools reported that artificial intelligence was part of their professional learning activities in the 12 months prior to filling out the questionnaire, so basically in 2023.
And then the last statistic that I would like to fish out for you is around the need for professional learning activities. More than half of lower secondary teachers, on average across OECD countries, reported a moderate or high level of need for professional learning around the skills for using artificial intelligence in classrooms.
Now, we can also look at the composition of students in the schools where those teachers teach and see whether there are any differences in the reports between teachers, for instance, who teach in schools with high shares of socioeconomically disadvantaged students or high shares of students with special education needs. On average across OECD countries, we actually do not see any statistically significant differences in the need for professional development in AI based on student composition, whether socioeconomic composition or the share of students with special education needs. That said, there are education systems where lower secondary teachers in schools with high shares of socioeconomically disadvantaged students, or with high shares of students with special education needs, do report a greater need for professional learning around artificial intelligence. So I think we have some great stats there, and I do encourage researchers and academics, but also policymakers, to dig deeper into the data, which is, by the way, publicly available, and draw more targeted insights for their own education systems.
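For listeners who want to take up Samo's invitation to dig into the data, a minimal sketch of that kind of secondary analysis might look like the following. The file name and column names here are hypothetical placeholders; you would substitute the variables documented in the published TALIS 2024 codebook.

```python
# Hypothetical sketch: comparing teachers' reported need for AI-related
# professional learning across school compositions. File and column names
# are invented; the real TALIS data files are published by the OECD.
import pandas as pd

df = pd.read_csv("talis_2024_teachers.csv")  # hypothetical extract

# Assumed columns: country code, whether the school serves a high share of
# socioeconomically disadvantaged students, and a 0/1 flag for a
# moderate-or-high reported need for AI professional learning.
summary = (
    df.groupby(["country", "high_disadvantage_school"])["ai_pd_need"]
      .mean()
      .mul(100)
      .round(1)
      .unstack()
)
print(summary)  # % of teachers reporting need, by country and school profile
```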
Darren: Thank you so much, Samo. I think that's a really good summary, and I would encourage everybody to go and have a look at the TALIS report and the findings there; we'll post links in the show notes for this episode. It's fantastic to see the comparison across needs, schools, teaching staff and development, and, looking back on the 2018 results, to see the change that AI has brought, and just how much development academics and teaching staff in general are asking for to showcase good use of AI. But I won't get us bogged down too much in that; we could be here for another two hours just talking about that data and still not fully break it all down. So thank you so much. Look, I'll throw over to you now, Joe.
Joe: Let's switch gears a little bit. We're two and a half years into mass knowledge of AI now. How are we doing? You've got the big, global perspective on this. How are we doing in terms of equity and inclusion? And what can the listeners of this podcast, who are probably primarily educators working at the sharp end, if you like, do to better support equity and inclusion, given everything that's going on now, from your higher-level perspective? It might be interesting to hear from both of you on this one, because I'm sure you've got different takes.
I'll throw to Lydia first. Lydia, what's your thoughts on this?
Lydia: Yeah, that's a good question. It's also a really broad question, so I'll try to think about it a bit while I speak. I think we're at a bit of a turning point at this moment, because we have seen in the past year, and, as you said, that's changing quite quickly, with lots of new developments constantly, that AI really has the potential to make education more inclusive. But I wonder sometimes to what degree that's already intentional, and to what degree we're also just reinforcing the same inequities that we're trying to fight or trying to change. So I do believe that the fresh wind it brings into education is really welcome; to my mind, the kind of factory-based model of schooling that we have at this point is a little outdated, and AI has started to break that open a little bit, which is great, I think. But I also believe that a lot of these tools aren't really created with equity in mind. And especially since lots of them come from the private sector, they're mainly created to scale, to sell and to generate profits, which means that the priorities of tech companies can outweigh the needs of students and of the public. Then education becomes more of a product than a public good. So I think that's something to be really worried about, and I don't know to what degree educators can necessarily change that, but it's they who have the most insight into pedagogy and into education, so it's really a question of how we can empower them to be more at the table.
Joe: You clearly talk to lots of people at all different levels, in and out of education. Do you have a sense at all that there would be the possibility of diverting specific funds to some of the developers or producers of these AI tools, to focus a little more on the equity and inclusion aspects rather than just chasing the money? Because that's what's happening, isn't it? People are putting these AI tools out to scale, like you say, and to make money. But are there any central funds, European funds? Does the OECD have a sense of whether we could divert some resources into this area?
I don't know whether that's even a proper question or not.
Lydia: Yeah, sure. For sure it is. Actually, at the EU level, I'm not sure if there's a fund like this, maybe Samo knows about this, but I know of national funds that go into it.
There's, for instance, in the Netherlands a kind of public-private cooperation going on, called the Dutch National AI and Education Lab, as far as I know. It's basically researchers, educators and also policymakers who experiment a little with AI in classrooms and consult educators on how it works. So they try to find the best solutions for placing AI according to what educators need, and that also relates to equity and inclusion matters, because those matter to a lot of teachers. And they try to scale that. But yeah, that's just one initiative I know of. I don't know, Samo, if you have other insights on that.
Samo: No, no, not really, to be honest. Sorry.
Joe: Well, that's something we're going to have to follow up on, I think, because that's a strand I want to pull on. So you've given us the first point to go to there, Lydia, and we can maybe do some searching and see what we can find, because I think there are possibilities to get other people on to talk about that.
Elizabeth: Lydia and Samo, is there anything that we haven't asked you that you'd really love to be asked?
Samo: I would maybe stress two points. One relates to the need for more data and more evidence in this area. This is something we came across while drafting the working paper, and I believe it still holds true today. A lot of the evidence focuses on the impacts on students' academic outcomes, for instance. We know very little about non-academic outcomes, about the social and emotional well-being of students and how that is affected by the use of some of these AI tools. And by the nature of things, we know virtually nothing about the long-term consequences; we need policymakers and academia to create longitudinal studies that follow people over time and can measure these long-term, or medium-term, impacts.
A lot of the research is also focused on English-speaking countries, and we simply do not know whether the same challenges and benefits translate across languages. Bias is a great example of that. If a lot of the bias stems from the data itself being biased, and the data itself is in English, how does that translate to other languages? Is the bias greater, exacerbated or not, in other languages? We also know that a lot of the research, at least at the time of drafting the working paper, has focused on the kind of absolute challenges of AI tools. For instance, a lot of the research has repeated that AI tools are biased, and that's fine, it's an incredibly important point to raise. But I would like to be a little controversial here and say: well, we also know that teachers are biased in assessment. So, in a way, the questions I think we should also start asking are: where are AI tools more or less biased? Who are they more or less biased against? Under what circumstances are they more or less biased? Same principle with VR and AI tools: are AI-driven VR tools truly better than non-AI-driven ones? Are they truly better than simple 2D videos? These are the relative questions which I feel are at the moment unexplored. So that's one point.
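As a sketch of the relative question Samo poses, one could compare a scoring gap between student groups for an AI grader against the same gap for human graders. The data below is simulated purely for illustration; the numbers and group labels are invented and carry no empirical weight.

```python
# Minimal sketch of a "relative" bias audit: instead of asking only whether
# an AI grader is biased, compare its scoring gap across student groups with
# a human grader's gap. All data here is simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
true_quality = rng.normal(70, 10, 1000)  # latent essay quality
group = rng.integers(0, 2, 1000)         # 0/1, e.g. a language background flag

# Simulated graders: each adds noise plus an invented group-dependent offset.
ai_scores = true_quality + rng.normal(0, 3, 1000) - 2.0 * group
human_scores = true_quality + rng.normal(0, 5, 1000) - 3.5 * group

def group_gap(scores: np.ndarray) -> float:
    """Mean scoring error for group 1 minus group 0 (0 = unbiased)."""
    err = scores - true_quality
    return err[group == 1].mean() - err[group == 0].mean()

print(f"AI grader gap:    {group_gap(ai_scores):+.2f} points")
print(f"Human grader gap: {group_gap(human_scores):+.2f} points")
```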
The second point I would like to bring in is the role of teachers, and we have touched on some of this already, but I think it's absolutely crucial that policymakers ask teachers what teachers need, so that they can target their guidelines and advice from the central level to the teaching level accordingly. If we want teachers to adopt these AI tools, then they definitely need opportunities to learn about them, to learn which AI tools are appropriate under which circumstances and when they are not, and also to be aware of the broader social aspects of using AI tools, for instance the environmental impact and the fact that there are other social costs of using them, which at the moment some teachers and students might not be aware of.
And then the last point related to teachers is the need for inclusive development of AI tools, and this is especially important around AI tools which target equity and inclusion. We need to bring in teachers right from the beginning of developing these AI tools. We need to bring their voices in there.
Elizabeth: Thanks, Samo. How about you, Lydia?
Lydia: I don't have so much to add; I think Samo already mentioned a lot of things that are very important. Just to add on the aspect of bringing teachers in: I agree with you all that this is the key thing we have to change at the moment in how we approach AI in education and its development. And I also think we should safeguard the role of teachers in education in a more general sense, because right now, when we think about AI in education, it's really about how to make it more efficient, how to deliver knowledge and learning in a more streamlined way. But to my mind, that's not really what education is about. It's deeply human and rooted in relationships: between students, between students and teachers, and also between teachers, honestly. So I think that's something we should protect and help thrive in the future as well. Yeah, that's something to keep in mind, even though all these developments can be pretty great.
But if we're trying to rethink education, and that's something we should do, then we should not let that happen just by thinking about AI, but really bring in other voices and other people who also have something to say about this.
Elizabeth: And I think what you've both done so beautifully today is really bring to the forefront that inclusion is not just about those metrics of achievement, but there is so much more to inclusion than that. It does include that, but it goes so far beyond into those social and emotional aspects, and we don't understand enough about AI's role in that yet, but we need to keep that front of mind. So thank you so much, both of you, for really highlighting that for our audience.
Joe: Thank you so much, Samo and Lydia, for sharing with us so many insights on so many different levels. It's just been amazing. We really appreciate it.
So that brings us to the end of today's discussion, and as always, we encourage you to go and explore the working paper, which is linked on the website and you can find links to the tools we discussed during the episode on the ADCET website at www.adcet.edu.au/ilotathings, I-L-O-T-A-T-H-I-N-G-S.
Darren: And of course, we'd love to hear from you, as we want this podcast to be an ongoing conversation. If you have a question or a comment about anything to do with AI, universal design for learning, or accessibility, or anything we may have discussed today, please contact us via old-school email at feedback@ilotathings.com.
Elizabeth: Well, that's our time for this episode. And we'd like to say again, a huge thank you to Samo and Lydia for all of the insights that you've shared and everything that you've brought to this conversation. Thank you for providing this time for us to really deeply think about equity and inclusion and the impact that AI might have.
So a huge thank you from all of us who have joined you here today. For anyone in the audience listening, please join us next episode as we continue to explore a lot of things. Till then, take care and keep on learning. Bye.
Samo: Bye. Thank you very much for having us. It's been a real pleasure.
Lydia: Yeah, thanks so much for your interest. And it's such a pleasure to talk with you.
Darren: Thank you all. Bye.
Joe: See you next time.
Announcer: Thank you for listening to this podcast, brought to you by the Australian Disability Clearinghouse on Education and Training. For further information on universal design for learning and supporting students through inclusive practices, please visit the ADCET website. ADCET is committed to the self-determination of First Nations people and acknowledges the Palawa and Pakana peoples of Lutruwita, upon whose lands ADCET is hosted. We also acknowledge the traditional custodians of all the lands across Australia, and globally, from wherever you may be listening to this podcast, and pay our deep respect to Elders past, present and emerging, recognising that education and the sharing of knowledge has taken place on traditional lands for thousands of years.