ADCET

ILOTA Things: Episode 9 - Done and Dusted - Reflecting on 2024 and Predicting the Future

Darren Britten, Elizabeth Hitches, Joe Houghton Season 1 Episode 9

Welcome to ILOTA Things, the ADCET podcast where we explore Inclusive Learning Opportunities through AI. In this episode, titled Done and Dusted - Reflecting on 2024 and Predicting the Future, we're going to reflect back on the past year and discuss some of the many changes and advancements in generative AI and Universal Design for Learning, and how this evolution has provided opportunities for students and staff to improve accessibility and inclusion for everyone.

More information, including episode notes and links, is available on the ADCET website.

Announcer: Welcome to ILOTA Things, the ADCET podcast where we explore Inclusive Learning Opportunities through AI. In this series, we'll explore the exciting convergence of universal design for learning, UDL, artificial intelligence, AI, and accessibility, and examine ways in which we can utilise emerging technologies to enhance learning opportunities for educational designers, educators, and students. Now, here are your hosts, Darren, Elizabeth, and Joe. 

Darren: Hello and welcome from whenever, wherever and however you are joining us, and thank you for your time as we investigate ILOTA things, that is, Inclusive Learning Opportunities through AI. My name is Darren Britten and joining me once again in our Artificial Intelligence, Universal Design and Accessibility hot air balloon are my co-hosts, Elizabeth Hitches,

Elizabeth:  Hi there. 

Darren: and Jo Houghton,

Joe: Hi there. 

Darren: Today's episode is titled Done and Dusted, Reflecting on 2024 and Predicting the Future, where we're going to take advantage of our great vantage point in our hot air balloon and look back at 2024 as it disappears behind us, at the myriad of AI updates and releases that have happened, and the changes from UDL 2.0 to 3.0, and undertake the very easy task of predicting what's coming our way in 2025, because we know predictions are always easy to do. So, Joe, as our go-to 'what the hell's been happening with AI' resident, I was wondering if you could kick us off with a chronological trip down AI memory lane, starting way back in January 2024.

Joe: Oh, my goodness. I don't know that I can do a chronological month by month, but I mean, I can reflect perhaps on what we've seen in the past year, because there's been a lot. I mean, I do a weekly newsletter on AI and I must admit, when I started it about 18 months ago, I thought it would run for half a dozen issues and then I'd run out of things to write about. And every week I've been finding plenty of stuff to put in there, plenty of updates. I mean, big trends that we've seen, I suppose, in 2024: the rise of a lot more tools. You know, the AI space is very popular, there's billions and billions being pumped into the big names, but there's a lot of smaller tools, you know, arriving every week. So even keeping up with what's just out there to play with is a constant, constant battle.

Voice, I think, has been for me one of the big changes, particularly in the second half of this year. The capability to interact with a lot of the AI tools has changed from primarily being a text input medium, and certainly in the last two or three months we've seen ChatGPT's advanced voice mode arrive. We're going to have a look at EVI, I think, later on, or listen to EVI later on, and this is changing the game for many people, because for a good proportion of people, perhaps keyboards and typing are not an optimal way of interacting with the computer. But now we've got the possibility of using voice to interact with the computer. And this is only going to get built on as we go further.

We've got a plethora of AI chatbots. It's amazing, I mean, the research still seems to show that around 70% of people are still just using ChatGPT. It was the first one to market, it's the one everybody heard of, and it's kind of the one that they all know. And you know, even now, when I'm doing introductions to AI to all kinds of audiences, corporate audiences, educational audiences, and I introduce Perplexity and Claude, they're saying, oh, what are these? We've not heard of these before. I mean, we're in a little AI bubble, aren't we? So, you know, we use these tools all the time and we know about them, but it's really interesting to recognize that we're still very early on in the adoption curve of these tools. So, you know, if you're listening to this podcast, you're kind of almost preselecting yourself into being interested in AI and UDL and inclusivity, but most people aren't. So, bearing that in mind, we're still having to bring people from a very low knowledge base about AI and about all the things that then come with it: the ethical considerations, the equity considerations, the engagement considerations. So I think, along with voice and then video coming through, the AI chatbots have been super-powered this year, they've got more options. One of the things that was a big deal last year was that a lot of them by default shared their data, because the AI companies needed training data, but there was quite a big pushback from the AI community on that being a default. So things like Claude and Perplexity now don't, I think, share your data by default. And you can finally turn off data sharing in ChatGPT, although, you know, it's hidden in a third-level menu and it's not obvious how to do it necessarily, but you can do that. So we've got more options.

I think certainly in the back half of this year, we've seen the emergence of the start of personalization of the AI tool. So, we've had this magic box for two years now; as we're recording this at the end of November, it's two years this week since ChatGPT was released. So we've got the possibility now to start putting in things like what we want and what we don't want. ChatGPT added an option called memory. So, if I say I'm colorblind, or I live in Dublin, or I like, you know, dogs or whatever it is, the AI picks that up as a kind of personal anecdote and squirrels it away in the memory. Now, you can go in and edit it or whatever, but that builds a richer picture of me when ChatGPT is responding to questions in the future. And that persists across chats, and we're getting that built into other chatbots as well. Yesterday, Claude announced styles, and this is an interesting one. You may not have seen this. Elizabeth's looking kind of quizzical at me. Yeah. So, Claude now has the ability to write in different styles. It offers you four different styles, you know, kind of concise and that kind of thing. But you can also upload a piece of your own writing now and create a custom style.

So it will analyze your writing and it will create a style that then, you know, you can choose, and now when you ask Claude a question, it will respond in your style. Now, how good this is yet, you know, we need to test it and see. I tried one yesterday and it was all right, but it was still a little bit kind of American and jingoistic for me, if you like. So I think you'd need to tweak these things, but this is coming. All this stuff is coming. So I think, yeah, we're still in the early stages of the journey, even two years in.

I mean, AI has been going great guns. UDL's had huge changes this year with 3.0 being released from CAST. Elizabeth, talk to us about that and how that's affecting things.

Elizabeth: Yeah, it's been a really exciting year. So for anyone who may not have been watching this space with UDL, I think it might have been around July 31, late July, very early August, that the 3.0 guidelines got released. And why this is so exciting: well, we have for a long time now had the idea around the what of learning, so what is it the students will be learning or engaging with? We've had the why: why are they going to engage with this? How can we really give them that reason? And we've also had the how: how are they actually going to go about that learning process? How are they going to be able to really express what they know? One of the really interesting ways that the UDL framework has developed over this time is we now have the who. So we're really thinking about the diversity of the learners engaging with our learning experiences. How can we make sure that we're really representing diversity? And there are a few different ways that I like to think about that. So I can think about it in terms of: if I'm in a higher education course, what literature are students going to be engaging with? Are we looking only at academics and researchers from certain contexts, where perhaps a diverse set of views on a particular concept could be really beneficial? I also think about it in terms of visual representation. What's on our slides? What is in those physical and digital spaces? If we're talking about higher education students, are we seeing images that really show the diversity of what a higher education student looks like, a diversity of age ranges, of ethnicities, of genders, or are we seeing a really homogenous set of images? Because you can imagine, if you're a student, you're engaging with maybe a PowerPoint and maybe a set of posters, and if none of those students look like you, we're really not going to be doing a good job of saying: yes, you are considered in this learning experience, you are valued in this experience, and we want you to be here. So I'm incredibly excited that we now have that representation of diversity as a key part of the UDL framework.

We also have some new ideas around welcoming identities and also valuing modes of expression. And the idea around valuing those modes is that if we are offering options, which we should be, how are we doing the job of showing that every single way that is offered or open to students is equally valued? So not just having the options there, but knowing that, you know, if you're in an online class, whether you are putting messages into the chat or using audio, both of those are equally valued and equally valid contributions to that class.

The other thing that is really interesting to see is the overarching part of that framework, which really centers on learner agency. So how are we supporting learner agency in those learning experiences? If we take that into the AI realm, as Joe's mentioned, there are lots of different ways now that people can engage with that technology, in the types of things that are input to it and the types of outputs that we receive back; different formats are all available. So there's much more attention to the diversity of ways that people might like to engage and respond with that.

In terms of learner agency, I think we have some really interesting work to be doing in terms of how do we support students to have that agency around their use of AI, how do we give them that toolkit to critically analyze and critically think about how they're using it, why they're using it, and what they might be using it for.

So many different ways that as UDL advances and AI advances, we can really think about how we can leverage the two and really help that intersectional space of AI, UDL and accessibility continue to advance. 

Now, Darren, over to you. What have you seen change? Is there anything in terms of students and accessibility in 2024? 

Darren: Absolutely nothing at all. Of course, kidding there. There's been a massive change with generative AI coming on board. Look, one of the major ones has certainly been, as Joe has touched on, the improvements in audio, which have led to a whole range of different tools being more capable and more accurate: speech recognition, getting transcripts of audio of lectures, those kinds of things. So one of the key things for me has certainly been the AI cleaning up that audio and reducing background noise that's picked up from your microphone. And you're starting to see that in some applications where you can turn that on and it's doing filtering and it's starting to clear up and do some noise cancelling in the background. That's just getting built in. Look, Adobe's got their really good audio cleaner tool as part of their suite, which is really good at doing that. And there's a lot of other improvements in that realm.

The side benefit of having that cleaned up is better speech recognition, you know, if you don't have as many interruptions in the noise and you've got better quality speech. Along with the improvements in speech recognition through AI, the two of those are going hand in hand, as Joe was saying, so we're starting to get more of those voice-to-voice and speech-to-speech tools coming along. For some students, that's a big game changer. You know, my typing or my ability to input text or communicate in a written form may be limited, but now I can speak and have it more accurately, should I say, recognize my speech and respond to me in kind. So we're really starting to see the text and speech modes come together: as Joe was mentioning, with ChatGPT and the voice function, you can have that conversation, but in the background is also the text. So you can swap back over to the whole text dialogue and you've got a copy of all of that. So I can sit back, not look at the screen, have a whole conversation, and I've got all of that text automatically there. That has not been possible in the past, and it's happening in more or less real time. So that's been a huge improvement.
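
For anyone curious what that looks like under the hood, here is a minimal sketch of such a voice round trip, written against OpenAI's Python SDK purely as an illustration: the model names and file names are assumptions, and ChatGPT's built-in voice mode handles all of this internally. The point is that the spoken exchange is text underneath, which is why the full transcript Darren describes comes along for free.

```python
# A minimal sketch of a voice round trip that keeps the text transcript.
# Assumes OpenAI's Python SDK and that OPENAI_API_KEY is set; model and
# file names are illustrative only.
from openai import OpenAI

client = OpenAI()

# 1. Speech in: transcribe the spoken question to text.
with open("question.wav", "rb") as audio:
    heard = client.audio.transcriptions.create(model="whisper-1", file=audio)

# 2. Think in text: the model answers the transcribed question.
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": heard.text}],
)
reply_text = reply.choices[0].message.content

# 3. Speech out: synthesise the answer as audio.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply_text)
speech.write_to_file("reply.mp3")

# The text dialogue is preserved alongside the audio, for free.
print("You said:", heard.text)
print("AI said:", reply_text)
```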

Along with that has been speech recognition for students that are using dictation on their various devices as well. So, you know, being able to quickly input again away from the keyboard. Microsoft Dictate is using some of these functions as well, as is Google speech input, and there's a whole range of tools now doing that. While that's improved dramatically, it's still not at 100%. You'll still get better accuracy from a human transcriber, but we're getting well up towards 95% plus if your voice, language and accent meet certain profiles in certain engines. It's certainly not for everybody from every language, but it's improving day on day, week on week. It's getting much better. So, look, I've been excited about that.

The other side of big advancements this year has been the integration of those technologies into existing assistive technologies. So that AI is coming on board, and I suppose the obvious one for me has been where you've seen that get into Grammarly, probably one of the most popular applications with students for spell checking, which now has AI built in. Similarly, tools like Glean, which is note-taking software that some students use, rely on a transcript, and some of the models have the ability to generate the transcript as well. So against the transcript that you put your notes on, Glean has now introduced the ability to do a quick summary and an overview, or a quick flashcard or a quick quiz on that transcript. So having that transcript, and now this AI being bundled into the tool, is certainly improving that.

The flip side of that is the risk that now these tools have AI, and we're starting to see that some institutions are not wanting students to use these tools because they've got AI. It's the big blanket response: it's got AI, you can't use that. That's a whole other show, which we might touch on in 2025 as that discussion continues: AI and AT and the merging of those.

The other big game changer, and this is probably the last one that I'll put in for this year, has been image recognition. At the start of the year, that was still really hit and miss, you know, and we touched on that early on as well. I think our first episode was on getting image descriptions, and it was a good start, et cetera. If we were to do that show now, we'd just go: well, that's great, it doesn't have a lot of those issues that we had before, though there are still some. But it's a great asset for students, and for staff putting things together, to quickly get an image description. And I can ask for it in a particular context for the rest of the learning material that's there. There's options for that inside screen reading software, inside NVDA and JAWS, which are, I suppose, the two most popular screen readers, that will automatically help describe images for students, help describe the layout of a website, which they've never had access to before. They'd get the technical aspects of 'there's this heading with this structure', et cetera, but there was no sense of the mood a page is portraying because of the pictures of people on screen: it's looking friendly, it's in bright colors, those kinds of things that you just don't get, you know, the visual aspect that gives a certain amount of emotion, et cetera. So that's kind of changing, or it has changed, and it's continuing to change.
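
As a rough illustration of that 'description in context' idea, here's a short sketch that asks a vision-capable model for alt text tied to the surrounding learning material. It's written against OpenAI's Python SDK as an assumption; the file name, course context and prompt wording are hypothetical, and the same pattern applies to whichever vision model a screen reader or authoring tool plugs in.

```python
# A sketch of requesting a context-aware image description, as Darren
# describes. Model, file name and prompt are illustrative assumptions.
import base64
from openai import OpenAI

client = OpenAI()

with open("week3_diagram.png", "rb") as f:  # hypothetical course image
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": ("Write alt text for a screen-reader user. This diagram "
                      "appears in a first-year biology module on cell division, "
                      "so describe it in that context, not just visually.")},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)

print(response.choices[0].message.content)  # the context-aware description
```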

What I'm probably really looking forward to, if we're starting to cast our minds forward, is seeing more of that AI finding its way into assistive technologies, and some of those being adapted into various things. We're seeing that in teaching and learning; we're seeing a bunch of the learning management systems and those kinds of things getting this built in, and vendors are also starting to build in AI.

So, Joe, I suppose I'll throw back to you with those advancements and integrations with AI and what's got you excited for 2025? 

Joe: Oh, this is the scary bit, isn't it? This is the one where in a year's time or two years' time, you're going to go back to the archive and you're going to say, here's what Joe said, and then you all laugh at me because it was completely different. Yeah. So we are going to have flying cars and we are all going to be androids by the end of 2025. No, we're not. No, no.

Okay, so where are we going? I mean, obviously it's going to be incremental change to some extent, but if you look at the pace of change over the last two years, and we think that curve is probably accelerating rather than flattening out, who knows what's coming down the line.

I mean, you know, I just scribbled a few points down of things that I think are coming. This year, tools like Otter and the transcription tools, you know, the tool built into Zoom and stuff like that, have got so much better at creating summaries and action points from meetings and extracting those from, you know, complex discussions and whatever. That kind of stuff is just going to become ubiquitous.

I mean, Apple Intelligence has been launched and is kind of rolling out over the next few months globally. It's going to be built into, you know, all the Office apps. You can pay for it now and get Copilot in all the MS Office apps. Google have got the same kind of thing in their suite as well. 

Voice input will just get better and better. I mean, we've got now automatic real time translation. So this is, I mean, from a UDL perspective, and I can see Elizabeth nodding there in the corner, yeah, this is just wonderful, isn't it? I mean, you know, you can now get an earbud that you stick in your ear, and it's like the Babel fish from, you know, The Hitchhiker's Guide to the Galaxy, and it'll do automatic real time translation. So I can give a lesson in English and you can hear it in, you know, Malay or Mandarin or French or whatever your native language is. And as long as you can tune my droney voice out at the front of the room, yeah, you can listen to it in your own language. And that's just getting better and better and better.

If Elon Musk carries on doing what he's doing with Neuralink, I mean, they've already done proofs of concept with people now, with, you know, brain electrodes and devices on their heads and whatever, controlling computers and doing stuff just with thought. So, okay, we're talking keyboards and voice, but in a few years we're going to have an implant, or we're going to have something that we just, you know, pop on our forehead or something, and we just think at the computer and it's writing for us, it's doing things for us and whatever. And it sounds like kind of crazy science fiction, but this works today in its early, early stages. And we all know how fast the technology moves once it's been made to work in a lab; it's very, very quick now to move into mass market. So we're going to see this stuff, the blending of AI and hardware.

I mean, Claude announced a couple of months ago that agents are coming that can control your screen. So it will be able to, you know, kind of fire up Word, write some paragraphs or whatever, export that as a PDF and then put it in a folder, you know, and if you can describe that sequence, then the AIs will be able to do this. It's early stages right now, but over the next year or so we're going to see this just explode. This is one of the next big changes in AI we're going to see in 2025: this agentic breaking down of tasks into smaller tasks, and then linking those tasks together into bigger, more complex sequences that will go across different applications and stuff. So that's coming.
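
To make the 'breaking down and linking tasks' idea concrete, here is a deliberately toy sketch of that agentic pattern. Everything in it is hypothetical: in a real system, a model such as Claude (via the screen-controlling agents Joe mentions) would generate the plan and drive real applications, whereas here the plan is hard-coded and the 'tools' just print what they would do.

```python
# A toy sketch of the agentic pattern Joe describes: a goal is decomposed
# into steps, and each step's output feeds the next. All names hypothetical.
from typing import Callable

def write_document(text: str) -> str:
    print(f"[word] writing: {text!r}")
    return "draft.docx"

def export_pdf(path: str) -> str:
    print(f"[export] converting {path} to PDF")
    return "draft.pdf"

def move_to_folder(path: str, folder: str) -> str:
    print(f"[files] moving {path} into {folder}/")
    return f"{folder}/{path}"

TOOLS: dict[str, Callable] = {
    "write": write_document,
    "export": export_pdf,
    "file": move_to_folder,
}

# In a real agent, a model would produce this plan from the user's request;
# here it is hard-coded to show the decomposition-and-linking idea.
plan = [
    ("write", ("Two paragraphs on UDL 3.0",)),
    ("export", ()),            # consumes the previous step's output
    ("file", ("Reports",)),    # ditto, plus a destination folder
]

result = None
for tool_name, extra_args in plan:
    args = (() if result is None else (result,)) + extra_args
    result = TOOLS[tool_name](*args)

print("final artefact:", result)  # Reports/draft.pdf
```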

From an educational point of view, one of my hopes, and I don't know whether it will happen in 2025, is that AI starts to be accepted in education, because at the moment I'm still seeing a lot of educational institutions, you know, looking at AI and saying this is cheating, this is negative technology rather than positive technology. I mean, I think I was reading about some UK school that has completely banned AI, and it just seems such a retrograde way of thinking about the technology. This is, you know, are we glass half full or are we glass half empty? I'm glass half full. I think, you know, Darren and Elizabeth are as well. But it just seems very sad that people are still in a state of fear rather than a state of, okay, wonder and opportunity and what can we do with this? So that's where I hope we'll go.

As a kind of trainer in AI, I'm looking forward to 2025, because in Europe there is an act that's been enacted. It went live in August this year, and it's called the European Artificial Intelligence Act, and it mandates that all educational institutions have to have trained their staff on AI. So there's going to be this explosion of training need, certainly for European educational institutions, and I would be surprised if similar things don't happen around the world. Now, Australia tends to be ahead of that curve, so you may be seeing this already, but that's certainly going to be an opportunity, but also a challenge, for many educational institutions to actually get that onto the books, if you like, and get everybody up to speed. And then, hopefully, will we see teachers changing their approaches to leverage multiple means of representation, multiple means of engagement, all that kind of stuff? So I think that possibility for leveraging the personalized learning potential of AI is something that we're going to see coming through in the capabilities that AI brings us. But then, can we do that? So, I'm going to throw over to Elizabeth.

Elizabeth: For me, looking towards 2025: you know, you asked about UDL and where that might be going in 2025, and I think we are definitely in a space where we're seeing more UDL adoption happening, but I think we're also still in a space where there's work that needs to be done to really help build capabilities in this area for staff across the sector and build staff confidence. It's really hard to test a new strategy out if you're not confident in what it even looks like or feels like, or what the benefits might be, and whether those benefits justify the particular time and energy that it takes to do that upskilling. And so I think there need to be some conversations around how we do this work. How do we build capability, build staff confidence? Being mindful of the workloads of staff, I know many people are experiencing that state of overwork at the moment, so how do we build it into workloads to make it manageable?

How do we also recognize that accessibility, inclusion, universal design, these are things that are applicable to every single role? Whether you're a student, or whether you are a student-facing staff member, from our student support offices, to our teaching staff in the classroom, to the learning designers, every single one of those roles all the way through needs to be considering accessibility and inclusion for the students, and also for themselves and those working around them. So how do we get this across the whole sector and make it everybody's business?

The other thing we need to do with this new update to the UDL guidelines is really help to build and share some examples where there might be new considerations that we haven't come across before. So we might be looking at that principle around representing diversity, but that could be very different in an Arts-based field as opposed to a STEM-based field, or perhaps our work-integrated learning units. What does it look like in those places? So we're really going to have to be doing that work to develop and share examples. And I think, thankfully, the people I interact with in this space are all really open to sharing their practices and learning from each other and building those collaborative and collective spaces for that knowledge sharing to occur.

In terms of students, I think, based on some interesting examples we've seen this year, we need to do a lot of work in 2025 to support our students in their use of AI. So earlier this year we brought you that example of people being told to eat a small rock a day. And we laughed about that, saying, okay, we know that we shouldn't be eating a small rock a day. We're not going to take that advice and go, great, let me go find the next rock. You know, we're going to look at that and think, actually, based on what we know about nutrition, what we've been taught is food and not food, we wouldn't be taking that advice. But what happens in a particular course where students may not be as knowledgeable in that field, and may not realize that there's misinformation or inaccuracies in that output? Perhaps it's really minor and it's not a big deal, but what happens if, in a chemistry experiment, as we've mentioned before, certain steps are summarised or skipped and important safety measures aren't taken? So in that critical thinking space, we really need to be supporting students to do that critical thinking and really know how this output is actually put together. What do those responses from the AI actually mean? How can we really take that critical thinking lens to it, not just take everything as truth, but really support students in knowing how to go about critically analyzing what is developed by that AI?

I also think we need to be doing some work around supporting student safety. And the reason I say that is that I came across a particular article very recently, I think CBS News was reporting it, and I'm going to read the headline to you because it definitely caught my attention. So the news headline is 'Michigan college student who was told to please die by Google AI chatbot wants these tools held responsible'. Why I think this is an important example is that I don't know the details around how the student was interacting with that particular AI, but what came out of that interaction was some very distressing content, content that the student really felt was personally aimed at them, because the AI wording was more or less saying: this is not aimed at all humanity, this is aimed at you personally. You know, how many ways can you take that? You can't really take that any other way than: this AI is talking to me directly. So, it's made me think about what we have to do in research and in ethics applications. We need to consider if our conversations might be a source of distress, or might in some way provoke a distress response even after that research has taken place, and we need to put in safeguards. What do students do if they do experience that? Where can they go, both inside the institution and outside the institution? I think it's going to be really important for us to really think about how we make this happen in the AI space if we're the ones putting that AI on the student's table. So how do we make sure they're critically reflecting on the AI tool, that they know how the AI is working, that it's drawing on data, drawing on patterns in the data? And if a dark or rogue comment comes out, to know that it is not about them, that it is just some pattern that's been triggered, and, you know, the stars have aligned in a really bad way and you've gotten a response that, had you perhaps prompted it at a different time, in a different place, may not have come about. So how do we prepare students so that no one, particularly students who might be quite vulnerable to those types of comments, experiences distress if that comes up?

Now, I know this is a very rare example, but I think it's worth mentioning when this happens even once, because if it can happen once, it could happen again, and we want to make sure that we've prepared our students in the best way possible so that there are no adverse events from their playing and exploration with AI.

I can see Joe's really eager to jump in, so I'd love to hear your thoughts on that, Joe.

Joe: Yeah, I mean, I know this is a darker turn from what we're normally talking about, but building on what Elizabeth said, there was also a case in mid November of a student who actually took their own life as a result of interacting with an AI girlfriend bot; again, you know, the conversations went the wrong way and sent this student spiralling down. So this is a very, very powerful technology that we're starting to interact with. You know, the computer becomes another person to us. And if you're in a sort of, you know, a mental space where you are vulnerable, and we know that young people are in that high-risk space, and I mean, I've worked in mental health for 40 years, so this is a known area of concern. So, you know, as AI educators, as AI evangelists, if you like, it's incumbent on us to really make sure that all our students, and the staff that we are interacting with, are aware of not only the upsides, which is what we play to most of the time, but also the downsides, the potential for harm here.

So I think Elizabeth's point is really well made. I'll hand it back to you, Elizabeth. 

Elizabeth: Thanks, Joe. It's actually sparked an idea right now, so this is not something I was going to mention before, but this is why I enjoy our conversations so much. It's made me think maybe we need to have some sort of system where students know, if one of these comments comes up or if one of these conversations takes a dark turn, how do they report it? Where do they report it? How do they know that it's not okay, and how do they know that something's going to be done about it? So really arming them with those tools to be able to respond in moments like that. And yeah, I don't have an answer for how that can happen at the moment. I think that's going to be a broader conversation.

Joe: Yeah, I mean, it definitely could happen. I mean, if you look at Google Maps, Google Maps recently got a new update, and now in the bottom left hand corner of my Google Maps on my dashboard is a little kind of triangle with an exclamation mark in it. And if you go past an accident on the road or, you know, a blockage or something like that, you can just tap on that and it gives you six buttons: you know, is this an accident? Is it a police check? Is it a blockage on the road or whatever? And you just tap the button and it sends that information off to Google. Now, building something like that into the chatbots: at the moment they have a plus and they have a minus, you know, a thumbs up and a thumbs down for stuff. You could have a warning, you know, a higher-level response to a chat reply, and that could be built in tomorrow if they wanted to do this stuff.
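
Joe's point that this 'could be built in tomorrow' is plausible because the mechanics are small. Here's a minimal sketch of such a reporting hook: one tap on a warning icon packages the flagged exchange and posts it for human review. The endpoint, field names and categories are entirely hypothetical.

```python
# A minimal sketch of a one-tap "report this response" hook, alongside the
# existing thumbs up/down. Endpoint and field names are hypothetical.
import json
import urllib.request
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class HarmReport:
    conversation_id: str
    flagged_message: str
    category: str       # e.g. "distressing", "unsafe-advice", "other"
    reported_at: str

def report_response(conversation_id: str, flagged_message: str, category: str) -> int:
    """Package a harmful chatbot reply and queue it for human review."""
    report = HarmReport(
        conversation_id=conversation_id,
        flagged_message=flagged_message,
        category=category,
        reported_at=datetime.now(timezone.utc).isoformat(),
    )
    req = urllib.request.Request(
        "https://example.org/api/reports",  # hypothetical review endpoint
        data=json.dumps(asdict(report)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # e.g. 201 once the report is queued

# One tap on the warning icon might translate to:
# report_response("conv-123", "<the distressing reply>", "distressing")
```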

Elizabeth: Great idea, Joe. And I think, yeah, as much as this has taken a darker turn to the conversation than we thought, when we are looking towards the future and we're looking towards 2025, in an ideal world everybody would be interacting with this safely and having those positive experiences. And for us to really make that happen, for that blue sky thinking to really happen, then we need to be considering: what do we do with the clouds? How do we manage those clouds that pop up? So I think it's a really interesting conversation, and hopefully for those of you who are planning on integrating AI into your courses in 2025, you can also think about how you have that conversation with students, just to fully prepare them on how the technology works. Maybe even bring up the examples of some of those rogue cases, make it a discussion point, just to say: FYI, as fantastic as these tools can be, here's what happens when things go a little awry, and here's what you do if you experience this.

Darren: Look, I think with that comes something we'll probably find more of in 2025 as well, and that's regulation around some of these. A lot of governments are still trying to get their heads around exactly what this means. That's not to say there won't be a rogue educator either, but we've already got bubbles of information happening through social media, through other networks, et cetera, and the risk is the same thing will happen with some AI: they'll just feed on the same input in a closed environment and do exactly that. So, look, there is that negative side, don't get me wrong, but from my point of view, with some students with disabilities, the positives at the moment certainly outweigh it, and I think we need to keep that in the conversation as well, so that we're not throwing it out along with the 'it's academic integrity, it's bad, it's going to write essays for you' concerns, as we've discussed before. Because it has huge implications for students being able to be independent, to be autonomous, in this space.

And I think, as Joe mentioned, 2025 will really be that year of personalized AI: that notion of memory and it getting to know me, and some of the tools out there at the moment that can assist, which we'll get to in a minute, like the Empathic Voice Interface, which picks up on what you're saying and how you're saying it. So it's not just text that it's reading, which you can interpret in a whole range of ways. We know people can misinterpret an email: 'oh, they're being angry with you', 'I wasn't angry at all'. Or they're upset: 'No, I'm not upset. I was actually quite cheery. But now you mention it', et cetera.

It can pick up on: you sound a little bit distressed, or you sound anxious, or you sound really cheery about this. So getting that as an extra form of input that goes in there can then be a trigger for: hang on, you're sounding upset with this. Let's wheel this back a little bit and dive into that.

So look, we're looking at a lot of changes in 2025, but there are two bits of tech out there at the moment that are just going to improve no end, and we've had various versions of these from, you know, Google Glass and those kinds of things. Two have caught my attention in the last month or so. The first is Hearview AI, which again is a pair of glasses that, through your smartphone, will pick up speech that's happening and turn it into subtitles or captioning on the glasses that you're wearing, so that you can be part of that conversation live.

There's another company called Even Realities. Their G1 glasses look pretty close to regular glasses, and you can get them with prescription lenses. This is new technology that builds a display into a filter that sits inside the lens, so it's not like a big projector sitting on it. They look like normal glasses frames; they've just got a little bit of extra material behind your ear, on the arms of the glasses, and again they hook through to your phone and can do a whole range of different things to assist individuals. And that will, I think, become a bit more ubiquitous, not just for students with a disability, though they're the ones that will see the biggest access advantage in this. A couple of those companies, and some others, are also working on translating sign language. So for some people who communicate that way, it can see me sign and then voice that for people, translating back into speech for those who don't read sign, so I can go out and communicate with people. Along with the AI noise filtering, I know some of these have been trialled in noisy restaurants and environments like that, where it can pick up who's speaking: I'm looking at that person, so it will instantly start to drop out all the extra noise that's there, and I can still have a conversation with somebody rather than 'Can you repeat that? Can you repeat that?' Look, they're compounding a lot of these technologies together, and I think that's exciting for me, but yes, we do need to be mindful of what's there.

I suppose with that view, and being mindful of some things, however, Elizabeth and Joe, I'd like to welcome along our surprise guest for today, or maybe not a surprise. Joe, you mentioned EVI earlier, the Empathic Voice Interface, an AI engine that's going to help us today, I'm hoping, do some crystal ball gazing into the future.

So first off, EVI, welcome to the ILOTA Things podcast. 

EVI: Hey there. Great to meet you. I'm doing well and excited to chat with you. How's your day going? 

Darren: My day's good. We're having a podcast with my friends Joe and Elizabeth. We're looking at the future of artificial intelligence, universal design for learning, and accessibility, and we're trying to predict some of the future things in 2025. And I was wondering, how do you predict AI working for and with students with a disability in 2025?

EVI: Based on current trends, AI in 2025 will likely provide personalized learning experiences through adaptive technologies that automatically adjust to each student's needs. We'll see more sophisticated text to speech, speech to text, and real time captioning systems making content more accessible. AI will also power smart assistive devices that help students with physical disabilities interact more naturally with their learning environment. 

Elizabeth: Thanks so much EVI. That was really fascinating to hear your predictions. I really appreciate it. I'd love to know from a teacher perspective, how do you see AI might assist teachers in implementing universal design for learning in 2025? 

EVI: By 2025, AI will be a powerful ally for teachers implementing UDL principles. It will automatically generate multiple versions of learning materials, create instant translations, and suggest differentiated teaching strategies based on real time student engagement data. The technology will help teachers quickly adapt content into various formats, visual, auditory and interactive, ensuring every student can access the curriculum in ways that work best for them. 

Elizabeth: I wonder if you could tell me if someone was just starting out learning about UDL and was applying it to their teaching practice. How do you think AI might support them to do that in 2025? 

EVI: For academics just starting with UDL in 2025, AI will serve as an intelligent mentor, offering real time suggestions for making content more accessible. It will analyze their teaching materials and automatically highlight opportunities to incorporate multiple means of engagement, representation and action. Think of it as having a knowledgeable UDL coach available 24/7, providing practical suggestions to transform traditional lessons into inclusive learning experiences.

Joe: And we've got that to some extent already in the AI app LUDIA, which was created by Beth Stark and Jérémie Rostan, I think in 2023. So if you haven't looked at that, just Google LUDIA. It's one of the Poe AI apps, but it's very, very good for creating and helping educators leverage UDL. So as a student, EVI, studying at university, how will I be able to leverage AI better for my studies in 2025? And I want specific app examples and recommendations from you.

EVI: By 2025 you'll likely have AI study companions, like upgraded versions of Anthropic's Claude and GPT, integrated directly into your learning management systems. These will offer real time essay feedback, create personalized study guides and help break down complex concepts. Tools like Otter.AI will provide enhanced lecture transcription with automatic concept linking and study note generation.

Apps like Notion and Evernote will have built in AI that organizes your notes and creates interactive study materials. The key will be using these tools to enhance your learning, not replace critical thinking. 

Joe: What about students with learning difficulties? Students from disadvantaged backgrounds? How will AI help them in 2025? 

EVI: By 2025, AI will offer personalized learning tools that automatically adapt to different learning styles and needs. For students with dyslexia, AI will provide real time text to speech with adjustable reading speeds and highlighting. For multilingual learners, instant translation and cultural context tools will help bridge language gaps. AI will also offer smart note taking assistance that can restructure information into visual formats, mind maps or simplified explanations based on individual learning preferences and accessibility needs.

Joe: So EVI, this isn't a question, but we are asking for a response from you. We'd just like to say thank you for your insights and interestingly they did match kind of what we have been discussing over the past few episodes. So that was good to hear.

EVI: You're very welcome. It's been fantastic discussing these exciting developments in AI, UDL and accessibility. The future looks promising for creating more inclusive and personalized learning environments. Thanks for having me on the ILOTA Things podcast and best wishes with your important work in education.

Elizabeth: I'd love to pull out two points from that conversation. One of them was the AI reminding us that AI is not a replacement for the learner but is a complement to the learner, and I think that's a really great mindset to take forward. I also love that EVI's summary is really that the future is looking bright for inclusion and for AI. To see how many advancements have taken place in just one year, can you imagine where we're going to be by this time next year?

So now, thanks so much, EVI, for that conversation, and a huge thank you to all of you here, every single one of you who has supported us, who has listened to each of these podcasts that have been developed in 2024. Before wrapping up, we'd really just love to give you that really huge thank you. Without you supporting us here, without you engaging, the field wouldn't have been as developed as it was, and we wouldn't have had so many rich conversations.

Now, we've really tried to provide a range of information that we hope has been useful across our time together and really helped to identify some of those inclusive learning opportunities through AI. There is so much more to talk about, though, and so much more for us to explore together. But we really hope you get a chance to take some time over the end of year break to have a play with some of these tools and, of course, let us know how that play goes. What do you explore? What are you finding out? What have you discovered? That can all form part of our conversations in 2025. So, we're really eager to hear from you.

As always, you can find the links to the tools that we've discussed in this episode on the ADCET website at www.adcet.edu.au/ilotathings, or you can get in touch with us via email at feedback@ilotathings.com.

Now, before we do an official sign off, I think it's a really great point to also share a couple of very funny bloopers, so I'm going to share those anecdotally. What you may not have realized throughout this series is that I always, always have trouble getting the letters to come out of my mouth the right way around for ChatGPT. I naturally say GTP. So there have been many moments in this podcast where we have all burst out laughing and I've had to do a retake and really consciously spell out GPT. So that's one of my favorite bloopers, along with the number of times that I keep forgetting whether we have a forward slash or a backslash in our website address. So I really wish we could share those outtakes with you. Maybe we will in the future, if you want a bit of a laugh. It's been a lot of fun.

So a big thanks as well to my co-hosts, Joe and Darren, who always turn up authentically as themselves and always let me turn up authentically as myself too, and have these really fun conversations together. So for the last time in 2024, I'm going to hand over to Joe for our sign off.

Joe: Thanks, Elizabeth. Yeah, I mean, I've enjoyed this and I know Darren has too. You know, I look forward each week to my 8 o'clock start in Dublin here, which is kind of, I think, what, 7pm or 6pm your time. So we really are kind of circumnavigating the globe with this podcast. But, you know, this is one of the things that the technology allows us to do now, and it's amazing, isn't it, that we can sit in our respective houses dotted around the globe and create something like this with friends that we've never even actually physically met. So, yeah, I mean, I hope everybody's enjoyed our journey and we're looking forward to continuing it into 2025.

You know, we've tried to give you a glimpse back in this episode into 2024 and into that kind of Venn diagram of AI, UDL and accessibility. And, you know, riffing off what Elizabeth said, if you play with tools, or if there are any particular topics that you would like us to explore in a future episode, just drop us an email, get in touch. We would love to hear from you. And we're always, you know, up for new discussions, or if you know of somebody that we should invite on the podcast as a guest, that would be absolutely great.

Darren: And look, I'd just like to also, you know, mirror what's been said by Elizabeth and Joe and thank them for their time this year. I know it's not easy; we've all got very full plates. But look, it is a very interesting area. I think it's a crucial area at the moment; there's a lot of change happening in a small amount of time. We've been trying to make that as clear and as simple as possible; we're trying to simplify a very complicated and complex area. And as we pointed out today, there are some dark sides to some of this technology as well, but I think we're kind of a bit of the eternal optimists too, seeing where it's going. I think there's more good to come from this if we use it well, and that's on all of us to do. And the way you can do that is to go and play with the tools; as we're all emphasizing, go and have a play.

So, from me to Joe and Elizabeth and to all the listeners as well, thank you very much for listening. We hope that you can join us again in the new year because there's so much more of this to come as we continue to explore ILOTA Things.

Till then, take care and keep on learning. 

Elizabeth: Bye. 

Joe: See you from Dublin.

Announcer: Thank you for listening to this podcast brought to you by the Australian Disability Clearinghouse on Education and Training. For further information on universal design for learning and supporting students through inclusive practices, please visit the ADCET website. ADCET is committed to the self-determination of First Nations people and acknowledges the Palawa and Pakana peoples of Lutruwita, upon whose lands ADCET is hosted. We also acknowledge the traditional custodians of all the lands across Australia, and globally from wherever you may be listening to this podcast, and pay our deep respect to Elders past, present and emerging, recognizing that education and the sharing of knowledge has taken place on traditional lands for thousands of years.