ADCET

ILOTA Things: Episode 8.1 - Is AI the Perfect Companion for UDL?

Darren Britten, Elizabeth Hitches, Tom Tobin · Season 1, Episode 8

Welcome to ILOTA Things, the ADCET podcast where we explore Inclusive Learning Opportunities through AI.

This is the first part of our episode titled ‘Is AI the Perfect Companion for UDL?’, where we are joined by special guest Dr. Tom Tobin as we delve into the rise of AI and the potential it offers for practitioners and students in delivering more equitable and accessible education.

More information, including episode notes and links, is available on the ADCET website.

Announcer: Welcome to ILOTA Things, the ADCET podcast where we explore Inclusive Learning Opportunities through AI. In this series, we'll explore the exciting convergence of universal design for learning (UDL), artificial intelligence (AI), and accessibility, and examine ways in which we can utilise emerging technologies to enhance learning opportunities for educational designers, educators, and students. Now, here are your hosts, Darren, Elizabeth, and Joe.

Elizabeth: Hello and welcome from whenever, wherever and however you're joining us, and thank you for your time as we investigate a lot of things, that is, Inclusive Learning Opportunities Through AI. My name is Elizabeth Hitches, and joining me on the Artificial Intelligence, Universal Design and Accessibility pendulum are my co-hosts, Darren Britten,

Darren: Hello everyone.

Elizabeth: and joining us in spirit, Joe Houghton, who unfortunately can't be with us today. Now, in this episode, titled ‘Is AI the Perfect Companion for UDL?’, we're going to chat about UDL and the rise of AI as a tool that provides opportunities for both practitioners and students to enhance their Universal Design for Learning, or UDL, practice.

Now, if we were going to talk about UDL for practitioners, you would be very hard pressed to find a more respected and more recognised name in the UDL sphere than Dr. Tom Tobin. Tom is a founding member of the Center for Teaching, Learning and Mentoring, that's CTLM, at the University of Wisconsin-Madison, as well as an internationally recognised scholar, author and speaker on things like quality in technology-mediated education, copyright, evaluation of teaching practices, academic integrity, accessibility and universal design for learning. And we are really privileged today to be joined by Tom as we dive into this topic.

So first off, a huge welcome Tom, and thank you so much for joining us.

Tom: Hello, Elizabeth and Darren, and a lot of listeners, from Pennsylvania in the United States. Thanks for having me on the podcast. It's a pleasure to be here with you all.

Elizabeth: Thanks so much Tom. Now you've been in the field of UDL for a number of years now, both as an inclusive practitioner yourself and in training and encouraging other educators. So I was wondering if you could help set the stage for our listeners on your history with UDL and how that's developed over time.

Tom: Well, you have to imagine that it's 1996 and I have a lot less gray hair; I was a lot younger back then. I was working at a two-year college in Pennsylvania, and they had hired me to help them create their very first online courses. I had a conversation with a business instructor, his name was Marty, and he came to my office and said, “I think online is junk. I don't think it can actually work the same way that in-person classes can, and I still want to have a job in 10 years, so teach me how to do it anyway”. His candor was refreshing. So of course I said, sure, I'll help you out. What you don't know about Marty is that he had gone blind in his 40s due to undiagnosed, and so untreated, diabetes.

And, I'll put air quotes around this, he didn't “know how” to be a blind person. Right. He didn't get around with a cane. He didn't do touch typing. He didn't understand braille. He didn't have a service animal with him.

But since he had been a sighted person, the short version of the story is that we hired a bunch of graduate students from a local university to be Marty's eyes and ears, and I helped that college adopt Blackboard version one. So, yes, listeners, I am actually that old. If you're picturing me, I'm a white man with gray hair, glasses, and a giant black mustache.

Marty was really successful in teaching his online business courses. The graduate students would read out what the students had written in the discussion forums or turned in with their papers, Marty would say his feedback, and the graduate students would translate that into messages back to the learners.

That was a success until I had a vice president standing in my door saying, we have to shut Marty down. And I said, why? It's actually working. And he said, do you realize how many privacy laws we are violating by letting these students see, you know, all the student work? And we had to stop.

It was that failure that gave me enough pause to think, okay, if it was this challenging to help one person, who else is out there whom we are not serving well, or maybe not serving at all? And I started to figure out, okay, there's people with work responsibilities, caregiving, and family needs. They just lived far away from campus and couldn't come into our Monday through Thursday, 10 in the morning till 2 in the afternoon schedules.

And I started to recognize that our biggest barrier isn't disability or a work schedule or the family commitments. Our biggest barrier, collectively, is the clock. It's time. How many of us, listeners, got our educations juggling study with all of those other responsibilities, right?

And the folks at CAST, the Center for Applied Special Technology, they just go by the acronym now, it's all hip, it's just CAST, those neuroscientists figured out that when we learn things, we have to activate different chemical pathways in our brains. And they took that process and simplified it and said, okay, at each stage of the learning process, when we're getting started, when we're taking in information and trying to find stuff out, and when we're practicing and trying to show what we know, if we have more than one way to do it at each of those stages, we can make choices. And we have to, as learning designers, optimize those choices. That's the basics of universal design for learning.

Listeners, you already know most of that, but that was news to me back in the late 1990s. I was one of the first and few higher education people to know about the work that CAST was doing. And over the years, I've become what the British might call the loyal opposition for CAST. They've gone in one way, with a heavy focus on teaching developing brains in children in elementary and secondary school, and I've gone in a different direction, trying to figure out, okay, if we're working with adult learners whose brains and social-emotional learning spaces are already developed from a neurological perspective, where are the barriers for them? And it turns out that we can see variety and variability in our learners coming to us from a lot of different social contexts or economic contexts, different levels of preparation.

And so in the research that I've been doing, I've been relying on lots of really good thinking that people have been doing around the world. I've benefited immensely from working with folks like Dara Ryder in Ireland and John Perrow in South Africa, and I have to give a special thanks to all of my Australian hosts in Melbourne and Perth and Adelaide; I just came through on a UDL speaking tour in the summer of 2024. So the work that you folks are doing is near and dear to my heart, and it's one of the reasons I really leapt at the chance to come on this podcast. So that's kind of where my path has gone in UDL over time. And I'd love to talk about artificial intelligence, large language models, and generative AI with you today.

Darren: Well, thank you, Thomas, and it's perfect that you're here. And look, you just brought up the fear that was there with Marty, and I think we're still in those early stages with this AI. There's a lot of fear about academic integrity, you know, the other AI, that sits there in companion with it. So it's no surprise that it's had an impact on education, particularly in the higher education sector, although we're still unraveling what this means for teaching and learning. While we may see and often discuss the potential for AI in reducing barriers to education and increasing the learning opportunities for students, hence the name of this podcast, I wanted to get your take on the practical applications that you've witnessed AI bring into this sector, and how transformative is this new technology?

Tom: Darren, I have to confess right away that I am not the world's greatest expert on generative artificial intelligence. In fact, I'm really late to the game, if you want to put it that way. One of the challenges for me is that artificial intelligence is neither artificial nor intelligent, if we're being really strict about the definitions.

So artificial intelligence, as most people understand it, is almost another manifestation of assistive technology. Those of you who have a bunch of gray hair like me, or who have been around for a while, might remember that back in the day, it was people who had disability barriers in their environments who used assistive technology to help level that experience, or to get access to information and skills and environments that they otherwise wouldn't have had access to. So back in the late 1970s or early 1980s, if you saw somebody holding a magnifier really close to their head and getting really close to a piece of art in a museum, you would think, oh, here's somebody who has a visual challenge and is using assistive technology in order to be able to see well. If you saw somebody walking down the street talking out loud with nobody near them, you might think this person might not be firing on all cylinders or might have a mental challenge, right? But you see that today: somebody's got their earbuds in and they're talking to somebody on a cell phone call. So a lot of the assistive technology that started out as a means of addressing disability challenges in the environment is now in everybody's pockets, part of the tool set of your mobile phone.

Now, when we think about artificial intelligence, it's a similar set of tools. You need to have a little bit of expertise, a little bit of context, and a little bit of training in order to use those tools effectively. It used to be that, you know, anybody could use their mobile phone, but you had to sort of know all the applications. And over time, as those applications were developed, you needed less and less specialized knowledge, and you could just sort of fumble your way through, because the design of the tools themselves became more user-friendly.

Now, we're at the early stages with artificial intelligence right now, and there's a fun game that we like to play. If I'm going to tell a story, I'd say, you know, 'Darren and Elizabeth went to the beach and …', right, and then I just pass to Elizabeth, and her job is to add one word. What would you say?

Elizabeth: Had. 

Tom: Okay, so Elizabeth and Darren went to the beach and they had, and then we'd pass to Darren and he'd add one more word.

Darren: Fun.

Tom: They had fun. And then it would come back to me and I'd say, 'but', and so on and so forth. And, you know, listeners, you can do the same thing in your own head, the add-a-word game. That's exactly how artificial intelligence works. It is a prediction engine, meaning it just reads through vast amounts of information, and then when you say something like, yeah, we went to the beach, it thinks, okay, what are all the other things I've read about the word beach? And it says, here's what's most likely next. It's predictive responses.

And those of us who already know how to compose narratives, those of us who already know how to do research, those of us who already know how to design learning experiences in universally designed ways, we can use artificial intelligence in an informed fashion. It helps to shortcut basic steps for us. The challenge is if you don't have that training and expertise, at least right now, and you give artificial intelligence a prompt, you want it to create an image for you, a business plan, an outline, an alternative version of some content, you might not know when it's messing up, because it's just telling you what's most common or what's most seen in its database. It's not telling you something that it's actually creating.
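For listeners who like to tinker, here is a minimal sketch of Tom's add-a-word game in code: a toy next-word predictor built from word counts. The corpus and names are made up for illustration; real large language models predict over subword tokens using neural networks, not simple counts, but the "what usually comes next?" logic is the same.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for the vast text a real model reads.
corpus = (
    "darren and elizabeth went to the beach and they had fun . "
    "they went to the park and they had lunch . "
    "we went to the beach and they had a swim ."
).split()

# Count which word follows each word (a bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# Play the add-a-word game automatically, starting from "they".
sentence = ["they"]
for _ in range(5):
    sentence.append(predict_next(sentence[-1]))
print(" ".join(sentence))  # prints: they had fun . they had
```

Notice that the predictor can only ever echo what is most common in its data, which is exactly Tom's point about needing expertise to spot when it's messing up.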

So, you know, when we're thinking about AI for universal design for learning, how transformative is it? Not much yet. Because we haven't yet, in the whole development of artificial intelligence, reached the point that we've reached with the apps on your mobile phone, where anyone can use it because the interface itself is designed for those everyday uses. Right now, you have to have a little bit of knowledge and expertise to be able to double-check that what the AI is telling us is actually based in reality, based in the research, is effective. So that's a starting point.

Elizabeth: I think that's a fantastic way to frame it, to look at the possibilities, but also with that recognition of the constraints. And so given that we've got those two aspects, possibilities and constraints, how are you seeing people or institutions responding to this or even not responding? 

Tom: Well, Elizabeth, it's kind of all over the map, as you might expect. There's a lot of moral panic. How are we going to stop people from cheating with this? Or, you know, there was a story recently that an artificial intelligence model was able to pass the entrance test for a prestigious university, and, you know, is the AI going to be smarter than us? And the answer is, the AI is not actually thinking. We've been through these kinds of panics before, right? If you recall the advent of the Internet, when people could just copy things and paste things, suddenly you had, you know, evil houseofcheat.com, where you could buy a paper and hand it in to your professors, that kind of thing.

The challenge for us is thinking about everyday life circumstances. I work in higher education, but I'm also paying attention to the workplace, our everyday lives, all the different places where we can use technology to help us, help us take shortcuts, or even do some work with us and for us.

So that moral panic is kind of balanced against boosterism. Hey, here is AI. How can you use it? And that boosterism isn't yet grounded in need cases. I'm not a big fan of saying, here's a new tool, what uses can you put it to? That's a legitimate question, but it's typically not a very useful one, especially in the early days of an industry or a tool. That's kind of the wrong end of the telescope through which to be looking. If we flip that around, what are the need cases? Where do I need to take an easier path? Where can I see that, if I had an easy way, for example, to make lots of good alternative versions of content, it would save me time and effort and it would lower access barriers for the learners, or the folks in the workplace, or my colleagues who are following my podcast or my blog?

Those need statements are ones that AI can certainly help us with. Too often, we kind of assume that the people who are going to use these tools have the background knowledge and that foundational skill set. Maybe some of them do and maybe some of them don't. And when we're thinking from a lens of how do we gauge or assess how useful artificial intelligence is for universally designed materials or engagements or ways to show what you know, there's a useful continuum. I'm thinking here of José Bowen and Eddie Watson's book Teaching with AI, which is a very boosterish kind of book. They're very pro artificial intelligence, and they want to give people scenarios for teaching effectively with large language models, predictive engines, and generative AI. They talk about where there are parts of the curriculum, or the body of knowledge in our field, where we might want to encourage people like students to use artificial intelligence tools.

So I'd like to expand on that spectrum just a little bit. And listeners, do this thought experiment with me here. Think about the work that you do. What are the foundational ideas in that work? What are the concepts, the practices, that people have to know in order to be good at the work that you do? So if you're a designer, if you're a nurse, if you're a carpenter, whatever that is. And now think: where would we encourage people who were learning how to do that work to use those artificial intelligence tools? Where might we require it? And then, back down along that spectrum from required to encouraged, where would we just permit them to use the tools, but maybe we wouldn't use them ourselves in a professional capacity? Where would we tolerate people using them, but maybe it's not up at the front of our conversation? Where might we actually discourage people from using them, but if they did, we wouldn't yell about it? And where would we prohibit people from using artificial intelligence tools? So I'm thinking of a continuum from 'yeah, you have to' to 'you'd better not', and everything in between.

So there's a wonderful example from our colleague Dan Sarofian-Butin. He wrote in EDUCAUSE Review about how we all need to get on board with AI, because our students are already using it and we don't want to get left behind. In higher education this is a challenge here in the United States and, I'm imagining, across the world. That's a technology-utopian point of view: here's the tool, go find a use for it, like we talked about a minute ago. And Dan is a good foil for someone like John Warner, who wrote in Inside Higher Ed, I think the article is called 'Not So Fast on Teaching AI Skills'. John talks about the idea that coming out with that tool on a tray and saying, go find a use for it, is a bad way to create instructional materials or engagements, and it's really difficult to assess how effective that is. What are we actually assessing? Is it that the tool gave us the right information and we were able to double-check it and it saved us some time? Or are we assessing that students know how to use the AI, and therefore we tell them to use the tool in this way and then they do it just that way? That's compliance, right? So we don't want to put our students in situations where they're merely spitting back what we ask them to do. So, you know, to your original question about how people and institutions are responding or not responding: that ethics, that practice, is still being written and argued right now. And it's a fun space to be in, because people haven't really made up their minds yet.

Darren: Exactly right. You know, what's the saying? To a hammer, everything looks like a nail. Look, there are many people that are still in this, I suppose, muddiness, if I can use that analogy, that surrounds AI and the discussion on regurgitating fact versus demonstrating learning. And I think it'll take some time, as happens with these technologies, for that muddiness and turbulence to settle and the noise to fall to the bottom, and then a clearer picture will emerge.

I remember, much as you touched on, writing things for students along the lines of 'you can't just Google that'. That was the big fear, you can't just use Google, and we've kind of moved past that. You know, we let people use the Internet, et cetera, and AI is no different in that space; the fear is still driving a lot of that discussion.

In time we'll get that clearer picture, and some clearer examples of where AI can add value and provide real learning opportunities for educators and for students. Regular listeners will know that in working with students with disabilities, particularly in using AI, I see the impact AI can have as an assistive technology in supporting, you know, that individual learner agency. But more broadly, I suppose, how do you see AI supporting students in ways that may not have been possible, or just not practical, previously? You touched on Marty; if you'd had some of those tools in the 90s, how would they have been able to assist?

Tom: I might not actually have had a career had these things existed in the 90s, and that would have been absolutely splendid. But artificial intelligence, Darren, allows our learners to have an individual tutor who can personalize study for them. With the 3.0 updates to the UDL guidelines, they switched the language around the primary goal of universally designed experiences and said, we want to create greater learner agency. And you've used that phrase yourself.

I'd encourage everybody, listeners, if you're not already doing it, create a few kinds of activities in your courses or your training or your engagements where you ask individual learners to go engage with artificial intelligence tools and say, here's the text of the chapter that I'm reading, here's the link to the YouTube video that I had to listen to; now create some flashcard quiz questions for me from this.

We all know in educational development and educational science that one of the worst ways to study is to print out your materials, grab a highlighter, and just make it look like a circus tent, right? With color here, color there, and so on and so forth, or underlining, or whatever it is. And one of the best ways to practice is spaced practice, interleaved practice with self-quizzing. Most students don't do that very often, because if you don't understand the material to begin with, you're a beginner, you're learning things, it's still challenging. And then throw some disability barriers in there, you know, dyscalculia, dysgraphia, dyslexia, you name it, whatever learning disability or barrier you were talking about. Then it's very difficult to come up with those self-quizzing questions.

And this is something that artificial intelligence tools are really good at right now. They can summarize, say things back in different words, and ask intelligent-sounding questions about the materials. And what that kind of activity does is keep the requirement on the learner to do the interrogation and do the thinking. It's not doing something in place of the learner; it's supporting the learner by coming up with questions that the learner then asks themselves, and says, oh, do I know an answer to this? How well can I say this back? So that's one thing I'd encourage everybody to do. Ask the AI for some questions about the material, so I'll know how to respond or I'll know it better.
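For educators who want to try that flashcard activity as a script rather than in a chat window, here is a minimal sketch assuming the OpenAI Python client; any large language model service with a chat interface would work the same way. The file name, model name, and prompt wording are illustrative choices, not recommendations.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Illustrative input: the chapter text the learner is studying.
with open("chapter_3.txt", encoding="utf-8") as f:
    chapter_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You help students quiz themselves. Write flashcard-style "
                "questions with short answers, covering only the key ideas."
            ),
        },
        {
            "role": "user",
            "content": "Create five self-quiz flashcards from this chapter:\n\n"
            + chapter_text,
        },
    ],
)

# Print the questions. As the discussion notes, a human with subject
# expertise should spot-check these before learners rely on them.
print(response.choices[0].message.content)
```

The learner still does the thinking: the script only produces questions, and the answering, reflection, and spot-checking stay with people.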

And here's the missing piece, though. We shouldn't just trust the AI tools to do that. We should actually have some kind of check, even if it's a spot check, where an actual human being with the skills, like the instructor or our teaching associates, looks at what the AI has generated and says, okay, the AI is leading you in a good direction here. Because if you're asking the artificial intelligence engine to give you some study questions for a history test or about a structural engineering problem, it's really good at that. If you're studying Hungarian, it doesn't do very well at all, because it has a very small sample set that it's been exposed to, and it doesn't really handle smaller language groups very well. Spanish, French? Yeah, bring it on. Icelandic, Finnish, Estonian? Probably not. So having that double check, having that human being who knows what good questions would look like, just to say, thank you for sharing this with me, use these questions, this is awesome: that's a necessary step, at least for right now.

Or, if your learners are responding well to personalized tutoring, you can make that part of the grade for the course, make it one of the required elements. So, you all out there, listeners, you don't necessarily have to assess the learning as an intensive effort coming only from the instructor. Artificial intelligence now allows us to do that personalized learning and tutoring in ways that are good enough, ways that are better than not having it at all, and ways that are usually fairly reliable and trustworthy in our disciplines. There are some exceptions, though, and we talked about those.

The other thing that we can do is ask our learners to engage in reflective techniques when they're learning things. We say a lot about reflection on learning being where learning gets cemented. And especially for folks who have different kinds of disability barriers in their environments, it's often that pause for reflection that allows unstructured time for just grinding through the cycles, because learning is challenging regardless of the circumstances from which you come at it.

And even before your learners know something, before you've taught them something or told them or showed them, get them to do a pretest. Ask artificial intelligence to help create a three-question quiz in the middle of something that counts for, you know, a check-plus or check-minus grade. There's no individual grade associated with the activity, but maybe it's worth 5% of the final grade if you do all of these little exercises. Breaking up assessment into smaller pieces that don't each individually count for enough to, you know, wreck a letter grade from one to the next, that is assessment as learning, and artificial intelligence can help us to break things up in that way.

That also lets us do two awesome things. One, it teaches our learners that they can use reflection and assessment for and by themselves as a means of checking their progress, even when they're not in formal learning situations. So when we think about UDL, how many times have we heard that employers want better critical thinking skills out of our graduates? And we try to design our experiences in colleges, universities, technical colleges, and our K-12 programs to be a little bit more open, a little bit more forgiving, a little bit more multi-format, so people have optimized choices. And the skeptics among us come up and they say, you're not going to get that in the workplace. It's going to be do it my way or you're fired. And my response is, have you been in a workplace recently? Right.

There's more flexibility in the workplace now than there ever has been. Hey boss, this is taking more time than we thought it was going to; can we have more resources for this? Can we have more time? Can we move people around to bring skills in? There's lots of accommodation for individual circumstance that goes way beyond just disability accommodation in the workplace today.

That's really valuable too, because employers have to do all kinds of on-the-job training to get our graduates up to speed in their fields, even after they get their degrees. And they really value the people who come out of our programming knowing how to do assessment as learning, rather than just assessment of their learning, and knowing how to create things for themselves. That's an opportunity for artificial intelligence that we're just exploring now.

Also on assessment as learning: if I've been assigned as an instructor to teach a 300-person lecture course, and by the way, listeners, don't do that to your people, give them an opportunity to have human interaction with their instructors, it's very common now that, you know, I'll have a couple of teaching assistants, if I'm lucky, in that big lecture hall, and it's difficult for me to give individual feedback to each of my learners in a meaningful way. So we should err on the side of caution by bringing the humans back into that loop. It's not only artificial intelligence that stands between instructors and learners, tutors and learners, support staff and our learners; it's also just size. You know, how do we know if our students have learned anything? We can utilize artificial intelligence to help them, but then we also have to double-check with them. We can ask learners to do application tasks after they've run something through AI, when they're at a good level of skill to do that. And if they're not, you know, if they're just starting out with us, then maybe use artificial intelligence less as a means of doing the thing, because the AI will take over, or do things in place of the learner that we want the learner to know how to do.

Elizabeth: That's such an interesting conversation to really think through in terms of, you know, how do we position AI in that learning process? And I think you make such a good point that we need our students to have that critical thinking space, and space to really develop those skills for the workforce, while also recognizing the opportunities where AI can help them to revise content and show them how that's possible. It can show them how to have that revision happening in a way that perhaps may not happen for all students at home. You know, some students may have peers who have gone through higher education, or parents who might be familiar with that content, so they've got their own home quiz master, but not everybody has that. So it's worth thinking about the opportunities that could really open up in those spaces now.

Darren: And that brings us to the end of part 1 of our discussion with Dr. Tom Tobin. Please join us for part 2 as we continue to probe the question ‘Is AI the Perfect Companion for UDL?’. Until then, take care and keep on learning.

Announcer: Thank you for listening to this podcast, brought to you by the Australian Disability Clearinghouse on Education and Training. For further information on universal design for learning and supporting students through inclusive practices, please visit the ADCET website. ADCET is committed to the self-determination of First Nations people and acknowledges the Palawa and Pakana peoples of Lutruwita, upon whose lands ADCET is hosted. We also acknowledge the traditional custodians of all the lands across Australia, and globally, from wherever you may be listening to this podcast, and pay our deep respect to Elders past, present and emerging, recognizing that education and the sharing of knowledge has taken place on traditional lands for thousands of years.