ADCET
ILOTA Things: Episode 8.2 - Is AI the Perfect Companion for UDL?
Welcome to ILOTA Things, the ADCET podcast where we explore Inclusive Learning Opportunities through AI.
This is the second part of our episode titled Is AI the Perfect Companion for UDL?, where we continue the discussion with Dr. Tom Tobin looking at what we can do to get the most out of AI and how we can incorporate it into our UDL practices.
More information, including episode notes and links, is available on the ADCET website.
Announcer: Welcome to ILOTA Things, the ADCET podcast where we explore Inclusive Learning Opportunities through AI. In this series, we'll explore the exciting convergence of universal design for learning, UDL, artificial intelligence, AI, and accessibility, and examine ways in which we can utilise emerging technologies to enhance learning opportunities for educational designers, educators, and students. Now, here are your hosts, Darren, Elizabeth, and Joe.
Darren: Hello and welcome from whenever, wherever and however you're joining us and thank you for your time as we investigate ILOTA things, that is Inclusive Learning Opportunities Through AI. This episode is part 2 of our recent discussion with Dr. Tom Tobin titled ‘Is AI the perfect companion for UDL?’. In part 1 we discussed the emergence of generative AI and some of the history and crossover with Universal Design for Learning. In this episode we’ll be looking at what we can do to get the most out of AI and how we can incorporate it into our UDL practices. Back to you Elizabeth.
Elizabeth: Now, thinking about access, we've had a few discussions on this podcast before, being really present in our minds that not all students actually have access to digital technology. So we have this digital divide already, but now we have a niche within that digital divide, like an AI digital divide. We've also been thinking about bias: what bias is within, and then replicated by, the data that the AI is drawing on and producing. And I know that you've been part of many of these discussions across various institutions. Given the ethical challenges and assumptions around AI, what might the pitfalls be? What things should we be really considering when we are looking at adapting and incorporating AI into our UDL practices?
Tom: It's a wonderful question to ask because there are so many different answers in terms of challenges or considerations. I don't want listeners to necessarily come away from this conversation thinking, oh, Tom doesn't want us to use AI at all. At the same time, we should have a knowing understanding of it. There's a recent book called AI Snake Oil by two computer scientists who actually wrote some of the first artificial intelligence large language model algorithms, and it talks about what artificial intelligence can and can't do. There's a lot of hype out there right now: you know, bring AI into your business and we'll give you hallucination-free results, and so on. Those are artificial intelligence models that have been trained in closed circumstances on industry-specific data sets, and they work really well. And almost none of us in higher education have access to those.
What we've got is ChatGPT and Microsoft Copilot and Claude.AI, and the ethics here start with the water and electricity usage of the tool itself. Every time you create a flying unicorn cat riding a rainbow through space, that's approximately three liters of water used. Now, I'm using a frivolous example to heighten a point here. But the other part of that is the ethics of training large language models and generative AI models on copyrighted content without consent. The folks at the New York Times have just taken legal action against the makers of Perplexity, asking them not to train their large language model on the Times' content. Those lawsuits and those conversations are, I don't want to say just starting, but they're starting to get prominent enough, and have big enough actors, that people are taking notice. If I ask an image generation artificial intelligence to give me a drawing that looks like the Indiana Jones movies, it's actually going to give me something that looks remarkably like a still from an actual Indiana Jones movie, because it knows that from its database. And the ethics there is something that people are still puzzling out.
There's also a high prevalence of racist, sexist and pornographic inputs into the most general models, especially when we're thinking about images. If you ask something like Microsoft Copilot for an image of a teacher, say an elementary school teacher, that's almost always a white woman. Give me an image of a college professor: that's almost always an older white man. Give me an image of a doctor: that's almost always a white person. And give me an image of a black doctor treating black patients? It kind of can't do it, because it doesn't have many such images in the databases or the models on which it's been trained.
There's also, as we talked about earlier, the sort of haves and have-nots: who can afford to use customized and targeted data sets and tools, the commercials about hallucination-free results for your business. You pay a lot of money to get access to those artificial intelligence tools, or you're part of a research institute at a college or a university. The rest of us use the free tools, right? We use Claude.AI, or maybe Microsoft Copilot is part of what our uni has in its set of tools. And those tools aren't as cutting edge, they haven't been vetted as much or as often, so we have to be really on the lookout for the kinds of bias that can creep in there. There was a recent case where folks created a fake reporter, ostensibly reporting from the war in Gaza. It was supposed to be a CNN reporter, and it was a woman reporter with blonde hair in what looked like a camisole, with her cleavage prominently exposed, as reporters would not do in a war zone. And the sort of words that no one would speak were coming out of her mouth. So the ethics of creating content that looks real and sounds real, but isn't, and is intended to push a point of view, that's a real challenge.
There's also one last challenge I'll mention here, and it has to do with diffusion of innovation. Darren, when you and I were talking earlier about the apps on people's mobile phones, early apps were designed for people with technical knowledge: you had to know how the apps actually worked in order to operate them, and there weren't a lot of user manuals lying around either. We are now 25-plus years into the creation of mobile apps, and it's gotten big enough as an industry that we're designing for everyday folks. With the artificial intelligence models that we have, we haven't reached that diffusion point. So we're now in a space where there are a lot of people asking us how to train people to write good prompts, or how to create good questions for the machine, so that the artificial intelligence tool gives us back good information or things that we want. One use case: we've got all these pieces of content, like PowerPoint slides and Word documents and PDFs, and we want to have those read out loud as alternatives for our learners. That's something where we can learn how to prompt a system to just do that over and over and over again. Go into this learning management system and, for everything you find there that is text only, make an audio version in a voice that people can listen to.
But we need to move beyond that into how we shortcut processes in which we already have expertise. And when we're thinking about that, we think not only about how our students use artificial intelligence, or how we set them up to do it, but also about how we use it as designers. And that's where that UDL component is going to come in.
Darren: It's a really good point, and I think about the challenges that are there when, talking about snake oil, every tool at the moment seems to come 'with added AI'. It's the new marketing buzz, the new sales pitch, and there's a lot of people being sold on that as well. It has this AI, but again, where's the human aspect in that? Are we paying the big fees to get stuff that is hallucination-free? Look, and to your point there with data sets and training, an example that we've asked listeners to try, and a few have shared an image with us, is to ask one of the image generation models to generate an image of curb cuts. Should be straightforward; there are lots of images of these. I haven't seen one correct so far. It puts in wheelchairs, it puts in gutters, it puts in roads, it puts people on the road, it puts in sidewalks, but it never puts in a curb cut properly. It can put some, you know, tactile dots on the ground, it can put in all of those things, but it doesn't know how to formulate it in the sense of how people use it. It's got all the elements, it just doesn't know how to put them together, you know. And again, that data set, you know, is limited.
Show me somebody with a disability. It'll be somebody in a wheelchair or a blind person with a cane. Yep. There's no other reference there for it to really, you know, draw on. And so you have to start being really, really explicit. You know, show me a group of students studying, make sure that there are students of different ethnicities, make sure that there's students using different devices.
Make sure that, you know, you have to really think about it to get what you want out of that. And I think this is where it's becoming, you know, a little bit confusing and overwhelming for a lot of institutions and individuals that are certainly just beginning their journey with UDL. And the AI has come along as well, and it's got all these challenges with that. So without people getting too overwhelmed in this place, you know, is your Plus One approach still very applicable and can we use that with AI as well?
Tom: I'd encourage listeners to take Darren's question and turn it inside out. Many of you are familiar with the plus-one idea around Universal Design for Learning: if we're going to design multiple ways for people to get engaged and stick with us when things get challenging, give them more than one way to take in information and get content, and provide them with more than one way to show what they know, we can dive down into those three principles of Universal Design for Learning, which then turn into nine different guidelines, which in turn become 36 different considerations. And that in and of itself feels overwhelming for a lot of people, which is why I came up with that plus-one idea.
If there's one way that an interaction happens now, make one more way. Are you covering every single possibility? No, and purposefully so. There aren't enough people, funds, or hours in the day to cover absolutely every possible use case. And we can use that plus-one idea when we're thinking about the use of artificial intelligence in Universal Design for Learning. Are you using the artificial intelligence tools to do something that you don't know how to do yet? That's a bad use case for AI in UDL, for students as well as for designers and instructors. Are you using the artificial intelligence tool to help you skip some early steps that you already know how to do, or to support you through the last part of a journey or process that you can already do, or would already be using another kind of assistance to do, using it in its truest form as assistive technology? That's a really good use case for AI in UDL.
And then the shining, simple way to start using artificial intelligence in your Universal Design for Learning practices without getting overwhelmed is to ask: where are you overworked in your UDL practice? Or where is there so much work to be done that you haven't even started? So the example that we've been using all the way through comes up here again. In my learning management system, I have created I don't know how many hundreds of web pages, documents, files, audio clips, resources, things I've made for my students. And if I wanted to go back and follow, here in the United States, the Americans with Disabilities Act requirements for just basic accessibility, let alone universally designed environments, it would take me years. But we can ask an artificial intelligence tool to go find all of the text-based things there and create a spoken audio version of them. Will I be able to quality check absolutely every single one of those files? No. Can I tell if the artificial intelligence can pronounce Australasia correctly? Right. I might want to select a subset of those materials for an actual human double check. But once I've done quality control on some of them, I can be fairly confident letting the tool loose to help me with the base accessibility, so that I can then focus on more complex parts of universally designed experiences. So if we're thinking about having a floor of mere accessibility, where there are alternate versions of things and people have more than one way to get at the stuff, that frees us humans up to do creative things with UDL that the artificial intelligence can't yet do and may never be able to do.
The simplest thing we can do is give the artificial intelligence the sort of simple but repetitive work that would take us a long time and use that as a springboard or a bootstrap for us to be able to do work like addressing cultural barriers, addressing a sense of belonging in the design of our interactions, engagements and materials, which if you asked an artificial intelligence tool to say, help me, you know, with sense of belonging here, it will spit back things from books on belonging, but it won't know how to put them together.
And then the last simple thing that we can do is to use artificial intelligence tools as a means of self-reflection or of tool-use application. So we ask the artificial intelligence tool to do things like we started our conversation with: help me with a study guide, help me create questions for myself. And the other piece of that is, if you can describe a barrier that you're experiencing in your learning environment, either as an instructor or as a student, and you ask AI, has anybody else encountered something like this and how have they approached it, the artificial intelligence is usually pretty good at giving you a fair summary of what to do next. I'll end this part of our conversation with an article from Laura Czerniewicz at the University of the Witwatersrand in South Africa. She recently wrote an article called I've Been Hallucinated. She said that she was looking through an article that quoted her and another colleague, and it cited the article that she and the other colleague had written. And she said, oh, it's really nice that they quoted me here; I don't remember writing that. And sure enough, the artificial intelligence that the writer was using had fully hallucinated the quote from Laura and had made up a realistic-sounding citation, with her name on it, to an article that doesn't exist in an existing journal. So the risk that we run when we're thinking about Universal Design for Learning right now is really one of quality control. If we are asking artificial intelligence to create things for us or with us, we have to have some means of being able to say, yes, this is still the same thing, it's an alternate version of the text that I had; or, this is reasonably the same kind of stuff and I would trust putting this in front of my learners. So that's one caveat at the end, but the simple thing we can do: give it the busy work and let us do the more complex stuff.
Elizabeth: I find your conversations in this space just so insightful; they really help to sort all of those contrasting and conflicting things that we're hearing into a really clear plan for action. So I really appreciate it, and I'm sure all our listeners are going to really appreciate it too. And you know what I'd love to know: if we were to apply blue-sky thinking, the ideal scenario, what would AI and UDL look like in the future for you? And how can we steer ourselves towards that ideal future?
Tom: Oh, Elizabeth, that's a great question. So let's imagine a 15-week undergraduate college course. It's macroeconomics, let's say; I always go to macroeconomics because they talk about the Tobin tax in Europe. So, a 15-week, three-credit college course. Our colleagues at Wake Forest University have out there, for free, a course workload estimator. It's a wonderful tool: you can plug in how many pages we're asking our students to read in a week, how many pages we're asking them to write, how many minutes of video we want them to watch, and so on. And it will come back with an estimate of the workload for the students.
Now, you're asking your students to load themselves up: attending class, doing homework, studying generally, all of those kinds of things, and it spits back a number. In the average undergraduate three-credit course, the ideal is that students will spend 45 clock hours in class, whether that's in a physical classroom, in the learning management system, or in some other technology-mediated space, plus about three times that on work that they do beyond the formal class meetings. So you end up with about 180 total hours of effort for a class.
Listeners, why am I going into all this detail? If 180 hours is that golden ideal, the average in colleges and universities, and I'm going back to the IPEDS data in the United States here, though you have similar data in Australia and Southeast Asia, is actually somewhere in the neighborhood of 250 hours. We're already overloading our students. And listeners, if you've ever spent weekends grading like a fiend instead of going out and doing things you want to do, you realize this is a problem of our own design. We are all really good at adding things into our experiences for our students, and we almost never take anything back out. And this is where Universal Design for Learning paradoxically helps us. By creating those alternative paths for getting and holding people's interest, by creating more than one way that students can study, and by creating more than one way that they can take action or demonstrate their skills, we actually reduce the workload on them and on us. The things we're asking our students to do are also the things we're expecting ourselves to give grades and feedback on. Now, that feedback might not be marking up everybody's paper with red or green ink, but it might be construct-relevant feedback. And this is where artificial intelligence can help us figure out: are we asking people in our courses to do things just because we've always asked them to, or are they really tightly tied to the outcomes and objectives and the skills and knowledge that we want them to demonstrate?
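[Episode note: the workload arithmetic Tom describes can be sketched in a few lines of Python. The 15-week term, one weekly class hour per credit, and the three-to-one outside-work ratio are the rules of thumb from the conversation, not parameters of the Wake Forest tool itself.]

```python
def estimated_workload(credits: int, weeks: int = 15, outside_ratio: float = 3.0) -> float:
    """Rough total clock hours of student effort for one course."""
    in_class = credits * weeks           # e.g. 3 credits x 15 weeks = 45 hours
    outside = in_class * outside_ratio   # homework, reading, studying
    return in_class + outside

print(estimated_workload(3))  # 180.0 hours: the "golden ideal" Tom mentions
```

By the same rule of thumb, the 250-hour average Tom cites means students are doing roughly 70 hours more than the ideal per course.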
I'm not advocating for a clockwork course where I say do X, you do X, and then you get a good grade. What I'm advocating for, though, is focusing our attention on what we can get rid of in our learning experiences. The reason everybody feels overloaded is because they are already overloaded, both because of their other responsibilities in their lives and because we've designed things so full that, you know, we can't get the suitcase shut unless we sit on it to zip it up.
With artificial intelligence, we can look for places where we've duplicated effort, or where things in different weeks or different units all address the same learning objective while we only get to objective number six at the very end of the class. That makes our courses iterative and scaffolded; it helps us do backward design. We can say: here are all of the documents for my course. Hey, artificial intelligence, go look in this folder and tell me how much time people are spending on each of the course learning outcomes. And you will get a response that is remarkably useful. It's going to be wrong, I guarantee it, but it's not going to be wrong enough to be useless. So here's the fun part, if we're thinking about universally designed experiences. If we allow our learners to show what they know through a lot of different pathways, it's imperative for us to have clear expectations for what we're grading, how we're going to assess it, and how much effort that really takes, because it could come to us in lots of different formats. So with artificial intelligence, if we're thinking about three different kinds of responses, feedback or assessments, AI can probably help us with two of them. The first we've already talked about at length, right? Multipath practice activities: tutoring sessions with AI, making up those flash quiz cards, or give me this in a different way, or simplify this, or tell me the main points. Artificial intelligence can help with that on an individual level. But there's also a second level of inclusive design that we would always want to do as well.
And here is a little bit of blue-sky futuring: double checking to make sure that people have the foundations before they move on to more advanced skills or levels of professional conduct. These are typically the course paper or the midterm examination or the final examination we give in our classes, and they're typically geared toward demonstrating a small but necessary core set of skills or ideas. Because they're so focused, because it's show me these five things over and over, artificial intelligence can help us not only with that first level of practice, but also with this next level of double checking to make sure that students have the core knowledge before we let them loose on the next level. This is differentiated instruction with an AI twist. And by next level, that could be going from unit one to unit two, or it could be going from my course into your course next term.
Now, there's also a third level of accessible design where we probably shouldn't, or maybe won't ever be able to, use artificial intelligence, and that is summative professional conduct. This is sometimes part of our programs in colleges and universities, and sometimes it's not. If we're thinking about our colleagues in nursing, they have professional exams that all the nursing students have to sit in order to obtain licensure. Our teachers have to go through national examinations that show all of those competencies before they can get a teaching license. I'm also thinking here about lawyers and mechanical engineers. Everybody has particular sets of professional skills for which you can earn the degree and get the credential, but there's also the licensure or professional conduct element. Those are places where, if we use AI at all, we should be using it as a way to shift people into professional learning scenarios more quickly.
I was just talking with a colleague a little while ago, and the overarching theme of our conversation around AI was using it in a universally designed way to accelerate the pace of learning. AI does allow us to skip over some of the manual work that we would do in order to get to a more advanced conversation around a topic or an idea. So if I'm working with my upper-division undergraduate students and with graduate learners, yes, I'm going to use artificial intelligence to create something that we can stand on and then critique from an approaching-expert perspective; that's the goal of UDL, learner agency. If I'm teaching those introductory undergraduate courses, that's why I won't use artificial intelligence there. If we're thinking about universally designed experiences, we talked a little bit about how learners show what they know, take action, and express themselves. What AI tools could help us with, in terms of assessment, is creating things that artificial intelligence can't then respond to by itself. This is a paradox, right? We're asking our artificial intelligence tools to craft prompts or study questions that they might not be able to answer well themselves. For example, we ask an AI tool like Perplexity or Claude or ChatGPT to read a set of materials, give it the chapters you're asking the students to read and the videos you're asking them to watch, and then we ask it to create examination questions that require higher-level skill, thinking and application. The reason that AI can do that is that it can write questions that it itself is bad at answering. Most AI models have been trained on skillfully and inclusively designed resources; they actually do know how to ask deeper questions. But then, when you ask the AI, how would you answer that question, it'll probably give you some junk. And so that's a danger for us when we're thinking about universally designed experiences.
If the AI is doing something in the place of the learner, that's not a help or a way to lower an access barrier; it actually creates a knowledge barrier itself. If the AI is doing something alongside you, in support of you, or to help you skip over early steps that you already know as a learner, that's an awesome way to accelerate and get to higher-level thinking with your learners.
So, you know, bringing that back to what we can do: I fed a question back into the same AI, and it didn't know how to answer it, because it required creativity, it required 'what if' thinking. And AI is really bad at what if. AI knows what is, not what if. Knowing that, we can use it to design activities, engagements and assessments in an inclusive way that allows us to refocus some of the creativity that we're asking our learners to experience or to give to us.
Darren: I think that's a really important point that you make: giving some of that time back through what it's good at doing currently. And that is summarizing: throwing in last week's lecture and asking it to write a brief summary starting with 'last week we looked at'. It's giving some of that scaffold, and giving a quick reflective exercise for students, as we've touched on.
And speaking of the noise you're talking about: just adding more and more to that workload is a real risk, a real thing that's there, so people are feeling overwhelmed. It reminds me, putting on my audio engineering hat from a previous profession, of what was known as the wrist-to-elbow technique. Basically, you push up all the faders on the board with your arm to make everything louder. You'd be in the studio and you'd have a guitarist say, I need the guitar louder. What they really want is everything else reduced. So you just keep adding, adding, adding, because that's the natural thing: I want something louder, so I need to add, rather than asking what I can take away. So your point about taking some things away is really valid. What can I do without in this space? How can AI help me do that?
I just wanted to touch on the question that we had for the title of our show, and that is, is AI the perfect companion for UDL?
Tom: Well, listeners, I'm going to nerd out with Darren for eight seconds here, and then I'm going to bring it into an answer. Darren talked about the wrist-to-elbow technique. You can imagine someone at a sound board with lots of sliders in front of them that control all of the different instruments and feeds coming into the mains and being recorded, just taking their arm and shoving everything as loud as it can go. If you're curious and you want to know why most music today doesn't sound like it used to, go look up dynamic range compression. Back in the day, as sound engineers, we would move each of those sliders so that the mix coming through our headphones sounded like the people were actually playing there in the space. These days, things that are louder online and on the radio get more attention, so audio engineers have been asked, many under protest, to just shove everything as loud as it can go, until the softest thing and the loudest thing in a song are at the same volume. And if you're thinking, yeah, music back in the day had more nuance and more soul, blame those dynamic range trends.
How does that answer the question of is AI the perfect companion for UDL? If we are applying artificial intelligence tools in a rote or a formulaic way to our access barriers, then that's like shoving all of those sliders up to maximum and just making everything loud. Yes, it gets more attention, and you hear a lot of claims about artificial intelligence out there right now, and people are just sort of shouting about it because nobody really knows where it's going, I certainly don't. So when we think about artificial intelligence being a companion for our universal design for learning practices, we have to have someone like Darren, who is an audio engineer who knows how to listen to the signals and how to adjust the sliders based on the circumstances themselves. In other words, we've got fabulous tools and we need people with expertise to be able to use them in a virtuosic way.
Most of us can use them in ways that help us jump over, or leapfrog, redundant or repetitive tasks, like we've talked about. So in that way, artificial intelligence is a great companion for our UDL practices, because it allows us to create a stronger foundation of more accessible environments. It also allows us to refocus our energy away from the simple, everyday, repeated things that we do to lower barriers, and start really applying our creativity and energy to the harder questions that AI, at least for now, can't touch at all.
So I would love to use AI as a companion for UDL. And listeners, let me know how you are using it too, and tell our hosts, because maybe you will be a guest here on the podcast one of these days. I know they're eager to hear those use cases, and I would love to hear them too.
Elizabeth: Thanks so much, Tom. I think everybody really appreciates what you bring to the UDL space, and how you support other educators to start with the confidence to have a go, to take that plus-one approach. And your perspectives on how we can have AI as a complement to that, how we actually use it in a really strategic and effective way, I think that's just so insightful.
And something else I'm really, really excited about is that you have a book coming out soon on scaling up UDL. So I'd love for you to tell us a little bit about that.
Tom: Fantastic, this is probably a good way to wrap up our conversation. Elizabeth, Darren, listeners, I'm currently almost finished writing a book called Universal Design for Learning at Scale. It's an advice guide for presidents, provosts, deans, VCs, boards of trustees, people who are campus leaders.
It makes the argument that learning happens in interactions of all kinds, not just things that happen in the classroom. How do our students learn from us about how to navigate our institutions and their systems? When they're working with librarians, or standing at the registration desk, or on the phone with the information technology help desk, they're learning how we do what we do at our colleges and unis. And if we approach all of those learning interactions through the lens of lowering access barriers, how can we make it a little smoother for folks without necessarily giving up the complexity and rigor that's necessary for those things to happen? That's an essential business decision: it actually helps save us time and energy and money in the long run. And I say in the long run on purpose. Listeners, I'm holding up an index card here that has a Post-it note attached to it at an angle; Darren and Elizabeth are smiling seeing this thing. I was on one of my long fitness runs, training up for a half marathon about nine months ago, and the idea for the book finally just gelled. I was only about half a mile from my home, and I found the energy to really sprint, because I knew I was going to lose this if I didn't write it down. And I'm holding up this index card that has the five parts of the book. So listeners, here it is in miniature, before it's even published next year in 2025. If we're going to think about universally designed learning experiences as an institution, we have to do five things from the top, because grassroots gets us only so far.
We have to first make UDL possible. We have to have people and resources in place that support basic accessibility. Things like captioned videos, transcripts on audio files, alternative text on all our still images on our websites, having more than one way to get at information that we're presenting to people. A lot of this is just legal compliance, and some of it goes beyond compliance into general accessibility. Now, you can have an accessible campus without doing UDL, but you can't do UDL without having that base level of accessibility. So we have to create conditions where it's possible to do UDL. So that's 1.
2 out of 5, we create conditions where UDL is permitted. This is where we update our policies to include UDL, where we set learning goals, we make it part of our strategic plans, those sorts of things. Now, this step takes the longest amount of time, so it's important to do it early in the process of change. I'm thinking of a book by Chip and Dan Heath from 2010 called Switch: How to Change Things When Change Is Hard. There's psychology behind this. How do we make changes in institutions that move glacially slowly and don't like change? We're going to talk about that, you know, in the next portion.
So from possible to permitted, then we have to make UDL supported. This is where we put universal design for learning expectations into our standard job descriptions. We make it part of the everyday practices of our institutions. This also means that we take the burden of inclusive design off of instructors' shoulders alone and distribute it among all of our staff. If an instructor goes to the media team and says, hey, I want to do a flipped classroom, they say, fantastic. We'll send a camera person and we'll help you break that up into four-minute segments. We'll help you with the captions too. That's just what we do. This step moves from "this is a burden" or "this is extra work" to "this is just part of what we do, this is everyday." And it's distributed among us.
So possible, to permitted, to supported. Number four is rewarded. How do we make UDL part of the reward structure at our institution? How do we make sure that, if we're lowering access barriers, doing so counts toward promotion and tenure for our instructors, or the merit and raise system for our staff members? It counts toward bringing back our part-time instructors, or maybe moving them to the front of the queue for selecting their next courses for the next term, when they engage in inclusive design practices. In terms of reward, how do we make it part of the promotion and salary progression for staff members?
And then the last part: possible, permitted, supported, rewarded, and the fifth one, the last one, expected. When we do the new faculty orientation, when we bring someone on board to our institutions, if we've established that culture of universally designed experiences, people coming in new will just look around and say, oh, everybody's doing this. I guess this is expected. So we put that into the orientation for new employees, into the new faculty orientation, and then we celebrate it. We make awards for inclusive practice. We do the scholarship of teaching and learning to write articles that celebrate the work that we're doing to lower those barriers.
I'm thinking of Goodwin University in Connecticut in the United States right now. They've gone almost all the way through this whole process and I'm very proud of the work that they've done. So listeners, watch out in 2025 for this new book, it's called UDL at Scale. And Elizabeth and Darren, thank you very much for having me on the show. This has been fun. And listeners, you know that I always want to hear what's going on with you too. So if you go to thomasjtobin.com you'll find a lot of different ways to get in touch with me. I'd love to hear your story about how you are using UDL and AI to lower barriers for your learners.
Elizabeth: Thank you so much Tom for so many valuable insights today. I'm absolutely buzzing and I can imagine I'm going to listen back to this just to soak it all in again. I am really looking forward to reading more about all of those aspects that you're bringing to scaling up UDL, thinking about what's possible, permitted, supported, rewarded and expected. And I'd love to really thank you for all the work that you do in this space, particularly around that supporting, you do so much to support individuals and institutions to do this work and it wouldn't be as far progressed as it is without all of your efforts, especially here in Australia. So, a big thank you to you for that.
Now I'd also like to mention that our listeners are probably very used to having these three particular questions covered in each of our episodes. So usually we try to get to, you know, why is whatever topic we're covering relevant, important, or challenging? Who does it benefit? And what's the role of humans in this AI discussion? And I think you have gone above and beyond in addressing each of those questions today. So thank you so much. And look, if people do want to get in contact with you, which I'm sure they will, you have such a huge fan base here, we're going to make sure that your contact details are in the show notes on the ADCET website.
Tom: Splendid. Thank you very much for having me here, and for acknowledging that it absolutely takes a huge number of people to create the expertise and the conversation. So I always love hearing people's stories, and listeners, I'd love to continue this conversation with all of you.
Darren: Fantastic. Thank you so much Tom, particularly for allowing us, as Elizabeth said, to pick your brains, and for being, if I can steal the phrase from you, our Plus One here today on ILOTA Things. We could continue talking of course for another couple of hours, but unfortunately that is our time for this episode. You can get in touch with us at feedback@ilotathings.com or you can visit the ADCET website for more details on that. So thank you everybody for listening and we hope that you can join us next episode as we continue to explore ILOTA things.
Till then, take care and keep on learning.
Tom: Cheers everybody.
Elizabeth: Until next time.
Announcer: Thank you for listening to this podcast brought to you by the Australian Disability Clearinghouse on Education and Training. For further information on universal design for learning and supporting students through inclusive practices, please visit the ADCET website. ADCET is committed to the self-determination of First Nations people and acknowledges the Palawa and Pakana peoples of Lutruwita upon whose lands ADCET is hosted. We also acknowledge the traditional custodians of all the lands across Australia and globally from wherever you may be listening to this podcast, pay our deep respect to Elders past, present and emerging, and recognise that education and the sharing of knowledge has taken place on traditional lands for thousands of years.