ADCET

ILOTA Things: Episode 13 - The Future of UDL in an AI World (with CAST)

Darren Britten, Elizabeth Hitches Season 1 Episode 13

Welcome to ILOTA Things, the ADCET podcast where we explore Inclusive Learning Opportunities through AI. In this episode, titled The Future of UDL in an AI World, we're going to chat about AI and UDL and ask what education looks like in an AI world. We are joined by Michelle Soriano and Bryan Dean from CAST, who help us break down this topic and provide some real-world insights and tips for supporting educators and students.

More information including episode notes and links are available on the ADCET website.

Announcer: Welcome to ILOTA Things, the ADCET podcast where we explore Inclusive Learning Opportunities through AI. In this series, we'll explore the exciting convergence of universal design for learning, UDL, artificial intelligence, AI, and accessibility, and examine ways in which we can utilise emerging technologies to enhance learning opportunities for educational designers, educators, and students. Now, here are your hosts, Darren, Elizabeth, and Joe. 

Elizabeth: Hello and welcome from whenever, wherever and however you are joining us and thank you for your time as we continue to investigate ILOTA Things, that is Inclusive Learning Opportunities Through AI. My name is Elizabeth Hitches and joining me on our artificial intelligence, universal design and accessibility speeds gates is my co-host Darren Britten. 

Darren: Hi there.

Elizabeth: And joining in spirit, Joe Houghton. In this episode, titled The Future of UDL in an AI World, we're going to dive deep into the UDL 3.0 guidelines and look at some of the real-world applications of AI and accessibility to further UDL practices. Now, in order to get into all of the cracks and crevices of the UDL guidelines, we really need some well-seasoned and well-respected experts in this space, and who better than Michelle Soriano and Bryan Dean from CAST. CAST is known around the world for its proactive approach to reducing barriers for students. CAST developed the Universal Design for Learning framework and now supports UDL practitioners right across the globe. Michelle is a professional learning specialist on the accessibility team at CAST and provides technical assistance support for the Center on Inclusive Technology and Education Systems, known as CITES. Bryan is CAST's innovation specialist and a man of many hats, literally. So it is my pleasure to welcome Michelle and Bryan to our discussion today.

Michelle:  Hey, hey, hey, everybody. Michelle Soriano here. Thank you so much for having me. I am really looking forward to digging in and just having some great conversations.

Bryan:  And I'm Bryan Dean, and I'm so stoked to be here. As I always say, I'm hat’ed, tatt’ed, and ready to get at it. 

Elizabeth: Thank you so much for joining us. Now, firstly, we'd love to hear about your work at CAST. So I'm going to start with you, Michelle. What does your role as a professional learning specialist entail? And thinking about the UDL Joy Principle, what brings you the most joy in that role?

Michelle: Well, I love that question. So, you know, the great thing about being a professional learning specialist is that I get to navigate through many different aspects of supporting educators, supporting vendors, supporting individuals around accessibility and universal design for learning. So whether that is providing coaching, training, really just being a thought partner, building courses, just finding ways to share the good news, the information, the supports. Because first and foremost, I am a mom, and second, I'm an educator. I've been in the classrooms, I've been in the trenches. I know exactly what the day-to-day planning is, and I'm married to an educator, so we still eat, sleep, breathe education on a daily basis. And knowing that there are some things that we can do, that we can design, to support learners and educators across the globe is just amazing. So I just have fun. I love to bring myself. I love to just meet people where they are. I like to consider myself a BFF: I'm going to meet you where you are, and we're going to learn and grow together. And that's really what gives me joy as well. I feel like I was put here on earth to support educators. I feel like that's a passion, a calling, to help educators so that we can help all students succeed. And that is the bottom line. I just love kids. I love learning. I love supporting one another. And I think that we're better together, always.

Elizabeth: I love that, Michelle. And the description of being a thought partner, I think thought partner is just the best way to conceptualize that. So I think many of us here are thinking, wow, we could really do with Michelle as our thought partner.

Michelle: I love that.

Elizabeth: And Bryan, what does your role as an innovation specialist entail and what brings you the most joy in that role?

Bryan: Well, first of all, I want to thank you for asking me the question after Michelle; that's fun to follow up. I love Michelle, Michelle is like sunshine to the soul, so I always love hanging with her. And quite honestly, that's part of my role: I get to work with these really incredible educators, who may be at CAST or may not be at CAST, and really brilliant partners, and we get to find human-centered solutions to human-centered problems. To me, that is the core of whether it's UDL, or cultures of thinking, or whatever progressive asset-based pedagogy you want to go after. It's about humanity. UDL isn't just about learners, UDL is about the human, and humans finding better ways to live, and including all humans in that life. So what brings me joy? Just the essence of what I get to do, right? If I were to give you my day to day, it changes every day. But I'm like a human Swiss Army knife, I guess. I mean, like a UDL Swiss Army knife, right? I get to look at problems and say, what's a funky solution that we might want to try? Something different that we haven't looked at before. And then I get to dream it up and work with great people to make it come to fruition. So if you ask me what brings me joy, it's that and the absolutely incredible paycheck. No, I'm just kidding. I'm just kidding. That's like a donation, almost. No, but it is definitely thinking through problems and finding different solutions to those problems that we hadn't really thought of before.

Elizabeth: I think if there was more human-centered design across the whole of the higher education sector, then maybe with some of those barriers in systems that have been there for a very long time, you might be able to see them and see the human in the experience. Sometimes I hear academic staff or students feel almost overwhelmed by processes, and you think, has it got the human at the center? It's such an interesting approach, and one that's really great to be reminded of as we enter this space of AI and accessibility and UDL.

Bryan: Systems have a tendency, because they're a collection of people, to want to perpetuate themselves, right? And oftentimes when they do, they perpetuate themselves, much like AI, in the most efficient way possible. But sometimes that leaves the very human side out of it. And then the system is driving people, instead of people informing the system. So UDL, I think, disrupts that and gives us pause. And I think that's one of its true powers.

Darren: Absolutely. Thank you, Bryan. Look, I think you've hit the nail on the head: often, for efficiency, the human is left aside. And that's unfortunately what we've seen over a long time for some marginalized students; particularly, with my hat on, students with disability. Efficiency has quite often meant that building accessible systems, or accessibility into the curriculum, is an afterthought, or extra work, or seen as a whole range of different things. And UDL certainly hopes to address that. But one current joy that I get in this is around the opportunities that AI brings to that accessibility space. Now, we often hear the adage that you can do accessibility without UDL, but you can't do UDL without accessibility. And given the work that CAST does, particularly with UDL and helping educators create accessible educational materials and experiences, can you tell us why, for you, UDL and accessibility are so fundamentally linked and important?

Michelle: Yeah, that is such a great question. And boom, it's true. You cannot have UDL without accessibility. When we think about accessibility, it's really important that we're understanding what is it that we mean when we're talking about accessibility. Right?

So we like to think of accessibility as this: any individual, whether they have a disability or not, should be able to acquire the same information, engage in the same interactions, and enjoy the same services in an equally effective, equally integrated manner, with substantially equivalent ease of use, as an individual who's not experiencing that same barrier. And so when I think about that, I think about the purpose of universal design for learning, right? It is to be proactive in designing for every individual to have opportunities to engage with, to relate to, to make sense of, to learn, to grow. If we are not thinking about accessibility from the lens of individuals with disabilities, of individuals who might be English learners, of individuals who might just have a preference for moving around in the learning space, we are not universally designing. And so we cannot say that something is universally designed without thinking through the components of accessibility. Is it perceivable? Can everybody take in what I'm designing and presenting for my learners? Can everybody sense it in a way that's relevant to them? Is there a way for people to feel safe in the space to respond? These are all components of accessibility, and they're layered through the components of Universal Design for Learning. And so we think about accessibility as that foundational piece. If you look at the Universal Design for Learning framework, you will see that the first row is identified as the row of access. If our learners do not have access, it doesn't matter how engaging we designed it, they are not going to have access to it. If they don't have access, it doesn't matter that I did my best to make sure it was a large font, because they might not have the assistive technology, or the technologies that they need, to engage with it. So they go hand in hand, right? They go together, like ba, ba, ba. *sings* You can't take them apart. They have to work together. Bryan, would you add anything to that?

Bryan: Yeah, I would. I mean, I don't think I need to, but I will always jump in, and I will say this: the straight dope of it is, it's not about whether or not you're going to need access, or whether or not you're going to need assistive technologies, or whether or not you're going to need AI. It's a matter of when. Right?

And that's a key point. And I think that when we look at accessibility, we look at that row as well that Michelle mentioned: what does access mean? And then when we think about that, in this idea of education being this human endeavor, being inherently emotional work, and being about making lives better, access is a key point to that. You can't get to it, and you can't enrich the wealth of knowledge, without having people who have access and means to get into it. And so whether that's some kind of physical access, or some kind of intellectual access, or some kind of neuroinclusive access, we have to design with that in mind, because we don't know who's going to use it. We don't know when they're going to need to use it, but we can build that in as part of really strong design.

Darren: Yeah, fantastic. And while we're on that topic, to bring AI into that space: where does it sit with UDL and accessibility? And what are the benefits that you think AI brings that you'd love people to be aware of?

Bryan: So I have to tell you, I like this term, this idea of artificial intelligence, but I think it's wrong. And I know I might be the only one that says this, but I think that the naming of it as artificial intelligence is something that we have to look at. And this leads into why I think UDL and AI are so entwined with each other. Because what really happens with AI is that AI is just a generative, algorithmic search that brings you information, consolidates it, and gives you an output of some kind, right? So it's not thinking through all of that, but it does it so beautifully, and we can use natural language input to do it, and it searches everything so quickly, that it feels like another thinking partner. But really what it is, is assistive intelligence. It's assisting us in the work that we do. What AI does is bring all of that back to you, and it takes on a lot of that extraneous cognitive load, the load that doesn't necessarily deal with the subject matter itself, and it takes a lot of that work off. That's great. What it then does is free up a student or a learner or a person, anyone, to put more into that germane cognitive load of learning and understanding and synthesizing. That's why it's assistive, right, and not why it's artificial. It's assistive because we work with it. And when we take a look at UDL, UDL has these great principles: what is engagement, right? What is action and expression, and what is multiple means of representation?

Well, AI has given us this great ability, as humans or as learners, to advocate for different ways of representation. It takes the focus, not all of it, but a large part of it, off of the product that is being produced, and puts it more on how I am crafting what I'm asking AI to do. That's wildly different from what we have seen in education before, right?

Before, we've said in representation, well, you can make a PowerPoint, or you can make a video, or you can write a paper, and that's what we judged as the basis of knowledge. Right? But now we get to look at what the input is. And it's one of the first times that we've really been able to see what the input to the AI output is. Right? And then it can build me a video game. That's really cool. Right? But what did I have to ask it to do? I had to ask it to build a world, to come up with rules for that world, to help me come up with a way of scoring, all these different pieces. You cannot tell me that that is not some type of deep analytical thinking. That's DOK 3 and 4. That's depth of knowledge 3 and 4. And what AI has allowed us to do is to see the product of that, but also to allow our students to dive right into that and show us what they really know in building the prompt to get there. So now we're judging a lot of different pieces, and that's true access again, right? Because now we're looking at all of these pieces that come together. That's assistive.

Darren: I couldn't agree more. I don't think you're alone in thinking that, Bryan. I've certainly been pushing for some time that AI is really just the next assistive technology, and they're so integrated you really can't separate them now. But I do like your word, and I'm going to steal that: assistive intelligence. Everybody's going to be using AT. That's the whole thing. It's getting into everything, so let's normalize AT at the same time. To you, Michelle, the same question. What wonderful benefits do you think AI is bringing into this space that you'd like people to be aware of?

Michelle: Yeah. So, you know, I'm going to plus-one Bryan, of course, because Bryan has taught me so much in this world of AI. But from the practitioner standpoint, which is what I relate most to, I think AI just brings an opportunity for us to have that assistance in managing things and thinking things through, right? You're not always going to have a thought partner with you in person every step of the way, but I do have a thought partner that I can bounce things off of through AI in some instances. And if I'm directing it and I'm guiding it and it's allowing me to have some sort of counterpart in thinking, I think that is a win for us all around. And like I said before, I'm a mom. I'm a mom of four. Two of my kids have gone through a system of receiving special education services or 504 plans. I have another kiddo who is identified as gifted. It doesn't matter; we need supports in understanding how to provide scaffolds, how to provide different options, even. My mind gets to a point where it's like, oh man, my kiddo has this thing for homework and I don't want to overstep and do it for them, which, hint, hint, I have done in the past because I didn't always have the resources or tools. But what can I do to support them? What options? Because I'm out of ideas. Well, guess what? I can go into my nifty old AI and give it a question and say, what options can I provide for scaffolds? And poof, it gives me things. So I think there are lots of opportunities, in every single role that we bring to the table, to really leverage it and utilise it in a way that's meaningful, that's relevant, that is real-time support and thought partnership, and that is something that we haven't always known that we've had. You know, AI has been out there forever, but we've not always known about it as much as we know about it now.

Bryan: And if I could just piggyback on that, I think that's our next frontier. Our next frontier isn't necessarily the building of more AI models. The next frontier really sits in AI literacy and AI use: how do we help students and ourselves build really great prompts, right, that elicit what we need, and that also work on things like metacognition and executive functioning? Like, if I want to ask AI to build a study plan for me, and I only have three weeks to do it, it can do it. But if I use the right prompting, I have to put in how much time I'm willing to dedicate to it during a week, right? Each time I study, how many times a week, how many weeks do I have to finish this? And I need a progression. Well, if I have difficulties in executive functioning, what I'm actually doing is building my executive functioning skills while I'm building the prompt. Right? That is the type of thing that we need to look into as our next evolution of AI. It's not about catching our kids, our students, cheating, or us cheating. It's about us really looking at how we become connoisseurs and not just consumers, right? How do I know that this is the right thing to use for this, and what does it help me build? And teachers, educators, are always going to sit in a central place in that, helping students. The scaffolding hasn't changed, right? That's not what's changed. It's our ability to be more transparent with it.

Elizabeth: I think that's a great point, Bryan. I heard this really interesting quote the other day about our fixation on students cheating if they're using AI. And the quote just said, no, this is not an AI problem, this is a cheating problem. When it comes to severe academic integrity issues, if a student is just going to use it to write a whole essay in a few seconds and then hand that in, that's not an AI issue. That's a cheating issue. There are ways that we can use AI that actually support the process in a non-cheating way. But because of anyone who's just plugging in and submitting that assessment, we can't be looking badly at AI just because of the potential of it. Yeah, I think it's a great reminder.

Bryan: And we have cheated for a long time as humans, right? In schools, we have done that for a long time, whether you paid another person to write your paper, or you took your older sibling's paper and just rewrote it and turned it in for the same class, or whatever it may have been. Cheating has always been there. So I don't think the issue is cheating. I think the issue is the ability to do it so quickly, and the fact that it's available to everybody. But once we really start siphoning through, what's the common language that a generic ChatGPT response will give me? What's the phrasing that I know ChatGPT's writing voice is always going to be in? We've kind of come back to that point where we're minimizing the cheating that's happening. Instead, this is a great opportunity for us to really start using the tool as a teaching tool, as a thinking partner, like Michelle had said. Right? And who knows where we're going next? Prompting and literacy are the next horizons that we really can look at, and UDL fits those so perfectly.

Elizabeth: Absolutely. And as a mini segue between this and our next question, I'm going to mention a tool, which I think either Darren or Joe introduced me to, called Goblin Tools. Now, rather than a student just getting overwhelmed with an essay task and reaching that point where there's a time crunch, they don't want to fail, and they turn to AI to do the thinking and the writing for them, you can have a tool like Goblin Tools where you can say, I need to write this essay, what are the steps? And it'll break it down into a list, like a to-do list. And suddenly that overwhelm of, where do I even start, becomes a list, and you can dive further into every single step. So if it says, explore your topic, and you think, how do I do that? That's what I'm stuck on. You can explore the steps in that even further. So that's one of my favorite tools to mention for the moment. But I'd love to hear about your favorite tools. So I'm going to start with Michelle. Michelle, do you have some favorite classroom examples of AI, or some favorite tools, or some really quick wins for teaching staff who are just starting to reach into this space?

Michelle: I love this question because it's like give one, get one, right? We're all going to share ideas. Well, I think some of the best tools that I've been introduced to this year have been things like NotebookLM, and it is amazing. We've talked about how educators, all of us, are time-crunched, right? There is just something about how the time in a day is not the same as it was when I was a kid and it lasted forever. I don't have enough time in the day. So I can upload PDF documents, Word documents, any type of information into this AI tool, NotebookLM, and I can ask it questions about the topic. I can have it provide a summary. But let's just say I am doing my mom Uber time, where I'm just driving my kids around and I don't have a lot of time to read with my eyes. Well, I can read with my ears, by uploading it and having it generate a podcast, and the podcast can summarize the information. I can be listening to it as I'm dropping my kiddos off wherever they need to go. It is a win-win-win. Not only that, but if we think about universal design for learning, we want to practice what we preach, right? We want to have options in how we navigate learning. We want to have options that are purposeful to our preferences and our needs. This provides us a way to go through and personalize those options in how we're learning and how we're continuing that learning process. And there are other things, like MagicSchool AI. I told you before, I'm married to a teacher, and so lesson plans are still a hot topic in our household. MagicSchool AI can help you with that lesson planning process. So as you're thinking about pulling in opportunities that maybe you've never offered before in previous years, and you're like, man, I want to do something a little bit different with this standard that I'm teaching, MagicSchool AI will help you.

It will allow you to plug in the content, ask questions, really provide the goal that you're trying to achieve, and it can help you in that lesson plan writing. And then on a day-to-day basis, y'all, I love Co:Writer. Just simple Co:Writer. I can write an email and then I'm sitting here thinking, I want this to sound just a little bit more professional. I can ask it, hey, here's my email, can you help me identify ways to make this sound more professional? And not only am I using that, and I still edit it and go through it, but I'm building up my vocabulary like nobody's business, right? Because we're all learning new things daily. And so when we leverage AI and we're able to use it to help us grow, I mean, mic drop, that is just a win all around. So those are some of my favorites.

Elizabeth: I love that idea coming through from you: that there's a difference between passive engagement with AI and active engagement with AI. If you're engaging as an active learner, learning through the process, being part of that process rather than just being a receiver of what it spits out, it's a very different thing, and there are so many benefits to that. And I think that's maybe something research should explore. So just putting it out there: anyone researching in the AI space needs to do some work looking at active versus passive AI use, because I would love to read about that.

Michelle:  Great idea. Great idea.

Elizabeth: Now, Bryan, I'd love to hear from you. What are your favorite tools or quick wins?

Bryan: Well, I'm gonna break script a little bit, to be honest with you. I love Goblin Tools. I think Goblin Tools is brilliant if you're trying to break down tasks and you are in that neuro-inclusive space and you have a little bit of neuro sparkle, which we all have, right? We're all a little neuro-sparkled. Task paralysis is a real thing, and I come across it all the time. I know that it may not seem like it, but I have pretty severe ADHD. I know, don't be shocked. Everybody knows that. So task paralysis is a huge thing for me, and Goblin Tools helps me break that down.

The other straight tool I'm going to give you is, if you like MagicSchool, the steroid version of that, and that is EduAide.ai. EduAide.ai is one of my favorites. It covers everything from planning, to feedback, to design; it helps you design feedback, assessments, questions, games, activities, practice, presentations, whatever. But it also breaks it down by lesson frameworks, foundation frameworks, or unit frameworks. Then it also breaks it down by approach: am I looking for something that's UbD, like a backwards design? Am I looking for something that's Montessori-based? Am I looking for something that's just direct instruction? Am I looking for something that has a UDL base? It'll tell you, and it'll start to plan it out for you. Are they perfect? Of course they're not perfect, and none of the tools are, and they shouldn't be, because the minute that they are perfect is the minute that we don't pay attention to them. So those are two tools that I would mention. But like I said before, I think the next frontier is really advanced prompting.

Two of my favorite prompts: one of them is called PECRA, P-E-C-R-A is the acronym, and it stands for your Purpose, your Expectation, your Context, your Request, and then the Action that you want from it. The purpose is the reason for creating the prompt; it helps you clarify what you're going to do. The expectation describes the outcome, the type of response that you're expecting to get from the model. The context provides additional information. The request clearly specifies what is being requested from the model, and the action indicates specifically what you want the model to do. So as an example, if I want to write a PECRA prompt: my purpose is, I want a prompt that will help a student effectively prepare for a math exam, kind of like what we were talking about. My expectation is, I expect to receive a detailed study plan covering the main topics needed for this exam. The context is that the student has two weeks until the exam, can study approximately three hours per day, and struggles mainly with geometry and algebra. My request is, based on this information, create a study plan. And my action is, organize the plan starting with algebraic fundamentals and then moving on to geometry, and so on and so forth. When I put that in, what I have done is really supercharge it, whether it be ChatGPT, Claude, Gemini, Copilot, you know, your mom's basement AI model; you've supercharged it, and you've asked it to really think about what you're doing. That's a power. So that's one that I really, really love. The other one that I dig is called PAIN, P-A-I-N, and it stands for Problem, Action, Information, and Next steps.
And so, same kind of idea: the action is what I'm looking for, the information is what I put into it, and the next steps tell me what I need to do. So if my problem is, I'm struggling to manage my time effectively, my action is, I need a personalized time management plan. The information is, what strategies or tools could somebody recommend? And then my next steps are, provide a step-by-step plan that I can start following immediately. Again, this is a supercharged prompt, right? Because it gives me actionable steps and breaks it down, so it's like your own tailored Goblin Tools, especially their time management portion. And you can use that in any of the models that are out there. Building new models isn't the important part anymore; building really great prompts is. Prompt engineering is really powerful. And so those are the four things that I would offer.
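For readers who want to try the PECRA and PAIN scaffolds Bryan describes, they can be sketched as simple string templates that work with any chat model. This is an illustrative sketch only, not a CAST or EduAide tool; the `build_prompt` helper is hypothetical, and the field values are adapted from Bryan's study-plan and time-management examples.

```python
# A minimal sketch of the PECRA and PAIN prompt scaffolds described above.
# The build_prompt helper is hypothetical; field values follow Bryan's examples.

def build_prompt(**fields):
    """Join labeled fields into one structured prompt, in the order given."""
    return "\n".join(
        f"{label.replace('_', ' ').title()}: {text}" for label, text in fields.items()
    )

# PECRA: Purpose, Expectation, Context, Request, Action
pecra = build_prompt(
    purpose="Help a student effectively prepare for a math exam.",
    expectation="A detailed study plan covering the main topics on the exam.",
    context=(
        "The student has two weeks until the exam, can study about three "
        "hours per day, and struggles mainly with geometry and algebra."
    ),
    request="Based on this information, create a study plan.",
    action="Organize the plan starting with algebra fundamentals, then geometry.",
)

# PAIN: Problem, Action, Information, Next steps
pain = build_prompt(
    problem="I'm struggling to manage my time effectively.",
    action="I need a personalized time management plan.",
    information="What strategies or tools would you recommend?",
    next_steps="Provide a step-by-step plan I can start following immediately.",
)

print(pecra)
print("---")
print(pain)
```

Either string can then be pasted into, or sent via API to, whichever model is available; as Bryan notes, the structure of the prompt, not the choice of model, does most of the heavy lifting.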

Elizabeth: That's a great tip, Bryan. And I think sometimes people might play with AI tools and try to use them a bit like a Google search used to be, do almost a Google search, and then wonder why they're not getting all these amazing things that other people are talking about. But you're so right, it comes back to that prompt, and being as specific as possible. I think what you're really hitting on is the fact that you can't over-communicate your wants to an AI model, and it's not going to be offended if you go down to the most specific detail and really emphasize exactly what it is you want and how you want it. It really helps. And I think that's a really great reminder, especially for students who might be starting in this space.

Bryan: I think that's where our next level is. And I know I've said it a million times, and probably ad nauseam on this podcast, but to be honest with you, teaching our students how to use a tool is far more important than teaching them a new tool. Right? And really, when we get into advanced tools and technology like AI, it is about routines, right? We're going to use this thing specifically at this time. It's just like a thinking routine that we would use, or a habits of mind routine that we would use. AI prompting is just that: it is a routine. And pretty soon, going back to classroom examples, you'll see students say, you know what, this is probably better if I use an ACDQ prompt, or I use a PAIN prompt, or I use a, you know, a create prompt. And then they start to really see the routine in what they're using. And that's how we get really great responses and really great outputs. But it's also how we get really detailed inputs, which, going back to that first point, show what students have learned, what they're putting in as well as what they're getting out.

Elizabeth: Absolutely. And that really makes me think about something we were touching on earlier. You know, there's a lot of concern around academic integrity and AI, mainly because of the availability of these tools. Maybe a student wouldn't be able to go and pay another student to write the assessment, but now they could have a free tool that'll do that for them. So there's more access to that if they wanted to take that more passive route. But what I'm really interested in is what else needs to be brought to the table. When executives are having meetings in higher education institutions, academic integrity is on the table, but what else needs to be there when we're thinking about AI?

Michelle: That's a great question, and it's something that I think there'll be a never-ending thought process to; it's going to constantly be evolving. But when I think of AI, and I think of higher ed, and I think of training, one, I think of the element of bringing in the how-to of using AI within the assessments. Right? We need to be training educators, future educators, students. How do you use this? Because, let's just be honest, they're gonna use it, right? It's here, we're gonna use it, so how can we train them to use it effectively?

I also think, you know, data privacy and security. We know that protecting student data is paramount, we know that that is the highest concern, and we know that AI systems often require really large amounts of data, and there are rising concerns about how this data is collected and stored and used. And so I think institutions need to consider robust policies, maybe some safeguard policies: how do you keep sensitive information contained? How are we making sure that we're being responsible in how we're using AI? I know there are some things out there where, you know, it asks for specific data. How do we make sure that we're being responsible in how we're leveraging that AI? And then also thinking about the impact on pedagogy. We know that AI has the potential to transform teaching and learning by providing personalized learning experiences. We know that we can use it for tutoring systems, as Bryan mentioned earlier, creating study guides, predictive analytics; there are all types of things that we can leverage. However, it is crucial to balance AI use with traditional teaching methods. I remember when I first started teaching, I started out in a school district that, you know, didn't have a lot of funds. It didn't have a lot of resources, actually zilch, I didn't have any. And so I was forced to really think outside the box about how I was going to engage and how I was going to really bring my teaching to the forefront. Then, when I moved into another school district that was just piled up with technology, that was what they relied on. When the technology went down or the internet went down, people kind of got freaked out, like, oh my gosh, what do we do? We don't have our technology. So we need to have that balance in how we're leveraging AI, because we don't want to lose that uniqueness of how we bring things, you know, to the table. How are we using that traditional teaching as well, in addition to AI?
So we don't want over-reliance on that technology. And then also, I think, just the human-AI collaboration. I think this ties back to, you know, how we're teaching people to use AI. Because AI should complement; it should be, like Bryan said, assistive intelligence. It should complement, it should not replace human educators, the human capacity to pull in the specifics for their learners, their preferences, their needs. And when we are able to promote collaboration between AI and human educators and instructors, that can enhance just the overall educational experience. So I think that is a conversation to be had as well: how are we making sure that individuals know how to appropriately collaborate with AI?

Elizabeth: You cover so much breadth there that really needs to be part of these conversations, thinking about how we make sure that we still have our teaching and learning muscles flexed. I think of it almost like astronauts, you know; astronauts go into space, they're in a weightless environment, but they still train their muscles to be sure that they're not losing too much muscle mass on that journey. And you're exactly right, Michelle. If we go all the way to AI and we forget that we're a partner in this, then as soon as there's a blackout, as soon as the power goes out, people are going to really miss all those skills that they rely on. So, I mean, that's a really great reminder. We need to be a partner in that. And that human-AI collaboration is a beautiful way to put it.

Bryan: I always tell people this: AI has no conscience. Let's remember that. It doesn't. It wants to perform your task as efficiently and as quickly as possible. You would never ask AI to come up with a defense system for your country, because it's going to come up with the most efficient way to do that, which might not be the most moral or the most ethical way to do that. Right? The same holds true for anything else. You may have AI write up a business plan for you, but it's going to write it up the way that maximizes your profits best, right? And it's not going to take into consideration the humanity of it. What we need to develop, or what we need to start thinking about, is how AI does not address things like identity and bias and who I am in my society and who I am in my role, and think about how we augment that. How do we take AI outputs and look at those with those kinds of lenses? Or, if we're going to build better AI models, how do we build them with that in mind? I think also this idea of literacy comes up over and over again, and I think that we've covered a lot of ground on that. But I think we also need to give ourselves grace, and we need to understand different levels of AI. One of the top tenets in andragogy is to respect people's time. And so let people use AI to do some of that, again, some of those extrinsic cognitive load pieces. But while doing that, let's teach the scaffolding of it: that's a level one AI tool, right? What does a level three AI tool look like? Do we have them yet? I don't know. Level two, where we're considering, you know, where we're bringing more in and we're saying, like, write a research paper. Well, Gemini's Deep Research is dope. Like, I think it's cool, but it's only cool to a certain point. Right? I still gotta go back in there and take a look.
But as far as crunching info and bringing it in, yo, it does it quick and it does it really well. And it tells me its entire research plan. Get out of here. That's great. But if I'm just evaluating it for a research paper, I'm losing my ground on that, right? That's always going to be academic integrity. So let's respect time, and then let's go into some deeper learning with it.

Elizabeth: I think something you really hit on there as well is that we do need to be conscious about biases that could be within the model, and it's absolutely not the model's fault. If we fed it only articles from the media, then anyone who is not represented, or is represented in a deficit way in the media, would then come out that way in the model.

Bryan: Yeah.

Elizabeth: It's like if we trained an AI only on one movie: the only source of information about the world that it would have would be that one movie, so everything would be viewed through that lens. I think it's really great to think about that, because sometimes I hear conversations where people might be thinking AI is really objective. You know, it's objective, it doesn't have an agenda, so it's just putting things together in a really objective way. But it's drawing on data that isn't objective, that is full of different human biases.

Bryan: Right? Yeah. AI is objective, but you aren't, and that's okay. Like, the other thing that we get very scared of in design is our bias coming through. You're never going to get rid of your designer bias. It's okay that you have it, but what is not okay is not recognizing that you have it and not looking for it, right? And designing to minimize that bias. But know that you're going to have it, right? And I mean that in the sense of designer bias. I think that when we start putting things in, we assume that AI is going to know what we were talking about, right? Or know that we didn't mean it in that way. And that's just not true; AI just doesn't really think that way, right? Because again, it's just a generative algorithmic search and compilation. It's not really like saying, I know you, Bryan, we've talked for a long time, I get what you're saying, let me handle this, homie. No, it doesn't do that. It just says, this is what you want, and here you go. You let your bias sneak through. So again, always: it's assistive.

Darren: Excellent, thank you, Bryan. Look, one thing, just to bring it back to students, I suppose, and particularly neurodivergent students, and some of these gen AI tools that are being used, certainly for the learner personalization and independence we've spoken about. There's this ability for students and educators to expand existing educational materials and curriculum in multiple ways, and coupled with that, what I'm seeing from certainly some students with disability is something you touched on, Bryan: that prompt engineering, you know, ask it what you want. It's something that a lot of students with disability have been good at doing for a long time: I need this, I need this to help me study, I need this to help me with my learning. You know, we've got access plans, learning access plans, equivalent kinds of things for accommodations. So I often hear students asking again for that: oh, but could it do this? And you go, well, ask it. They've already got all these ideas in their head. Could it assist me with this? Could it assist with this? You know, and once you've opened those floodgates and showed them how to use some of the tools and some of the prompts, they're off. Here's this tool that can do amazing things that they just did not have before without support. And I suppose the question I have with that is, when often that previous support relied on, you know, human resources that were largely time-intensive and labor-intensive, are you seeing a shift in any existing support structures into ones that actually foster and build that learner agency for students?

Bryan: Well, you know, Darren, I think that first part, foster learner agency, is the critical part, because to be honest with you, learners have always had agency, right? If any asset-based andragogical or pedagogical system existed that made students learn, look, if I had the answer to that, you guys would be paying big money just to enter the arena that I was in. You know what I'm saying? So there's not anything that's going to make a student learn, right? And there's nothing that's going to say, I've conned you into learning. Learner agency has always existed. What AI does, and what it can do, is foster that learner agency by building the environments and the on-ramps, and that's the basis of UDL, right? In UDL, we're not affecting the learning, because students learn on their own. It's one of those things that is theirs inherently; every human's inherent thing is that they can learn, right? And they can choose what they learn. What UDL does is build scaffolds and opportunities, and it builds on-ramps, and it builds great environments to explore that. And AI helps with that, right? And the ways that it helps with that are numerous, but one of the quick ways is that it allows students to be able to access it when they need it, right? And not to have to tell anybody, or to sit at the table and ask anybody. Right? Like, we're no longer at the point … Real empowerment is not sitting at the table and asking if I can have some more food; real empowerment is eating what I need to eat, right? And then real wisdom in that empowerment comes in saying, I'm going to eat what I need to eat, and I want other people to eat what they need to eat as well, right? And AI has given us a means to do that and to break down that digital divide, because most of the time the digital divide is around devices, right? And it's around the mechanics of it, or the hardware of it.
Like, do we have Wi-Fi, right? But if we focus just on the idea of AI, AI makes it so that I can do it from my phone, right? Which is a device that more people have than laptops. If you have a handheld, right, like a Steam Deck, you can do it. You can use AI from there. You can use your computer from there. The question about the digital divide comes back to hardware. Those are hardware issues. And AI allows us to reach well beyond those things, right? Not the hardware portion, but, well, it's ubiquitous and it doesn't care who I am. So there's this anonymity and this recognition, this weird paradox between those two things, that starts to happen. Right. And I think those are powerful for us to play with. I don't know where they're going, right? But I'm excited to see where they go.

Darren: Absolutely. Thank you, Bryan. Look, Michelle, I suppose that same question, you know, are you seeing a shift in those existing support structures and ones that foster and build learner agency?

Michelle: Yeah, I couldn't agree more with what Bryan just said. And I would also add that AI is literally everywhere now. I know it's a buzzword, and being a buzzword is not always a great thing, but students are able to leverage their assistive technologies. I can use Siri on my phone and ask a question, and information is going to pop up. And I think students of all dynamics have the opportunity to feel okay with utilizing it. I feel like they're moving forward with understanding that, hey, this is becoming a part of our everyday lives. This is becoming a part of how we're navigating. I think that question you asked, coupled with learner agency through, you know, supports and frameworks like Universal Design for Learning, helps us as educators be able to provide that control to the student, and help them and guide them: hey, what is it that you need at this time? How would you like to leverage this? What kind of options do you want to, you know, search for in this AI, or ask this AI to do? Right? I always think about it with my own children, and I know I keep referring back to that, but it's a way that we can relate. There got to a point where, you know, one of my kiddos who received special education services made it to high school, and as a mom, I started to freak out, like, oh my gosh, what's going to happen when my child doesn't have this amazing special education teacher walking with them hand in hand every step of the way to tell them, hey, take this note; hey, grab your calculator; hey, do this? As educators, as parents, as individuals, we have to empower our learners, our children, our educators to make those choices, to identify, sometimes through failing forward, right? Sometimes we're going to fail forward to identify what worked and what didn't work.
But we have to give them the power to recognize what it is that they're able to do, what it is that they need when they need it, and what options they have to engage in that. And I think the same is true with AI, right? We have to continue to empower our students to make those decisions, to know what those decisions can be, what the options are. Even if I, as an educator or as, you know, their primary point of contact, even if I don't agree with it, right? It's not my choice. It is my choice to provoke: I'm going to throw it all on the table, I'm going to make sure that they understand what it is, and they can then make those choices. Of course, I'll be there to guide them and support them in any way that I can, but it's ultimately up to us to help them make those decisions so that they can take control of their own learning path.

Bryan: What a crazy concept, like, think about this, right? Right now, UDL in a lot of ways, especially with students in K-12 or PK-12, UDL is kind of done to them even still, right? It's on the instructor, it's on the facilitator to figure out what the barriers are. But now, with AI as my assistant, I might go from, you know, Mr. Smith's classroom to Mrs. Jones' classroom. And Mr. Smith is, yo, he's deep in the UDL game, right? And we're actually, like, really self-actualized and we're learning all these things. But when I go to Mrs. Jones' room, she's a direct-instruction kind of teacher, and that's how she is. But there are still barriers. Now I can use AI, and carry it with me, with me and my device, right, to figure out what the barriers are in that class and the barriers to my own learning. You've just changed the whole landscape, you've changed the whole game, and it's not intrusive to Mrs. Jones, and it's not something that has to just come from Mr. Smith. I actually carry it with me, and I actually identify it. That is UDL 4.0 or 5.0, right? That is the next, next thing, right? And we have the potential for that. We just have to shape that, right? And we have to grow it, and we have to shape it, and we have to move it, and, you know, we have to be responsive to it. That's a mind-blower.

Elizabeth: So true. And it really makes me think about the fact that we do have a digital divide, and I think now with AI, it's not even just a digital divide. I think I've said it before: it's like an AI digital divide. So even if you've got the digital access, you may not necessarily have the AI literacy, or have been exposed to the AI, to have developed the skills that perhaps your peers have. That makes me think: considering how AI can be an enabler of learner agency, where can UDL help to address those inequities?

Michelle: So when I think about Universal Design for Learning, and honestly, it doesn't matter which version I'm thinking about, I'm just thinking about the mindset of being proactive, identifying barriers, and designing better so that I can remove those barriers. And I think about AI as a resource, right? It's an assistive resource. So, as I am providing that learner agency, as I am proactively designing for those barriers, I know that that digital divide is going to be an AI divide, even. So I need to make sure that, one, as an educator, I'm providing opportunities for students to have the option to leverage that in the environment where they do have access to it. So if it's in my classroom, I'm going to make sure that I make that available. But I'm also going to really think through how I'm going to be providing instruction: how am I going to allow them to perceive the same type of support within AI that they might get if they don't have that technology access? And so, I mean, AI is one tool. It is one resource. It is an amazing resource. It's an assistive resource. But there are also other ways that we can build learner agency by leveraging Universal Design for Learning, right? By proactively thinking about options that are purposeful, that are relevant, that have a purpose behind them, that are specific to removing barriers and supporting learners, that we can constantly bring into our learning, even with absolutely no technology, right? So when we're thinking about just pencil, paper, class discussions, environment, and how we're designing our class setup, all of these things support learner agency if we're designing with intention. Bryan, what do you think?

Bryan: I think you're right on it. To be honest, yo, we are teachers, we're educators, first and foremost, right? And sometimes you're gonna have to kick it old school. And that's okay, 'cause you're gonna have to dig into that bench that you've built, which should be wide, and which is all of those things that you have accumulated over the years, right? There is no unitasker in education. There just isn't, right? And it just doesn't make sense that there would be. And so AI is not the solution to every issue. And to be honest with you, as I said before, AI is an accelerator of the digital divide that we have. It also highlights that we have that digital divide, but that divide has existed in many different forms: whether that divide is access, whether that divide is technology, whether that divide is curriculum or funding, that divide exists. And to be honest, it's hard to solve that problem. I wish I had the solution, again, and you could come to my follow-up arena tour if I had the solution to the digital divide, right? Like, there are things that we can do to make that more equitable. However, that divide keeps existing, and the more that we empower students, and the more that we build structures that accept that empowerment of students, right, and change the view of who's in control and what control is, the more we'll move towards fundamentally changing that divide, right?

But sometimes you are going to have to rely on those old-school skills that you have. And guess what? That's great. You should have them. The question about UDL is never, is this a UDL technique or is this a UDL strategy? It's more like: I do this thing and I do it really well, but am I doing it intentionally? Or is it just something that comes to me, right? If I do it really intentionally in my really great lessons, am I doing it in the lessons where I'm struggling, right? Does this strategy work? Do I know how it works, in a situation which is perfect or a situation that is poor? That's that old school. And that's why AI is not going to replace any teachers, right? Because you have to be able to innovate in the moment. That is part of our craft. That is what makes us artisans, right? That we can innovate in the moment and we can switch things. And AI's gonna have a hard time doing that.

Darren: Look, I love that response, Bryan. It's so true, and it probably leads directly into this next question. How important are those communities of practice and opportunities to share and compare individual UDL practice? And do you have any resources or ideas that you could suggest for people wanting to know more and to leverage other people's knowledge?

Michelle: Yes, the best part, right? Really thinking about how we can meet each other where we are and learn and grow from there. And so I would say, yes, there are lots of resources out there. I'm really, really proud to say that CAST has a couple of things in the mix, and Bryan can probably talk better about the AI mini course that's being developed. But there are also some articles that have been written, not only with CAST members but with partners from the field, on, you know, AI. There's one in particular I'm thinking about, titled Five Ways AI and UDL Work Better Together, and it gives some really great information, great ideas, great aha moments that anybody can take just a little piece of and digest with whatever amount of time you might have. But I would also say communities of practice, y'all, we are a network. We offer a CAST Cafe, which Elizabeth is at every single month, and it's a great way to learn about new resources and tools. So if you're interested in that, we definitely invite you to visit the CAST website at cast.org, and it's free, right? It's free, my favorite word in the world, and you can join us just to have some good old conversations and share these ideas together. But I'm really excited to pass it over to Bryan. Bryan, why don't you talk a little bit about the AI mini course that's being developed?

Bryan: No, I refuse to. I don't want to talk about it. No, I'm kidding. That's just me being oppositionally defiant, I guess. Yeah, we're building out a mini course. We're actually building out a whole suite of AI and UDL together. We're building out in-person trainings. We're building out a full-length, extensive course, with a yet-to-be-determined number of modules, that looks at UDL fundamentally and AI fundamentally and how those two are bridged together, even things like how the UDL principles map onto a prompt system, or whether we should use large language models or small language models in determining things. So we're building that out, but the one that we will have out probably first is our mini course, which goes over some basic AI stuff, but then gets down deep into it and says, here are some really practical strategies that you can use in your classroom: to build AI literacy, to build more effective systems using and employing AI, to evaluate AI outputs from students with a UDL lens, with an agency lens, with a bias lens, all of those different pieces. It's a little two-module system. It's about three hours, because we want it to be strong. We wanted it to have some basis to it. But we also know that everybody's life is busy, right? So if you do an hour a week, you know, I don't want to sound like an Instagram ad or a social media ad, but it will transform your AI use and give you some AI consciousness.

So, yeah, those are a couple of them. But there are tons of communities of practice out there. We at CAST are building our Mighty Networks system; we have a UDL community, and there are lots of different small groups within that community. It's all done through Mighty Networks. But there are other ones too. One I was just checking out the other day that I thought was dope was Future Learning. Future Learning's community of practice is dope: it's a bunch of practitioners getting together and talking about AI usage. And if you are looking for your own community of practice, start one, because there are lots of people, right? Whether it be in your own department, if you're in higher ed, or in secondary, or your grade level, or whatever it is, there is a place to start it. And the conversation can be real simple, like: what is this thing? What do we know, and where do we want to get? And I would caution you to always ask: what's the human side of this? Not just how does it help me make better lesson plans, right? But what is the human side of this? And what is the human capital that I need to put into this? And what's at risk there, or what's not at risk there? What's the benefit there?

We also have a couple of papers, like Michelle said, and some other resources we're going to build for teachers, top 10 lists, things like that, that really embody some UDL within them as well.

Elizabeth: So much to explore, and we'll be sure to put links to some of those into the show notes so you can go and explore them yourself. I'd also love to give a quick shout-out to some really fantastic work by Michelle Soriano and Kelly Suting, who's not joining us today but hopefully may do in the future. They've done some really fantastic quick learns on accessibility. So if you still feel like you're trying to get your mind around how to increase accessibility, go and check out their quick learns. They are absolutely fantastic. But returning to Bryan's point about the human in the discussion and that human element, something we really like to bring to this podcast: to think about, even in this AI world, where does the human fit? So just as a final question, I'd love to know what sort of safeguards we need when we're working with AI, or having AI integrated into systems, to make sure that we're not getting those reinforcing biases, whether it's in assessment or content delivery. How can we reduce those biases, and how can UDL principles really help us to use AI ethically?

Bryan: I know that I'm going to sound like a broken record. Literacy, literacy, literacy, literacy, right? Like, we missed the boat in so many different ways when it came to digital citizenship. We missed it, we brought our literacy in a little too late, and everything sped up. Now we've come back to that, whether it be, you know, the six C's or whatever literacy we do around computer science, around understanding. But we missed the boat in the beginning on really forging where it could have gone, and then we had to come back, and now we're back in that position of forging. So literacy is huge to me. I think reflection, too. I don't want to tell you specific policies; your policies in your learning institution, or in your state, are yours, and they fit your people the best way. But reflection, and reflective use of AI, and reflection on how we're instituting it: those things, to me, are the most important safeguards moving forward.

Michelle: I love that. And I would just add to that, you know, co-design with diverse voices and co-reflect with diverse lenses. Right. Because there are going to be biases; we just want to bring them all to the table. We want to make sure that they're all laid out and we're seeing different perspectives on things, and, you know, keeping that human in the loop, where you have AI, but maybe every now and then there's a human in the loop that's going to review it and talk about the biases that maybe you're bringing to the table. And of course, those who are brilliant enough to design AI, not me, right? I'm a user of AI. But when you're thinking about the data behind it, make sure that you're bringing diverse data pieces to the table as well, right? So I think it's just thinking about intentionality in how you're co-designing and how you're reflecting and how you're reviewing, and then train, train, train, train, train, right? If we can train everybody... I'm learning something on the daily, and I welcome it, right? And I'm going to continuously learn, because it's going to continue to evolve. So, yeah, I think those are some of the things that we can just keep at the forefront.

Bryan: I love that, Michelle. Yo, again, you just blow my mind all the time. That's why, thank you so much. It is not just who we're thinking about in the AI equation and their perspective, but it's also who's writing the AI, right? And who's building it, right? It's not just our stories, but who's telling the story, and how are they telling the story, right? And are we offering different and diverse voices into that, right? That's how we start to minimize more and more of that bias: we include more and more voices. And, Michelle, I think that's what you hit on the head there. I love it.

Elizabeth: Just to summarize, I think you've really set us a beautiful path for the future: reflection and co-design as ways to help us make the best use of AI in an accessible and UDL way. Reflection and co-design, keywords for the day.

Darren: Absolutely. I couldn't second that more. Look, I'll just quickly touch on a recent session I was at, on the topic of writing those prompts. Somebody asked somebody senior, why do you put please and thank you into your prompts, when all this does is waste tokens and help destroy the environment quicker, and it doesn't change anything? And the response was: I'd like to practice what I'd like to see out there in the world. I don't want to lose the ability to say please and thank you; these are the human things we should be keeping and not losing in this system. And, you know, that was probably the biggest takeaway for me from that session: just to hear somebody high up in an institution say, regardless of whether this is taking tokens, we shouldn't lose that little bit of humanity that we've got, and we should practice it, and we should demonstrate it, and we should teach it. And it's like, absolutely.

So, look, which is exactly what you've given us today. You've both spent some time with us to talk through, you know, a very complex and challenging area for a lot of people, but we can clearly see how all of these things relate to hopefully a better future for educators and for students. And these tools are here to, you know, help and assist us, along with the work that CAST does and the UDL guidelines that are there. So I'd just like to say thank you to both of you for joining us today on ILOTA Things. Really do appreciate it.

Bryan: Thank you so much. This was a great time. What are you guys doing next week? You guys want to chat some more? I love it.

Michelle: Let's schedule it. This was awesome. Thank you so much. Count me in.

Elizabeth: And I think, yeah, everyone engaging with this podcast would be cheering, going, Yes, please. We would love another version of this to happen every week. And I think just a huge thank you to CAST overall for what they've done bringing UDL to the world. And a huge thank you to Michelle, to Bryan, and also Kelly, who I mentioned with the quick learns. For the three of you individually, what you bring to the world in supporting educators to really make these spaces as supportive for students as possible, those environments where students can thrive, your impact is incredible. So thank you so much for all of the energy that you bring and especially what you brought to us today. There is so much more that we could discuss, but unfortunately for this episode, that is our time for today. I could keep going for the rest of the day, but I probably need to let you have a rest. So for anyone who is engaging with this podcast, please know you can get in touch with us at any time through feedback@ilotathings.com or you can visit the website www.adcet.edu.au/ilotathings for more details, and we will have links to those different resources that Michelle and Bryan have mentioned as well. Thank you so much for listening, and we hope you can join us next episode as we continue to explore ILOTA Things. Till then, take care and keep on learning. And as CAST says….

Bryan and Michelle: Until learning has no limits.

Darren: Bye.

Elizabeth: Bye.

Bryan: Later.

Announcer: Thank you for listening to this podcast brought to you by the Australian Disability Clearinghouse on Education and Training. For further information on universal design for learning and supporting students through inclusive practices, please visit the ADCET website. ADCET is committed to the self-determination of First Nations people and acknowledges the Palawa and Pakana peoples of Lutruwita upon whose lands ADCET is hosted. We also acknowledge the traditional custodians of all the lands across Australia and globally from wherever you may be listening to this podcast, and pay our deep respect to Elders past, present and emerging, and recognise that education and the sharing of knowledge has taken place on traditional lands for thousands of years.