ADCET

ILOTA Things: Episode 11 - Responsible Use of AI - Who's Responsible?

Darren Britten, Elizabeth Hitches, Joe Houghton Season 1 Episode 11

Welcome to ILOTA Things, the ADCET podcast where we explore Inclusive Learning Opportunities through AI. In this episode, titled Responsible Use of AI - Who's Responsible?, we're going to take a look at the key questions around accountability and responsibility when it comes to teaching, using and implementing AI.

More information, including episode notes and links, is available on the ADCET website.

Announcer: Welcome to ILOTA Things, the ADCET podcast where we explore Inclusive Learning Opportunities through AI. In this series, we'll explore the exciting convergence of universal design for learning, UDL, artificial intelligence, AI, and accessibility, and examine ways in which we can utilise emerging technologies to enhance learning opportunities for educational designers, educators, and students. Now, here are your hosts, Darren, Elizabeth, and Joe. 

Darren: Hello and welcome from whenever, wherever and however you're joining us and thank you for your time once again as we investigate ILOTA Things, that is Inclusive Learning Opportunities Through AI. My name is Darren Britten and joining me once again on our Artificial intelligence, Universal design and accessibility Magic Carpet Ride are my co-hosts, Joe Houghton,

Joe: Hi from Dublin.

Darren: and Elizabeth Hitches,

Elizabeth: Hi there.

Darren: Today's episode is titled Responsible Use of AI - Who's Responsible?, where we're going to take a look at the role of AI in education and how it impacts both staff and students. Now, as AI-powered tools become more common in the classroom for a wide variety of uses, from assisting students with their learning and helping staff with their curriculum, and even grading, through to enterprise and administrative systems, questions around accountability, responsibility and ethics are more important than ever. So, who is responsible when AI-driven systems make mistakes such as biased grading or incorrect recommendations or information? How do we ensure that AI supports educators rather than replaces them? What safeguards are needed to protect student data, privacy and institutional IP? How are we instructing students on how to use AI responsibly? Are we excluding students who could benefit from using AI? Who has access, who does not? The list goes on. So how do we build this environment so we can collectively work with AI and not have AI do the work for us? Now, this is a big, big question and it's very topical, but I'll kick this off by throwing to you, Elizabeth, because I know this is an area that you've been looking into. So, I'm going to make you the responsible one here to get us started.

Elizabeth: Thanks so much, Darren. So I'm going to start off thinking about the responsible use of AI by our students, because I know that academic integrity is still very much front of mind for many institutions, many educators and even many students. So I know we've had some students who have been called into meetings and been accused of using AI when they haven't, or accused of using it irresponsibly when they haven't. And more and more policies are coming into place around how students should use it, or, yeah, even having a lack of policy around ethical use. So there are still many gaps, many things to consider, but something really interesting that I came across was a study that was published in 2024, so this is a year old, and there could be much more literature in this space since. This particular study was asking, "Is it harmful or helpful? Examining the causes and consequences of generative AI usage among university students". Now, the reason I think this is a really interesting study is because these particular researchers were looking at the literature and realised that we actually don't have a lot of research exploring how students are using AI and also what the impact is for them academically and personally. So they went out to investigate that, and I think that's really fascinating. Now, what they found is that when students have a higher workload or when they have more time pressure, students were more likely to use generative AI for their academic tasks. So more likely to use it if they were feeling overburdened and really trying to cope with that workload. Now, interestingly, on the opposite side, those students who didn't use AI or were less likely to use it were found to be more sensitive about their grades, so more afraid that using AI might potentially impact their grades.
Now, in terms of the impact of that AI use, what they found is that those students who were using it excessively, so those students who were experiencing high academic workload and that time pressure, with really excessive use of AI, were actually seen to be more likely to procrastinate, more likely to report memory loss, and the stress with that academic workload and time pressure that was prompting that AI use was then also seen to impact their academic performance. So I think what this really shows us is what Dr. Tom Tobin mentioned in a previous podcast episode about really thinking about the responsible use of AI. AI in place of the student is not a good use case. So students using AI excessively just to cope with workloads and time pressure is not going to be a great use case for AI, but instead AI complementing the student in the learning journey, that can be a very positive use case.

So that really kicks off the question today of how do we encourage that responsible use of AI by our students? There can be so many benefits. How do we make sure that we're getting those benefits and not getting some of those negative impacts? Now, a part of it could be considering those manageable workloads for students. So what are we doing in terms of our learning design? We have this really great aspect of our UDL 3.0 guidelines, that engagement principle, and consideration 8.2 specifically says we want to be considering how we optimise those challenges and how we optimise that support. So how are we giving those meaningful, appropriate challenges in learning and not pushing students to the extent where they are overburdened, time pressured, time poor and having to resort to other ways to get through that learning? We can also think about how we support some of those skills for students in terms of workload and time management. So we have the action and expression principle, and we have guidelines around supporting strategy development. There are various different aspects to that strategy development. We can reflect on those and think about how we develop students' skill sets in that space.

The other thing that really stood out to me, and I'm going to use a direct quote in a moment, is that these researchers found that policymakers and educators should "design curricula and teaching strategies that engage students' natural curiosity and passion for learning". So they're saying, as a result of these findings, this is what we need to be doing. And to me that sounds so much like the good old nurture joy and play that is the new addition to our UDL 3.0 guidelines. So perhaps from this study we have three really key, tangible things we can consider for responsible use. How do we design for manageable workloads for students? How do we support students to manage that time and workload? And how do we build that joy and play into the learning, so that the learning itself is meaningful and we don't have students jumping to AI in a negative use case to do the work in place of them rather than as a complement? So over to you, Joe, because I know you've had many cohorts of students now thinking about AI use in that classroom space. So I think it'll be great to hear your perspective on this.

Joe: And it's funny because earlier on this week I had a new cohort of students and, you know, as I do in the first couple of sessions of any module with a new cohort now, I did a session on AI. So we did two and a half hours on AI, showing them the tools, talking about ethical use of AI, you know, linking plagiarism and academic assessment and the fact that they still need to cite. And I don't know that there's a standard at the moment, but at the moment I'm telling my students: treat something that comes out of ChatGPT or Claude as if it was a paper, and if you want to copy a few lines into your piece, that's absolutely fine, but then put a reference in and say, you know, ChatGPT, the model version, the access date and then the prompt. And I think that gives you a reasonable kind of reference back, and the student is showing that they've used AI. And increasingly now what I'm also doing is asking them to italicise any AI-quoted material, because as Google and Apple build AI into their tool sets and their user interfaces, this seems to be an emerging standard: AI text is shown in italics. So I've adopted that with my students as well. But I'm going to come at the topic area that we're discussing today from a slightly different perspective.

So what you're talking about, Elizabeth, is student-centred. You know, it's all about the student, it's all about how does the student think about AI, how do we communicate AI responsibility and all the rest of it to the student. Next week I'm about to start teaching a new module that I've never taught before, so I've been preparing quite a bit of material and reading around responsible use of AI. It's a seven-session, two hours per session module on AI. So I've been thinking about this, but at a more macro level. In Europe, where I'm based in Ireland, there is a thing called the EU AI Act, which is restricting to some extent the use of AI in certain areas, and education is seen as a high-risk area, because the implications of the effect of AI on students, whether those are postgrads right down to, you know, kindergarten, are potentially harmful if it's giving the wrong answers, if it's, you know, stopping them learning creative and critical analysis and all the rest of it. So I think at a more macro level, at an institutional level, but also at the level of the staff in the school or whatever, how do we communicate the fundamentals of responsible use of AI? The EU AI Act takes a risk-based approach to this. It says there's unacceptable risk from AI, there's high risk, there's limited risk and there's minimal risk. And if we look at Leon Furze's work with Mike Perkins on AI in assessment, it started out as a traffic light system and now I think it's a five-level scale. And it's a similar kind of thing: where is AI quite dangerous, where have we got to be really careful with it, when can we use it and at what level can we use it?

The EU AI Act now mandates that every school, every educational institution, has to have trained its staff in the use of AI under four headings, one of which is Ethical Use of AI. And this is like, you have to do this. Now, they've given schools about six months, so everybody should have trained all their staff by now. Nobody has, but they're going to start imposing penalties on the 25th of August. So we've got one semester and the summer break and then they'll start imposing penalties. So from a high-level, outward perspective, certainly in Europe, what we're seeing is a focus on this, a focus that this is important, that we have to move the dialogue, that just inertia and hoping that individual teachers and staff will figure this out by themselves is not good enough. And I think that's a positive thing, and it'll be interesting to see whether it goes any further than Europe. So, Elizabeth, you've got your hand up. What's that sparked?

Elizabeth: I actually have a paper in front of me right now and it's called A Meta Systematic Review of Artificial Intelligence in Higher Education: A Call for Increased Ethics, Collaboration and Rigour. And what you just mentioned about the need for upskilling, those time frames for upskilling and trying to really develop that knowledge base, is so interesting when I look at the results of this particular study. So for anyone who hasn't come across a systematic review before, that's where you look at particular search terms, you look at a particular time frame and you review every single article that comes up in that search. You know, some of it's quite brief, title and abstract, is it relevant, is it not? It's a very large, very rigorous process. This is a meta systematic review, so it's taken all of those reviews and then done a review of those reviews. So you can imagine there are hundreds of studies included in this, so it's a really great overview of the research base.

So what I found really interesting is that this particular study has identified particular gaps in our research. And if we think about that need for evidence-based practice, evidence-based action, they're pointing to particular gaps in the research around the ethics of AI use in higher education, also looking at the methodology of those particular studies, and also at contextual considerations, areas where for particular contexts there's very little research. So when we think about upskilling people around the ethics of AI, you can imagine how many gaps there might be to fill if even the research has these huge gaps that have been identified. So, yeah, there's a lot of work to be done in this space, so we're going to keep watching that space and I'll drop that other paper into the show notes as well if anyone wants to take a deeper dive.

Joe: Yeah, well, I'd certainly like to look at it as well because it'll probably feed into what I'm talking about. And I mean, it's amazing, as I've been doing my research, the amount of stuff that's been published even in the last two or three months by, you know, the European Commission and various EU bodies, UNESCO, people like this. So there's a lot of material out there and I'll send Darren a few of the links to those things to put in the show notes as well. I mean, breaking down this topic of responsibility in AI, what I've done is structure sessions two through seven into topic areas. So in the first half of each session we're going to look at a tool, you know, whether it's Gemini or whether it's ChatGPT or whatever. Nice, practical, hands-on: go play with this tool. Let's talk about the tool, let's talk about what the tool does with data. Does it automatically share the data into the training set, like ChatGPT does unless you turn things off? Or is it like Claude, which has kind of been built a little more ethically and isn't going to share your data? Where's the data stored? OpenAI give companies and educational institutions the option to have all the data warehoused in Europe, and that's obviously looking at the EU AI Act and stuff like that. And I suspect we'll get, you know, the same thing for the U.S. given all that's happening over there.

So fairness and non-discrimination we're going to look at as one session. Transparency and explainability would be a second session. Privacy and data protection, accountability and responsibility, and then governance and bias. And these are big terms, you know, and I can see, perhaps from the meta analysis as well, this might seem a little bit like, okay, this is a bit beyond me as an individual teacher or as an individual student. But as a teacher and as an institution of learning and education, we have to think about these things. I'm on my university's AI steering group and we're looking at this stuff, and when you're making decisions about the tools that you're going to use, when you're making decisions about the infrastructure that you're going to use, when you're making decisions about how we're going to train our teachers and then perhaps train our students, you know, what messages are we sending about things like bias, about things like data protection and stuff like that? Why have we chosen this tool? Because it has better data protection, because it doesn't share data, because, you know, whatever. So these are things that have got to be unpacked and explained. But it's interesting, because I'm looking at that structure of the sessions and then I'm going back to, where's the joy? I've got these two-hour sessions to do: the first hour will be fun, it'll be working in tools. The second hour will be the students going off and researching each of those topics that I've just discussed. But how do I bring the joy into that? And that's one of my questions for you two. How do we make this exciting for the students? How do we get them engaged in this stuff rather than it being dry and boring?

Darren: That's a really good point that you bring up there, Joe, and that is, you know, joy can also be found in discoverability as well, in learning something new, in challenging yourself and going, oh, I didn't know it could do that. You know, I'm still astounded, and I continue to be, working with students with disability and showing them some of the AI tools that assist with their learning and preparation of their material. Just the walls, the barriers that can come down by them using some AI, and their ability to rephrase, restructure. We've been over these things before, and just this, oh my God, I can engage with this. I heard from a student just a couple of weeks ago, and we were going through how Google NotebookLM now has that talk-back feature in there, so you can interrupt the virtual deep dive and have a discussion with it. And it was turning the notes, the transcript of the lecture, which they'd otherwise spend a lot of time editing, into conversational bits, because a lecture is delivered one way, very linear, and a transcript isn't necessarily like that, it's fragmented, doesn't have good grammar, et cetera. And having that turned into a conversation, they just completely lit up: oh my God, it's people talking. It's not me rehearing what the lecturer said, the way the lecturer said it. It's turned it into a conversation, into a Q and A, into a back and forth that gives me a different angle. So that discovery, and I've mentioned that before from some students, the joy just in that, I didn't know it could do this and this is a game changer for me, I'm engaging in a different way with the content. So we've spoken of that before, there's more than one way.

While on the student topic, that fear of AI and academic integrity, I hear it probably more so from students with a disability. What if they get called on academic integrity? With AI and assistive technologies there's that dilemma there, where does that sit? How do we ensure students who use assistive technologies as part of their reasonable adjustments are not denied access to the assistive tech because it's got AI in it? And we're seeing that across the spectrum here in Australia as well. We've had some students denied technology they'd used before because it's now got AI built in, even though it's doing nothing that breaches academic integrity.

Joe: Do you think that's going to change, Darren, with what we've seen over the last month or two? I mean, Microsoft is kind of changing its plans and Copilot is just going to be built in now as these things become pervasive in our toolset. You know, Apple Intelligence is now appearing on my phone with the latest release and stuff like that. So we've got AI now, whether we like it or not, it's in there and you'll be using it whether you know it or not, to a large extent. So do you think that's going to pervade through to, you know, a lot of the accessibility stuff that previously has been, you know, I've got to go and get a tool to do this?

Darren: Look, it's coming into, you know, many of the learning management systems, into many of the other teaching tools that we've got that are doing those things. But this is again one of those situations where the technology is moving faster than policy. Institutions are very slow to get that policy out there. You know, universities have been really good at implementing assistive technologies and reasonable adjustments, screen readers, speech-to-text tools, alternate formats, but now, because all of these are integrating AI, it's all getting thrown into the too messy, too hard basket. We're waiting to see if there's policy, we're waiting to see direction from government or from larger teaching organisations or centres, et cetera, to give us some direction. And you were talking about giving staff that time to play and learn these new technologies, which is something we haven't done very well in Australia anyway, teaching staff how to use even the teaching and learning technologies we already have. It's: there it is, go play, it's pretty straightforward. But a lot of this isn't, when we're considering the ethical or responsible usage of these things. And how do we get others to use it that way, when they haven't been taught how to use it in these ways either?

The other concern that I have with many of the AI tools is access and pricing. There's a, you know, who's responsible there, which tools do we recommend to students? What if it's outside their price range or affordability, or is it even accessible? Many of the tools aren't accessible to screen reader technologies and don't meet current web standards because of the rapid speed of development. I mean, certainly Microsoft and some of the others are doing a good job in that space, but a lot of tools certainly aren't. They're rapidly getting developed by little software firms that are all jumping in, trying to cash in: look, it's got added AI, it's got these things in there. And that can be daunting again for many neurodivergent students, that there's just so many options, you know. So who's telling them which are the good options, which are the ones we support, as you were saying, as an institution, which are the ones that we can help you with?

But you touched on a really important one, and that's giving, you know, teaching staff the time and space to learn AI. Is six months enough, in that EU AI Act kind of time frame? Have we ever been able to get staff to turn around culturally and all of that in a six-month time frame? Universities are big beasts when it comes to changing the way that they function, along with workloads that are already pretty much stretched. How do we add the new things in there that we're expecting? And let's be honest, many educators are just as confused about AI as the students are. They're expected to enforce new policies on AI, but again, they haven't been given the time or the training to really understand what they have to enforce. If we want staff to support students effectively, then they will need that hands-on experience with these tools as well, and also in the context of accessibility, so we're showing the tools being used in ways that can create accessible content. Many of these tools can also exclude as well: this tool is just going to help me colour code everything and do this, but am I considering colour contrast, am I considering all of these things? I've got a hundred more questions there, but Joe, while we're on that, how do we get teaching staff to demonstrate the responsible use of AI? Is it carrot or stick? Is it give them time, give them space?

Joe: I think there's going to have to be a bit of carrot and stick. More stick than carrot, to be honest, in the short term. I mean, the EU AI Act is mandating, right? We have to train our staff. So that's a push, that's a stick saying, you know, this has got to happen, otherwise we're going to impose sanctions on you and penalties and all the rest of it. And I think that's necessary to overcome the institutional inertia. You know, this isn't something you can ignore, like GDPR a few years ago wasn't something that anybody could ignore. We have to do it. So I think that's a good thing, because I think this is one of those disruptive technologies that won't wait, if you like. Now, in practice, how does it work? Well, I mean, I'm already on a weekly basis training educational institutions, groups of educators, companies on AI, and that type of training started as just showing them: what does ChatGPT do? What does Perplexity do? Whatever. And we're moving on now. One of my newsletter blog posts is called From Wow to How. The last couple of years have been wow, we've seen these AIs and it's been oh my goodness, look what it can do. Oh wow. And now the wow factor is wearing off to some extent, although I keep getting wows when they bring out new stuff. I'm such a nerd. But what a lot of institutions and companies are moving to now is: okay, we've seen the wow, this is amazing. How do I actually make this work? How do we use this stuff to get more sales, to educate our students better, to introduce more efficiencies or whatever? So those are the questions now, and what you're starting to see, and what we're going to see a lot of in 2025 and 2026, is the tools becoming more polished and able to do more things.
This use of agents, the deep research tools that are coming out now, are going to be amazing for education and for research in particular. But I think we're moving to this space of personal demonstration. To some extent CBT, computer-based training, is okay, but it seems that if you can sit on a Zoom call or in a room with somebody who's showing you the tool, and then you get to go play with it and you ask questions while you're doing it and then you have a discussion in the room, that seems to be the best way that I can see. But that's time intensive, because it means the trainer has to be there for those hours. You can do some of it remotely, you know, record a session, but that's passive consumption. I think the interaction, the "I don't understand what this bit does" or "I'm blocked here" or whatever, that is really, really important. Good old-fashioned teaching and training is, I think, what's important. If we don't do this, our teachers are going to turn into stressed students. And I mean, that's your field, Elizabeth, so how do we deal with that? Because we've got our student population, but we've got our teacher population, our educator population as well, and they're both students in this area, aren't they?

Elizabeth: Oh, true. Something that's always really interested me is that there's an inverse relationship between stress and self-efficacy. So self-efficacy is like your confidence or your self-belief in your ability to accomplish a particular thing. So if I was really stressed about my capabilities with AI, you'd probably expect me, and I haven't got research to back this particular note up, but you'd probably expect me to have lower levels of confidence in my ability to use it. Those who had higher levels of confidence, you'd expect them to be less stressed. So it makes me think maybe there's something in that typical stress and self-efficacy inverse relationship that we can leverage. How do we create that kind of training that builds people's confidence and makes that learning journey much less overwhelming? Bring some of that joy back.

Joe: You've got to put the tools in front of them and maybe show them. Right. Here's how you use it. Now go play and then ask questions as you play kind of thing.

Elizabeth: And do that work to build confidence. And I think you're right in that hands-on experience and seeing others doing this, because I think if you watch someone else use AI in a really effective way, then you're more likely to think, okay, well, if they can do it that way, then now I can see what it looks like for me to use it that way. I think it's very hard to do if you haven't seen it.

Joe: Teachers teaching other teachers and kind of almost, you know, kind of one teaches four and then those four teach another four each. 

Elizabeth: Pyramid scheme. 

Joe: Yeah, the kind of pyramid scheme, 

Elizabeth: A positive pyramid scheme. 

Joe: Yeah. But I think with teachers teaching each other, you know, there are lower barriers, because we're all teachers and we're doing the same kind of thing. So I can say, here's how I put a lesson plan together or whatever using AI, and then, oh yeah, right, okay, so I can see that. And those connections are made. A lot of institutions have dozens, hundreds, even thousands of educators, and it's very difficult for one or two people to do that, isn't it? Your learning team, or whether it's the assistive technology team or whatever. And I mean, we had a meeting of the steering group in my university and this was one of the questions: where does this sit? Who's going to do this training? And that literally came up as a question in the meeting. Who owns this? Is it IT? Is it teaching and learning? Is it a bit of both? And I mean, if we've got fundamental questions like that slowing everything down, it's just, it's scary.

Darren: Well, let me take a tangent here: the responsibility for the actual tools themselves. We've seen lots of instances of these tools, as even Elizabeth pointed out, calling students on false positives, or asking them to explain, or we saw you move off camera during your exam, you must be cheating. And AI has gone down that path even with a human sitting there. Some institutions are really keen to quickly adopt those tools to help with some of that workload, but there's also all the tools that we don't see, the AI that we don't see. You know, what about the AI that we interact with that's just built into those systems? Who's responsible for those? If we build an agent, or we engage, you know, an AI agent for students or staff to interact with, who's responsible for that agent? Who can students or staff contact if they have a query or question and that information's wrong? If it was a staff member giving them bad advice, there's a channel for that to go through. If the advice was completely bad and the student acted on that advice, they can go, well, we've got a record of that advice. And who told you that? Well, the online chatbot did. Oh, well, that's just the chatbot. So AI isn't just the chatbots and auto captioning, though that's happening. There are, you know, AI-driven systems working in the background of universities across the board. As you said, Microsoft's just implementing this as opt-out. There'll be a whole range of things where, you know, the data's getting used: our payroll systems, our HR systems, all of these will be using AI in some way. There's AI deciding admissions, flagging plagiarism, as we mentioned, predicting which students might struggle academically, making calls on student data when they've submitted things: oh, we're flagging this student as needing some intervention or for somebody to reach out. Who's responsible when this goes wrong?
And there are a lot more examples of this going wrong than not. We can't just hand it over. So if a student gets incorrect information, or the system judges somebody for something they didn't do, who's accountable for that? We need accountability in place before these things get way too embedded, and I don't think we've done enough in that space; we've been focused on the academic integrity side of things. But if it doesn't work, what are we doing about it then?

Joe: And this is a completely different audience to what we've been talking about in the podcast, which is like the basics, you know, the students at the bottom end of the educational pyramid, if you like, the recipients of the institutional process. What you're talking about is the decision makers, isn't it? And the decisions that they're making at quite a high level. And that's another thing that I don't think has quite been realised yet, this stuff about responsibility and accountability and bias and all the rest of it. Those decisions are made way up high in the institution, but there seems to be very little focus on, right, how do we train these people to make these decisions correctly? How do we train them to even ask the right questions?

Darren: Vendors are really good at showing the potential, and how this will save institutions money by using their system, their thing. And a lot of leadership decisions are certainly made around that. How do we make our institutions run more efficiently? Are there any tools that can help with that? But we don't have a sales pitch team coming from the staff, coming from the other angle, or from the students saying, this is how this can help.

Look, there are the usual AI concerns: data privacy, intellectual property, and what we put into these tools. And even for students, that responsible use you mentioned as part of your lesson plan as well, Joe. So if students are putting some content in there, whose content are they putting in? Who owns that content? Where's that content going? Because it's becoming ubiquitous, it's just, any information, I'll put it in. Well, hang on, you can't just do that, because where did you get this paper from? And I don't think that's been answered. There are a lot of tools that will take a PDF, a paper, a book chapter, and give you a summary of it. Well, you've now made a communication, a copy of that, and that may be a breach of your Copyright Act. And it depends on the act; it's different in different countries. If I use a paper from the US that's under a different copyright law, and I'm putting it into a tool here that's keeping that data, am I now in breach, or is my institution in breach because they allowed this to happen? And look, my last question that I'll throw into this, because I'm maybe just muddying the water: how much responsibility are we giving to AI? How much are we prepared to hand over and say, that's AI's job now? There's a real temptation to let AI go and do things. They say it will help automate things, help do auto grading, help check cameras from invigilated exams, help with course recommendations. The LMS tools that are here say, this will create the practice quiz for you, but half of the questions aren't even on the topic I was talking about. And nobody's checking these.
So look, universities are spaces for human learning and human interaction and AI can be an incredible support system, but it should never replace that critical thinking judgement that, you know, educators and students can bring to the table in that space. So, you know, I think there's still some lines to be drawn there.

Joe: Yeah, that goes back to what Elizabeth said from Tom, doesn't it? You know, AI complementing the student or the teacher is the thing we're looking for. It's not in place of the student or the teacher. Yeah.

Darren: So back to, I suppose, the original title, and I might throw this to you, Elizabeth: who's responsible for showing somebody else responsible use?

Elizabeth: Absolutely. And you think about where our systems have developed from, where the foundation of our education really comes from. That's meant to be evidence of some form, whether quantitative or qualitative, some sort of evidence base that drives those recommendations and decisions. And as the research is currently showing, there are many gaps in that research field at the moment. So we haven't got that really comprehensive set of understanding and recommendations flowing through to inform policy, to inform practice, to inform the normalisation of everyday ethical, quality use of AI tools. And so really it stems back to getting that research underway so we have those evidence-based recommendations.

But in terms of who is responsible, it really is multilayered. So I think even with those gaps, we know enough about ethical use at that surface level to know that we need to really consider whose work we're feeding into the system, what work we're getting out and how it's attributed, and even whether those models have been developed in really ethical ways. We're getting more into thinking about sustainability and realising the impact. We want to be using AI appropriately, because we want to make sure that carbon footprint is not unnecessarily growing because we've AI'd our entire day when perhaps we only needed it to complement two tasks. How does each of us take that personal responsibility, and how do we do that upskilling and that confidence building so that we have the knowledge base to make those informed decisions? It's really complex, which is why it's great that we can be involved in conversations like this and upskill each other.

Joe: I got a question in the AI class on Monday from a student, and he said, you know, when I get an answer back from ChatGPT, I don't know where it came from. I can't attribute it because it's a black box. It just gives me the answer. And we've had quite a meta, high-level discussion today in a lot of senses, but just to finish off with a very practical suggestion: if you haven't yet played with some of the newer advanced research tools, the deep research tools, go and do so. Perplexity's got one now, Gemini's got one, ChatGPT's got one. These do a kind of deeper dive: they go off and run a series of different queries, almost creating a research plan of different queries, and then they link all the queries together, and what you get back tends to be a fuller, more comprehensive answer. But I think with all these deep research tools you also get the sources cited. The Gemini deep research one is very good, because at the end it gives you the cited works, with URLs for the 30 to 100 sources there. So you've got all the sources. And I think that's an improvement from an academic point of view, because now we can see where that line in the report came from, and it's got a little number pointing to the source, just like it should have in a properly cited academic paper. So if you've not played with those tools yet, go and have a look, because they can be a lot better when you're doing research for an assignment, or for a module that you're putting together, or for the next paper that you're going to publish, Elizabeth, you know, whatever.

Elizabeth: Yeah, being able to look at those citations matters, because sometimes you might see that it's referenced two pseudo-scientific websites when you've asked it about, you know, the nutritional qualities of garlic, just out of interest. You get some really interesting websites come up and you think, yeah, I may not trust those sources, I might find something else.

Joe: Yeah, some of the tools give you a little bit more flexibility in where you're pointing. So Perplexity has a little focus button, and one of the options there is academic, and then it uses academic sources rather than just web sources. ChatGPT lets you turn web search on or off now, and you can also say in the prompt, just use peer-reviewed journals or whatever, and that to some extent will steer the chatbot towards, you know, perhaps open access academic papers. So there are different ways of steering your searches, if you like.

Darren: One thing that certainly comes to the fore from our discussion today is: who's responsible? Everyone. There are more questions than answers still in this space, and we've only just scratched the surface in this episode. There are a few AI maturity models out there for institutions to use and look at; JISC has one, and there are ones for leaders to look at to see where their institution is at. But we're all still really new in this space, so we don't have that vast data set over 10, 20, 30 years that we can call back on and say, well, we know this works, as you were saying, Elizabeth; the evidence is there, we can rely on this moving forward. So there's a lot of play in this space, a lot of what can we find, what can we do? And you look at every conference: AI is there in some way, shape or form, regardless of the topic. AI has invaded those spaces. So how do we use AI in this discipline? How do we use it in that discipline? How do we use it in health sciences, et cetera? So look, we'll keep asking those questions, but we might wrap this to a close for today. Any last thoughts, Joe?

Joe: No, I think we just have to keep thinking about this, and I'm looking forward to this course, because one of the things I'm going to open with is: right, this is new, everybody, and we're on this co-created learning journey, and that goes straight back to UDL. You know, the co-creation of your learning journey is so important. So I'm going to put this in front of the students and say, I'm figuring this out as much as you are, everybody's figuring this out, let's go figure it out together. And hopefully that will bring some of the joy back, in that it's not a didactic, sage-on-the-stage approach. It's, okay, let's explore these topics as a class, go to your company and see what your company's doing, because these are financial services people. So yeah, I'm looking forward to it, and I will report back as the course develops and tell you what insights we're getting. So, I mean, go and play. We keep saying this at the end of every episode: we encourage you to go and explore these tools and these topics, and hopefully what we're discussing in the episodes is just giving you a jumping off point, a thinking point, to maybe go talk to your students, talk with your colleagues, think about this stuff, maybe read a couple of things and inform yourself about it more. Go try. We will put links to the papers and the tools and Elizabeth's meta review and all the rest of it in the show notes, along with text prompts and any images that are appropriate, and all that will be on the ADCET website www.adcet.edu.au/ilotathings.

Elizabeth: And of course, we would absolutely love to hear from you. We want this to be a conversation between us, and you are included in that us. So please do reach out, and if there are things that you're doing to really support your students to use AI reasonably, responsibly and ethically, we'd love to hear it, and let us know if you're happy for us to share that with our listeners as well. So do reach out, and if you have any questions or comments about AI, UDL, accessibility, or anything that we've discussed, you can contact us via our old school email feedback@ilotathings.com.

Darren: And that's our time for this episode and I hope we've given you an insight into how you can think about the responsible use of AI for yourself, your students and your institution. So thank you all for listening and we hope you can join us for our next episode as we continue to explore ILOTA things. Until then, take care and keep on learning.

Joe: Bye for now.

Elizabeth: Bye.

Announcer: Thank you for listening to this podcast brought to you by the Australian Disability Clearinghouse on Education and Training. For further information on universal design for learning and supporting students through inclusive practices, please visit the ADCET website. ADCET is committed to the self-determination of First Nations people and acknowledges the Palawa and Pakana peoples of Lutruwita, upon whose lands ADCET is hosted. We also acknowledge the traditional custodians of all the lands across Australia, and globally from wherever you may be listening to this podcast, and pay our deep respect to Elders past, present and emerging, and recognise that education and the sharing of knowledge has taken place on traditional lands for thousands of years.