
Transcript: AI & Academic Integrity, a Students First Symposium, on Reimagining HE 🎧

Insights

Jul 3, 2024

This is Reimagining Higher Education, your go-to podcast with remarkable education leaders sharing personal stories from their experience in and around the sector, including reflection and hope for progress. With your host Dr. Noreen Golfman, former Provost and Vice-President Academic at Memorial University in St. John’s, Newfoundland, and inaugural member of Studiosity’s Academic Advisory Board in Canada. Welcome. Visit studiosity.com/studentsfirst for information on the next Students First Symposium, an open forum for faculty, staff, and academics to candidly discuss and progress the issues that matter most in higher education.

Listen on Apple Podcasts · Watch on Vimeo · Listen on Spotify

Dr. Noreen Golfman:
Well. Welcome, everybody. To my knowledge, we have hundreds and hundreds of people attending this webinar. I'm Noreen Golfman, and I am based in St. John's, Newfoundland. And for the many of you who don't know where that is, it's the easternmost tip of the continent of North America. And I'm going to offer a land acknowledgement because of the situation I am in, in this gorgeous, glorious place. We respectfully acknowledge the province of Newfoundland and Labrador as the ancestral homelands of many diverse populations of Indigenous people who have contributed to over 9,000 years of history, including the Beothuk on this island of Newfoundland. And today, our province is home to diverse populations of Indigenous people, and I acknowledge, with respect, the diverse histories and cultures of the Mi'kmaq, Innu, and Inuit. We have an hour with three terrific panelists - I'll call them experts, and I'm sure they are - who are going to offer some insight. We were just saying before the webinar started that we all acknowledge we're on the cusp of this new technology and trying to figure out just how to integrate it, adopt it, and manage it in our post-secondary institutional environment. So, I'm really eager to hear what people have to say. And we're going to try and make as much room as possible for questions. Many of you have submitted questions, and many of these are clustered around themes that we hope to cover. So, use the chat; I'm sure you know how to do this. And again, a big welcome to you all. Thanks so much for tuning in. We could probably take hours doing this, but we have an hour, and so to get to it I'm going to turn to the panelists and ask each of them to introduce themselves and tell us a little bit about where they might be situated in relation to this very hot topic. So, Tricia, I'm going to begin with you.

Dr. Tricia Bertram Gallant:
Great. Well, hi, everybody. Thanks for coming. My name is Tricia Bertram Gallant. I am the Director of Academic Integrity and Testing at the University of California, San Diego. However, I'm Canadian, born and bred - University of Guelph, two-time alumna. And actually, Julia was one of my professors there in my Master's program. So I did my Bachelor's and my Master's at the University of Guelph and worked at the University of Guelph for several years before coming to the States. And it was at the University of San Diego, down the street from me, where I was introduced to the phrase academic integrity in 2002, and was mentored by somebody so enthusiastic about it that it became my topic. I have been researching and writing about academic integrity since 2003, involved with the International Centre for Academic Integrity since that time, and then managed to get a job in the area. So, I have been immersed in this for 22 years at this point. And then, of course, genAI was just something that came with the territory.

Dr. Noreen Golfman:
Okay. Terrific. Thank you. Julia. 

Dr. Julia Christensen Hughes:
Thank you. Hello, everyone. I'm Julia Christensen Hughes, and I'm the President and Vice-Chancellor of Yorkville University, Canada's largest private higher education institution. I came into this role three years ago from the University of Guelph, which, as you just heard from Tricia, is where we met. Years ago, I was Director of the University of Guelph's Teaching Centre, and it was at that point that I became very interested in the topic of academic integrity. And honestly, I think it's a bit funny that we have AI - academic integrity - and now we have another AI - artificial intelligence. And of course, these worlds are colliding. When I did a Canadian study years ago, with many universities and colleges across the country contributing, very few students - less than 1% - said that they had submitted work done entirely by someone else, particularly if they'd purchased a paper online or, you know, engaged with the web in some way. And now, of course, we're hearing much higher percentages of students saying they fully used AI in the submission of work. So, now we have this crazy world where students can submit a paper and receive fulsome feedback, where neither the student nor the faculty member has read the work. So, as somebody who has been committed to academic integrity and who says the most important thing higher education institutions need to do is assure society of the integrity of the degrees they confer and the research results they disseminate, we are at this critical point. I'm so delighted to have been invited to contribute, and I'm just going to drop in the chat a link to a book that I recently had the pleasure of co-editing with Sarah Elaine Eaton - certainly another expert in this domain - for anyone who might like to dig a little bit deeper into academic integrity. Thank you.

Dr. Noreen Golfman:
Great, terrific. And over to you, Simon. 

Dr. Simon Bates:
Hello, everyone. Welcome. Real pleasure to be here. My name is Simon Bates. I'm about to return for a second term as Vice-Provost and Associate Vice-President, Teaching and Learning, at UBC's Vancouver campus. So, between myself and Noreen, we span the entire continent today. And I can see several welcomes from my part of the world in the chat. I want to acknowledge that I'm joining you today from my home on the traditional, ancestral and unceded territory of the Katzie First Nation in the Fraser Valley. In terms of my role, I want to stack up the three hats that I'm wearing today. The first is my role as Vice-Provost, Teaching and Learning, at UBC. Within that portfolio, I have responsibility for both AIs, as Julia has just mentioned: academic integrity, in the form of the Academic Integrity Hub, which we developed post-COVID, taking an educative approach to academic integrity with our students and faculty; and also helping shape UBC's approach to AI, including generative AI tools, and figuring that out across the landscape of teaching and learning. So, that administrative hat is the first one. The second hat I put on when I have the time is that of a researcher. My research over the last 20 years has been at the intersection of technology and learning, particularly as it applies to science education - that's my disciplinary home, in physics. And then the final hat is that of an educator. I still teach and work directly with students. Earlier this year, I was teaching a small first-year seminar writing class on science, argumentation, and communication. And we, too, were grappling with how we reconfigure our assessments in a world of generative AI, what guidance we give to students, trying to navigate that process and really learning as we go, which I think is the process that Noreen and Julia have spoken to. We're all engaged in this. I'm looking forward to the conversation today.


Dr. Noreen Golfman:
Terrific. Well, I'm going to open it up with a big question and ask each of you to speak to it. I think the idea is we're going to go from big to perhaps more specific, but call on your experience, of course, and your expertise. AI is, of course, very rapidly being integrated into all of our devices, our phones, our search engines, and more. So, how would you say this integration of AI into these everyday touchpoints of ours is affecting, or how has it to date affected, academic culture - including, of course, academic integrity, the other AI - and, importantly, student learning? And I guess I'll start with you, Simon.

Dr. Simon Bates:
Yeah. Thank you. So I think you're absolutely right. We're in that space, going into that space, where the normal way that we're going to be interacting with AI tools is really going to be embedded within the systems and applications that we use regularly. I know 18 months ago, we got used to going to separate websites like ChatGPT to experiment with these tools. But more and more, these capabilities will be baked into the digital applications that we use. We're seeing that already with Microsoft products and other applications, and I think it will continue. I think another thing to acknowledge is we're all playing catch-up. The speed at which this is developing, contrasted with institutional agility, if I can put it that way, is a challenge for all of us. And having to play catch-up is not a comfortable position for any of our institutions to be in. I think this is disrupting many aspects of what universities do, teaching and research. We're talking today about academic integrity, but it's only a short walk to topics like scholarly integrity as well, which is very much a live conversation for us. And I think the reason this is such a challenge is that, fundamentally, much of what we do with our students is based around the acquisition of knowledge, the integration of that knowledge, the synthesis of it with what they already know - and then we evaluate students by evidencing that process. And that's really challenging, because it affects everything we do. I also think it brings opportunities. I am sure during the next 50 minutes or so we will be debating many of the challenges and struggles that we're having with these technologies, around ethics, around privacy, and all things like that. But I do want to start on a positive note by saying this is prompting many conversations around teaching and learning. It's prompting many really rich conversations about the nature of a university education in an increasingly digital and AI-integrated world, about the role for us as educators, and about what types of graduates we should be aspiring to produce. So I think that's a positive thing, notwithstanding the challenges. The other thing that I hear, and I want to take a moment to acknowledge, is that, in very different contexts, this is the second time we've been through a really major upheaval in the last five years, and I really understand the sense of, 'Oh no, not again,' when, all of a sudden, everything feels like it's out of equilibrium and there are these major challenges. Five years ago, four years ago, with the COVID pandemic, it was really around the delivery and architecture of our courses and having to adapt that at scale. Now, we're facing these challenges in terms of securing and maintaining the integrity of our assessments. So, keeping it fairly high level, those are a few of the touchpoints that I've seen.

Dr. Noreen Golfman:
You know, that's excellent. You've given us a lot to think about. And you're right: I think most of us boasted that our institutions were able to pivot - that became the kind of buzzword, which is almost empty of its meaning now - but we boasted that we were able to do that during the pandemic. This seems much more urgent and bigger, if that's possible, since it affects just about everything that we have to some degree taken for granted. But I'll turn it over to you, Tricia, for your thoughts on this.

Dr. Tricia Bertram Gallant:
Yeah, I'm not sure if it's bigger; I think it's different, right? During the pandemic, we were all struggling at the bottom of Maslow's Hierarchy of Needs. We were focused on safety, security, survival, and we prioritized that, which we needed to. And we pivoted very fast, but not in ways that were good for learning or good for integrity. Right? We saw the explosion of cheating that happened during the pandemic, both because the temptations and opportunities were greater with all the remote exams, but also just because people were stressed and they were looking for ways to survive. This time - and I hate to say this, because we've been here before, right? When the Internet came along, people were like, oh, this is horrible, this is going to change everything about teaching. And I wrote in my 2008 book that we were still teaching as if it was the 20th century and the Internet didn't exist. And I'm worried that right now it feels big, and I agree with Simon, it has opened up a lot of conversations about what we could be doing and what we could be doing better and differently. But I'm still worried that that's not going to translate into action. And there's a couple of reasons for that. Besides the fact that, you know, Julia's institution is small, as she said, and the classes are small, which is great, most of us are dealing with really large institutions, with really large classes - you know, the industrialized model of higher education. And it's really a bit more challenging to do some of the things that I think we should be doing, like smaller, engaged, active classrooms, where students will need us for human-to-human interaction and human-to-human learning experiences. They don't need us for knowledge and information, and they honestly haven't for quite a while. So that's one reason why I think it's going to be difficult for us to really take advantage of this interest in being different or in changing. The other one is a fundamental structural problem in a lot of higher education institutions: faculty. We hire faculty based on their disciplinary expertise, and in most PhD programs they're not trained in how to teach. They're not trained in how to design courses, how to redesign courses, or how to design valid, reliable assessments. And they're not given the time, training, pay, or rewards to do so once they are in their faculty positions - at a lot of places, not everywhere, but at a lot of places. And so I've been known to say that when these companies released these tools, I feel like we became hostages on a plane that we're trying to rebuild as we're flying it. We didn't ask for these tools. We didn't want them. We can't just stop teaching for a year - hey, let's all close down and, you know, redo what we're doing. And so faculty are being expected to do something that a lot of them don't have the time or training to do, in their spare time, at nights or on weekends, without pay, and that's a fundamental unfairness that exists, I think, in our system. And so I am optimistic, as Simon is, but - and maybe this is the Canadian in me, too - I'd call myself a pessimistic optimist, or a realistic optimist is maybe a better way to say it. I think we can change. I think we should change. I'm worried that we won't, honestly.

Dr. Noreen Golfman:
Well, yeah, maybe we'll get to the question about who the 'we' is - is it generational at our institutions? You know, maybe it's partly that younger cohorts will be less stressed by the change, I don't know. I think I'm getting too old to be able to answer that question. Julia, over to you.

Dr. Julia Christensen Hughes:
Oh, my goodness. So many really deep and important points have just been raised. I knew one of the reasons I said yes to being on this panel was that I really wanted to hear what Tricia and Simon had to say. I think, like any significant shock to a system, people are going to have different reactions. And it's interesting that the pandemic has come up, because I think so many faculty and university staff were really feeling burnt out. And I also think - and it's a little bit different in the US versus Canada - that there's additional strain on the whole system. Of course, in Canada, there's a lot of underfunding being realized, and class sizes going up, and TA support being pulled back. And I've heard from so many faculty teaching at public institutions, now with these really large classes, that they feel they have lost control. They're quite convinced that their students are using generative AI - typically people, of course, just refer to ChatGPT - in the submission of all of their work, and they don't have the time or energy to catch it. And I know we received a lot of questions in advance about how do I block this, how do I detect it, how do I catch it? And in my view, and I think it's similar to Tricia's, in assignments there's no point in that, in sort of the gotcha moments. It is here. Just like the calculator changed everything. Just like the printing press changed everything. But like Tricia's saying, it's so funny how much of academia has remained sort of lecture and testing for short-term recall, or assignments that are not thoughtfully created. We thought everything was going to change when the radio was invented, and then the TV - if you remember those conversations about what's the point of a class, because now we have TV - and then it was the internet, and now here it is, ChatGPT. I had the pleasure of being at the STLHE conference last week in Niagara Falls and attending a number of sessions on AI, where faculty from every discipline were presenting really creative ideas for how AI can drive more thoughtful, impactful learning, and actually having us play with it. And the biggest shift I see that we have to make here is that we have to reorient our brains from thinking we are teaching stuff to facilitating the development of skills and values. And one of those skills is to be able to discern the relevance and accuracy of information - and oh my goodness, does AI ever provide us with a beautiful opportunity to do that. But we can also create so much more effectively now. We want students to be doing. And I think for people who are not those early adopters who were presenting so effectively last week, but are scared of it, I'm just going to quote Nike and say: just do it, jump in, give it a try, and you will be astounded at how quickly you will personally experience its strengths and its limitations. And I think one of the most important things we need to do is to learn these apps ourselves and to facilitate that learning in our students. It is a powerful tool with huge ethical implications. And I'm proud that at Yorkville we've just embraced what we're calling our eight signature learning outcomes, and digital competency is one of those outcomes. Now we're exploring how we embed that in each of our programs, because in the jobs our students graduate into, if they are digitally competent and know how to use AI effectively and ethically, they will be the front runners in the job market.
So I see it as a question of our integrity to prepare our students for the future they will undoubtedly be encountering.

Dr. Noreen Golfman:
Julia, I just want to pick up on something you said and ask you to comment really briefly, and maybe others can jump in on this. You said it's important to understand - or maybe I don't have the beginning of the statement quite accurate - its strengths and its weaknesses; it's important for us to know that. Could you give me a nutshell sense of what the strengths and weaknesses are?

Dr. Julia Christensen Hughes:
Yeah, one of them, in my own experience, is that it runs entirely counter to the academic tradition of very precise referencing. Right? Like, where did these ideas come from, and who said what, and what has been paraphrased, and what is a verbatim quote? So, when I was playing with this, following my own advice of just do it, just try it, I actually fed into ChatGPT all of my text from one chapter of the book that I dropped in the chat, and I asked it to create a PowerPoint for me. I was making a presentation at another event and I thought, well, I'm just gonna try it. I mean, ChatGPT is supposed to be a time saver. It's supposed to be really good at taking text and doing different things with it. And the advice always is to be really clear and precise in your direction to it, right? And keep refining what you ask it to do. So I asked it to create a PowerPoint. I asked it to use exact quotes - these were my words, so they were good to go, but I wanted to see quotation marks around them. And honestly, it didn't matter how many different times I asked it the question, it could not do a precise quote at all. Everything was paraphrased. I didn't mind the paraphrasing, but it was no longer my words. So in the end, I gave up. On the other hand, my son recently asked ChatGPT to write a song about me and my relationship with my granddaughters, and in seconds it produced the most lovely, heartwarming, original song. You know. So again, it's all about the skill you have to set it up, and what you're looking for it to do, and the degree of academic precision.

Dr. Noreen Golfman:
Cool. Simon? 

Dr. Simon Bates:
Yeah. I wonder if I could just jump in and add on to what Julia was saying. I wanted to make a point about the pace of change here. Certainly, in my experimentation with a variety of different models - I've had the luxury of being on sabbatical for the last 12 months, so I've been able to devote a considerable amount of my time to learning about and test driving and thinking about this - I can do things now with freely available models that I could not do 12 months ago. And this is what makes it tremendously challenging. We are aiming at a target that is accelerating away from us. And a week doesn't go by without something in either the technical press or the more popular higher education press: this model now outperforms that model. And frequently these models, the very cutting-edge ones - I say cutting edge, meaning the ones that have been released; there are others further in development that we'll see over the next year or two - are paid-for models, so they create equity and access issues. And whilst it might be tempting to jump at the latest and greatest that does everything you could ever imagine, one of the things we should be doing is not putting financial barriers in front of our students' ability to engage, even if it's just engaging in experimentation with these tools - alongside the issues around privacy and ethical use. We certainly shouldn't be designing learning activities that require them to pay to be able to complete them. We've done that in the past with online homework systems and resources that come along with textbooks, and I hope that we don't go down that path again with this.

"I can do things now with freely available models that I could not do 12 months ago. And this is what makes it tremendously challenging. We are aiming at a target that is accelerating away from us."

Dr. Noreen Golfman:
Terrific. Tricia, you might want to speak to this, but I was going to ask everybody, and maybe you can all speak to it if you have more to add. Or speak specifically to the question of what you see as emerging trends. I say that with the echo of Simon's comments about the cutting edge being replaced every week, if not even more rapidly than that. But, in this moment in time, can you speak to that? And how do you see those trends, or where you might forecast what will emerge, impacting academic integrity specifically?

Dr. Tricia Bertram Gallant:
Yeah. So I guess the one thing I bring to the group that might be a bit unique is that I run an academic integrity office where students get reported for misuse of technology or any kind of integrity violation. And I have been meeting with all the students reported for misuse of generative AI in their assignments. And one trend that's not coming from the technology but from the students is their use of the tools in the way that they would use any other tool, which is - and this is going to sound super harsh, and I don't mean it to - 'it's there, so I'm going to use it,' without thinking about it. They have no idea how the tools work. They have no idea why they're using one over another. There's a lack of critical thinking and knowledge about the tools. And, you know, of course we could say that's our fault because we haven't taught them. But again, we're trying to rebuild the plane as we're flying it. And to Julia's point, many faculty haven't tried to use it, so they don't understand how it works either. And it's just super difficult to educate the forty-some thousand students at my institution about what these tools are. Our library has an amazing page up to teach them that - but how do you get them to it? So I think the trend is, if we don't find a way to reach students, to do better at educating them - whether you want to call it AI literacy or digital literacy or just critical thinking about any tool they use; broaden it beyond artificial intelligence, it's not just about that - then they will continue to use the tools to find shortcuts for the hard work, to reduce the friction of learning. And when we reduce the friction of learning, maybe the learning isn't happening, right? So when I'm meeting with students, I'll say to them, well, why did you ask it for an article on a particular topic? Oh, I just thought that was better than doing a Google search. I'll say, well, you know that it makes stuff up, right? And no, they didn't know that - they didn't even know that. And so the trend, I think, is students cognitively offloading to these tools, like any other tool, in order to get more things done in the time they have, for the grades and the degrees that they want. And so, regardless of the technology trends, that's the case I think we should be aware of. And how do we stop or intervene or, you know, exercise our educational leadership with our students to help them not go down that path? To Julia's point, sure, employers might want students to know how to use these tools, but that's not what students are learning by using them this way. It's kind of like saying our students know how to work in teams because we've thrown them into groups without teaching them how to actually work in groups. Right? That's not how they learn to collaborate. And I put this in the chat, and I think this just points out, again, that I want to broaden it beyond artificial intelligence: we've got to get our hidden curriculum out from underneath the covers and start teaching it intentionally. Julia mentioned they have - I forget how many competencies, Julia?

Dr. Julia Christensen Hughes:
8.

Dr. Tricia Bertram Gallant:
We have 12 at UC San Diego, but they're all hidden. We just assume they're getting taught somewhere, and we really need to be more intentional about cultivating these durable human skills that can be applied to students' interactions with these machines, or with whatever other technology comes along.

Dr. Noreen Golfman:
Yeah. Amen to that. Or A-something to that. Since I've got you here, Tricia, I'm interested in how you see - you've been in the States for, what, 22 years?

Dr. Tricia Bertram Gallant:
Almost 24. 

Dr. Noreen Golfman:
24. And that's a long time to be acculturated. What do you- 

Dr. Tricia Bertram Gallant:
California, though? You know, we're different here. 

Dr. Noreen Golfman:
Yeah, true - not really quite this or that. I'm just wondering what you understand to be specific to the US in this space we're talking about. As a Canadian living in the States, what do you see as the challenges or differences?

Dr. Tricia Bertram Gallant:
Actually, I see the challenges for the Canadian system as similar to the challenges for the American system: compared to the UK, Ireland, Australia, and New Zealand, we do not have federal oversight of higher education. And that means that we can be kind of slow at responding to things like this. Right? Australia and the UK and Ireland are doing a much better job of responding to this through their quality assurance bodies than we are in North America. On the American side, the difference is we don't even have quality assurance in America. We have accreditation, which is very different. And honestly, we haven't been able to get the accreditation agencies to care about academic integrity in the 22 years that I've been working on it. And that sounds harsh - obviously, they care about integrity; I should rephrase that. We haven't been able to get them to put in their standards that institutions have to demonstrate how they are attending to the integrity of their degrees. They do it a little bit for online education, but they don't do it for face-to-face. We have this assumption, I think in both Canada, frankly, and the US, that integrity just happens. We just assume it's going to be there. We don't want to talk about it. We feel like we shouldn't have to talk about it, because that says we don't trust our students or faculty - which is, you know, really ridiculous, because our students and faculty are human beings, so of course they're going to make mistakes once in a while that undermine the ethical core that we might want to be holding. And so, if anything, I hope artificial intelligence helps us reveal, discover, and be okay with talking about our humanity, and what we're good at and what we're not good at, so that we can interact with these machines in a more human way and stay in control - stay in the loop, and beyond being in the loop, be in control. And so, sorry, I deviated from your question a bit, but I don't think we're as different as we think we are. I think the decentralization of authority over higher education has pros and cons. Other than that, though, I don't see a lot of differences in this particular area - but maybe I'm blind to them at this point.

Dr. Noreen Golfman:
I wonder whether Julia or Simon have thoughts from the Canadian perspective. 

Dr. Julia Christensen Hughes:
Yeah, I think it's really interesting. If it's okay, Noreen, if I jump in. 

Dr. Noreen Golfman:
Yeah. Go ahead. 

Dr. Julia Christensen Hughes:
To see what's happening within higher education on either side of the border. One of the things that I've been really paying a lot of attention to is the political context. In the US, you see Republicans' confidence in the higher education system really plummeting, whereas Democrats tend to still have a relatively high regard for it. But this question that has always been on my mind is, you know, what is the role of the university in society, and sort of what is the promise? And how well is that promise being delivered and received? A NANOS poll two years ago in Canada showed that universities and colleges were at the very peak of social confidence - that Canadian society felt Canadian universities and colleges were making a more positive impact on Canadian society than any other social structure. But we were declining; we had declined several points. There's been a lot in the news about integrity in Canadian higher education lately, and a lot of damage being done to our reputation internationally. I don't know, if that poll were done today, where it would land, but I suspect we would have gone down even further. And again, as I said in my intro, public colleges and universities have gone from being publicly funded to publicly supported, where less than 50% of their revenue is coming in the form of government grants. There is going to be increasing financial pressure on all kinds of public colleges and universities. And again, I think that's going to drive that sense of overwhelm; it's going to drive class sizes. One of the things I'm really proud of, being a private institution, and Tricia mentioned this briefly, is our small class sizes. And because we're private, even though we have absolutely rigorous oversight and quality assurance processes and all of that, we can be a little bit more in control of our own future. That's why our average class sizes are 20, and that's why we can have skills- and values-based signature learning outcomes: because we have the opportunity for a closer relationship between faculty member and student. It's more of that apprenticeship model; it's more of that oversight. That's not to say that we're perfect, because we're not, and we're working hard on all of this. And the big project we have underway right now - and again, back to Tricia's point - is a very detailed map of the curriculum in each of our programs against those signature learning outcomes. It is about being explicit - where are different skills being developed, to what extent, and that scaffolding over the curriculum, and how are they being assessed. And assessment, to me, is such a big part of this, because you can say we're teaching this and that, but then if you fall back on traditional assessment, or if you have no secure assessment such that you really don't know who has done the work, well, then the credibility of your degree becomes suspect. And then the confidence society has in the graduates you're producing goes down. So a really interesting question on my mind is: how do we engage thoughtfully with students, facilitate the development of their skills and their self-efficacy beliefs, their competence in various domains? I think we're going to have to get way more creative in our pedagogy and in our assessment. But how are we going to do that in the public system with ever-increasing class sizes? I don't know what the answer is to that, but I bet my panelists do.

"It is about being explicit - where are different skills being developed, to what extent, and that scaffolding over the curriculum and how are they being assessed. And assessment, to me is such a big part of this, because you can say we're teaching this and that, but then if you fall back on traditional assessment or if you have no secure assessment that you really don't know who has done the work, well, then the credibility of your degree becomes suspect. And then the confidence society has in the graduates who you're producing goes down."

Dr. Noreen Golfman:
Yeah. Simon, any reflections? Solutions? Magic bullets?

Dr. Simon Bates:
Several reflections; I'm not going to promise anything in the other two categories. I just want to take a moment to acknowledge the conversation that's flashing by me in the chat. There's fantastic engagement - we're seeing some of the challenges and the thorny issues play out right there in the chat. And what I would hope is that the Studiosity folks might be able to curate some of the terrific links and suggestions from that and share them with participants afterwards. I'm also going to do a bit of a plug, if I can. I saw a couple of references to Ethan Mollick's book Co-Intelligence. One of the groups that I've been involved in over the last 12 months is the AI Observatory at Higher Education Strategy Associates, or HESA. Many Canadian attendees will know Alex Usher's daily blog, One Thought to Start Your Day, taking a break for the summer at the moment. The summer activity for that AI Observatory is a book reading club on Ethan Mollick's Co-Intelligence, with HESA and McMaster University in Canada leading on that. When I get a moment, I'll put it into the chat - I'm not as good at multitasking as some of my panelist colleagues. So, great to see the engagement in the chat. Noreen, if I could just be that awkward panelist who goes back one question, to offer something on what we were talking about - supporting students in training and developing an ethical understanding of how to use these tools. It is so evident to me that we are going to need to help our students, all our students, develop a solid understanding of AI tools - their affordances, their limitations - and not just prepare them for the tools they have now, but to be able to see where these tools are going. Otherwise, how are we going to prepare them for 40-plus years of employment in whatever organizations or careers they go into? What they are using now - and they are using it now, whether we think they are, or we tell them that they should or shouldn't be - is just the beginning of their future. Again, to point to Ethan Mollick, he has said, 'The AI you use today will be the worst you will ever use.' It will be getting more sophisticated. So we need to support students and faculty, but let me focus on students for a minute. What that's going to look like is going to vary by discipline, by context, and by need. I'm not suggesting that every student needs to go out and know how to build AI applications on top of a variety of different LLMs, but they do need to feel literate and confident, and that's not easy in a fast-changing landscape. I don't think it can be an add-on, so it can't be entirely the AI course that you take for zero credit. I think that has its place in some foundational aspects, but it must be woven into the programs that they take, in disciplinary and pedagogically authentic ways, within different academic domains. I think central units - I can't remember who it was that mentioned the library - and teaching and learning centres can take on some of the basics here, what I'm calling AI 101 if you like: the sort of foundational aspects that it's not efficient for every program to be producing. But beyond that, what it looks like in a discipline is going to be very different if you're in computer science versus data analysis, or a healthcare discipline, or the creative and performing arts, or something like that. It's a balance between what you can provision centrally and what has to be embedded locally within the program.
All of which sounds very aspirational, so let me close this segment with a practical starting point, and I'll frame it as a question. I think a good starting point is to get a handle on how much you know about how your students use and conceptualize AI tools. What do they think? How do they think the tools work? What do they think they can use them for? What are they using them for? And how do they think it affects their learning? Some kind of readiness assessment. I think that data needs to be surfaced, because it provides your starting point. It will show you misunderstandings, it will show you gaps, and it will give you something practical to build on, rather than just thinking we know what our students need - because there have been many examples with educational technology in the past where, collectively, as educators, we've got that a little bit wrong.

Dr. Noreen Golfman:
Yeah. I just wonder how many people attending this are feeling they're at a standing start, and wondering what the next step might be. You're giving some suggestions, but we're in such a transition - we're in that kind of transitionary moment. There's so much to it. Yeah, go ahead and take it, Simon.

Dr. Simon Bates:
Well, you know, you threw the question out, so let me try and answer it, because it's something that I feel quite strongly about and that we've tried to embody in a lot of our teaching and learning development practice. And that is to embrace students as partners in this approach. I think there's a huge amount to be learned from the insights and the creativity of our students, rather than framing this as something which we figure out and which is done to them as end users. And so, you know, institutions should think about how they are engaging students in these conversations. I would bet that for many institutions, it's an elected student leader serving on a working group or on a committee, or on some kind of governance or organizational structure like that. In the case of our institution, we do that on many of our committees. But across our two campuses, we have 75,000 students, and it's completely unrealistic to expect a handful - you know, one or two student representatives - to be able to speak to the student perspective. Because, just like the faculty perspective, the student perspective is multifaceted. There are students across all areas of the continuum on this, from those who are actively using and building these tools to those who just don't know where to get started, who don't know if it is going to fall foul of some academic integrity policy. So again, without digging into the details of how, I want to put in a plug for advocating for students as partners in this work that we all have to do, and for collaborating with other institutions, because we're all trying to figure this out.

Dr. Noreen Golfman:
Yeah. I think that's excellent, excellent advice. And it seems so self-evident and elegant, but so difficult, I think, for many, many instructors to do. And, you know, that speaks to the cluster of questions we have from so many people providing input, which is: how do you motivate instructors to redesign their assignments at this stage of the AI game? And what do authentic, competency-based assignments for students actually look like in this - I don't know whether to say post-AI world; we're really just moving into an AI world. And Tricia, I know you have some thoughts about this.

Dr. Tricia Bertram Gallant:
Yeah, I was realizing we're getting short on time, so I was jotting down: if there were three things I wanted to get across before we ended, what would they be? Two of them follow up on Simon's points and are for instructors. Instructors can gather students' thoughts and ideas about AI - both AIs - at the beginning of the quarter, or sorry, term. Right? Ask them: what tools are you using? How are you using them? How are you finding them helpful or not helpful? What do you not know, or need to know? You know, just getting their input, their experiences, and learning from your students about where they are. I've been asking everybody who comes to one of my presentations or workshops: where are you on Dreyfus's novice-to-expert continuum? Are you a novice? Are you an expert? And getting people to identify themselves along that continuum. So having our students do that might be helpful too. Then, co-creating with your students your academic integrity policy for the class. This does two things. One, when students hear each other talk, they will say a lot of the same things you would - like, you have to come prepared to class, you shouldn't phone it in on your assignments, you shouldn't ask ChatGPT to do your assignment for you. Most students are going to say these things, and when students hear it from each other, that's more likely to influence their behaviour, because peer norms, or perceptions of peer behaviours, are a strong driver of student integrity and/or cheating. So have them share their ideas. One idea: you know how faculty often complain that students don't read the syllabus, right? And students don't look ahead and make plans. So go through the syllabus, go through the learning objectives, go through each assessment and say, okay, this assessment is connected to these learning outcomes - would it be ethical to use AI on this particular assessment? And if so, how? And have that conversation with the students, maybe some of it asynchronously, some synchronously. Creating that together will create a sense of community and a feeling that, you know, we're in this together. Of course, the faculty member can always veto anything that would totally undermine the learning objectives, and explain why. But having that conversation can be really helpful. So that's two points for instructors: just ask students about their use of AI, what they know about it and what they don't, and then co-create with them your academic integrity/artificial intelligence policy. For institutions, I would say my one piece of advice - I put it in the chat - is make it someone's job, and I don't mean artificial intelligence. I actually think artificial intelligence should be multiple people's jobs: folks like Simon who are running teaching and learning initiatives need to know what they're doing; our ed tech people obviously need to have some knowledge of artificial intelligence; and campus-wide committees, as Simon mentioned, can be really helpful. But I meant academic integrity: make it someone's job. Julia hinted at this earlier, but I'm going to be very explicit about it. We are not just in the business of education. We're in the business of certifying. We certify students - graduates - to say to the world: this graduate has the knowledge and skills that you expect of somebody graduating from our institution.
And we are not doing a very good job - Canadian institutions are doing a better job than American ones on this one - of having people who are focused on degree integrity, and that's integrity in teaching, learning, and assessment. So for institutions, think about, one, how are we focusing on degree integrity? How are we ensuring that any corruption of it is at least kept to an acceptable level? And two - well, one way to do that, I should say, is with secure assessments. Julia mentioned that at her university, because the classes are small, they can really go to an apprenticeship model, where faculty can watch students doing something. Literally, they could write with them, they could watch them doing a lab; they could be there with that apprentice. In bigger classes and bigger institutions, we have to figure out how else to do that. One way might be computer-based assessments, where we can incorporate more mastery-based approaches, where students can try and try again to learn and demonstrate their knowledge over and over. So those are the three things I would say: two for instructors, on what they could do in their individual classes, and one for institutions - really thinking about what we are doing to support faculty and students in ensuring that our degrees have integrity.

Dr. Noreen Golfman:
Thanks. Yeah, that's very helpful. Julia. You must have lots to say to all this. 

Dr. Julia Christensen Hughes:
Oh, so many things. And like Tricia, I'm watching the clock. So, first of all, I just want to express some empathy, I guess, for faculty who are going, 'But what the heck?', right? Like: I used to feel like an expert, and the world is changing, and now I have this sinking feeling that maybe my students know more about this than I do. And to feel better about that, I just think of the power of co-learning that Simon and Tricia have been talking about. You can even say to your students, I need to learn about this too, right? Our world is changing. I'm going to model how I'm learning; I want to see how you're learning about it - and really embrace the vulnerability of that. I know that might be a little bit difficult for some faculty, but like Trish was suggesting, I think you'd be amazed at what you could learn from your students and the inspiration you could get from that. And then, you know, really say: okay, so now, with all of this learning we've had, what are our conclusions? So again, you're modelling critical thinking and figuring out application. But I also wanted to end on a couple of other points that we really haven't touched on, because we've been focusing on the level of the course or the class, and I want to bring it up to the curriculum, and then up to the institution. At the curricular level, we need our curriculum committees acknowledging these significant shifts that are happening in the world and really asking: what is the purpose of this degree in an AI-infused world? Like, is this profession that we're preparing our students for even going to exist anymore? Or, more likely, it's going to exist, but in a very different form in terms of the foundational or transferable skills that are needed. So I think we have to be baking in time for those really big questions and engaging thoughtfully with industry. You know, you can go to the research that's being done on how many professions won't exist in ten years, or will exist in fundamentally different ways, as the trivial, repetitive tasks of the profession are replaced by AI, with an elevated expectation of the people in the role. So those are really big questions that I would say exist at the level of the curriculum or the profession. But when I asked ChatGPT what three key messages I should be sharing today, they were all glowingly supportive of the use of AI, I know, and they actually talked about institutional efficiency. So how could AI help in the Registrar's office? How could AI help in communications or marketing, or in the accounting and finance departments of our universities? Could an institution employ AI in a way that builds in some efficiencies? Every single university is being challenged by its faculty these days for being top-heavy; does this present an opportunity? The other big thing ChatGPT wanted me to share is that it can be very good at customizing learning experiences and providing students with prompts. So maybe you've got this huge class, and this student hasn't been engaging, and maybe artificial intelligence knows that and can send a little prompt to the student, and they can feel less anonymous, less hidden. I don't have personal experience with those things, but I just wanted to introduce them, because the implications of this go way beyond the level of the individual classroom.
And I believe every single university and college needs a senior-level committee that is saying, 'This is the future - what are we doing about it?' And it has to be a strategic approach, because, from what I've been told, from the point of view of any kind of organization that might be using it to attract customers or run processes: if you put data into it, that data is now out there. So there are some profound implications around organizational security - information security, if you will. This is certainly going to be a priority for Yorkville in the coming year: to develop a strategic plan around the use of AI at large.

Dr. Noreen Golfman:
Yeah. All right. That's very important. And, you know, speaking of that dreaded phrase, strategic plan, I wonder what strategic plans will start looking like in view of this kind of emerging world, for sure. I've got one last question; maybe you can each speak briefly to it. And maybe it's an unfair question, but do you think the definition of academic integrity is changing in view of all of this? Are we coming to a new understanding of it? We've kept it fairly narrow, I would say, though of course disciplines vary. But I think it's fair to say that the whole notion of academic integrity is being challenged in ways that are probably good. Trish, your hand's up like Rocky's - go ahead.

Dr. Tricia Bertram Gallant:
I'm fond of this one. So, it depends on how you define it. The International Centre for Academic Integrity defines it as the courage to be honest, respectful, fair, responsible, and trustworthy, even when it's difficult to do so. That doesn't need to change. Right? I think people sometimes define academic integrity by its antithesis - by cheating behaviours - which is not right. So, we want to use artificial intelligence sometimes, depending on the learning outcomes for the course, and maybe sometimes we don't. But when we do, we still want it to be an honest representation of the students' knowledge and learning, and for it to be honest it needs to be transparent. So what tools did the student use, or what people did the student use, to help them? And I think for a long time, we've actually been doing students a disservice by not asking them to acknowledge basically the metacognition stuff - all of the things, people, tools that helped you get to this final product. So, for example, a faculty member might say: you can work with other people on this, but what you submit must be your own work in your own words. What does that mean? Rather than saying: you can work with other people on this, but I'd like you to acknowledge them in a statement, and I'd like you to reflect on how they contributed to your final assignment, how they contributed to your thinking - and have students be more aware of, like I said, the thinking about their thinking. We should be doing that. So, in fact, instead of changing the definition of academic integrity, I think artificial intelligence needs to remind us that we have not been doing a good job of inculcating academic integrity in our cultures and in our students prior to artificial intelligence - and probably won't post artificial intelligence, if we continue to think of it as narrowly as we have been.

"International Centre for Academic Integrity defines it as the courage to be honest, respectful, fair, responsible, and trustworthy, even when it's difficult to do so. That doesn't need to change [...] we want to use artificial intelligence, sometimes, depending on the learning outcomes for the course, maybe sometimes we don't. But when we do, we still want it to be an honest representation of the students' knowledge and learning and for it to be honest it needs to be transparent [...] I think for a long time, we've actually been doing students a disservice by not asking them to acknowledge basically the metacognition stuff, all of the things, people, tools that helped you get to this final product."

Dr. Noreen Golfman:
Yeah. That's excellent. Excellent. Simon. 

Dr. Simon Bates:
Yeah. Just to build on that. I really like the resiliency of that definition of academic integrity, though I do think - and someone mentioned Sarah Eaton earlier in this discussion, I think it was Julia - one of her six tenets around plagiarism is that historical conceptions of plagiarism are going to have to change in the light -

Dr. Noreen Golfman:
Exactly. 

Dr. Simon Bates:
Light of generative AI. What Tricia was just talking about, I think, is what many faculty are grappling with, and that is making the process of learning more visible in assessments, rather than simply relying on evaluating the product of learning. So it is that process of: how did you formulate your ideas for the outline of your essay? For the first draft of your essay, what sources did you use? What did you do with the peer feedback that you got from your colleagues when we did that exercise in class? Making that process a lot more visible, and de-emphasising how much weight we put on the final product - and I don't necessarily mean grading weight, but the time that we spend looking at it - I think really is the essence of the challenge of assessment redesign that we're faced with.

Dr. Noreen Golfman:
Also excellent. Quickly, Julia, any last word? I'll get the last word, but go ahead.

Dr. Julia Christensen Hughes:
Okay. Well, just really quickly: we've been talking as if this mostly applies to students. So what about faculty, and what about faculty research? I've been absolutely intrigued to see the debate happening in academic journals as to whether or not ChatGPT should be listed as an author. So this is back to this point of acknowledgement. And the general agreement seems to be no, it can't be, because it cannot be held accountable for the work. But if it has been used in producing a first draft of, say, the literature review, that should be acknowledged. And so the key point around integrity for me is accountability - you know, that you're accountable for the accuracy of the work. Right? This is back to this notion of precision. What built on what? What was an exact quote? What was paraphrased? Where did these ideas come from? It can absolutely be such a helpful tool in terms of a first take on something - like, if I'm writing on this topic, what are the key concepts I should be touching on? Oh my goodness, you've got your list of 12 right there. And if you're working with your grad students, now you can direct them where to do deeper dives. But at the end of the day, when your name's on it, you're accountable.

Dr. Noreen Golfman:
Well, that's a good place to end, and we are out of time. I couldn't possibly summarize everything - the chat has been exploding for the multitaskers who've been deeply engaged, a skill in itself. I want to thank the panelists for providing so much insight. I really feel that we could keep going, and I hope we don't have this as a time capsule that we look back at 20 years from now and think how ridiculous all this was in view of all the changes, because the moral, ethical foundations of what we're talking about are, I think, really, really important for now and for the future. So thank you all. Thanks to all the participants. Stay tuned - I'm hoping we will do something with all this wonderful information that's been accumulating while everybody's been talking. And I hope to see you again at something very, very similar. So bye for now.

Dr. Simon Bates:
Thanks everyone.

Visit Studiosity.com/studentsfirst for the next Students First Symposium. An open forum for faculty, staff and academics to candidly discuss and progress the issues that matter most in higher education.
  

About Studiosity

Studiosity is personalised study help, anytime, anywhere. We partner with institutions to extend their core academic skills support online with timely, after-hours help for all their students, at scale - regardless of their background, study mode or location. 

Now you can subscribe to our educator newsletter, for insights and free downloads straight to your inbox: 

Subscribe >>