In October last year I was fortunate enough to represent the Canadian Advisory Board of Studiosity in Manchester on a panel focusing on AI, learning, and academic integrity at the annual World Academic Summit hosted by Times Higher Education. I was especially excited to be at the University of Manchester, which has a fine reputation for encouraging world-changing research.
Our panel, comprising colleagues from the UK, faced a standing-room-only audience, clearly an indication of the hunger academic leaders have for understanding the rapidly evolving world of AI and its usefulness—or not—in higher education. The culture of tech innovation is moving at a fierce pace, and we know it. Moreover, our students are in many ways better wired to adapt to such innovation than seasoned educators are, and in some ways well ahead of our own understanding of AI’s practical potential. Fear of misuse and abuse, of ethical violations, is understandable.
Our panel aimed not so much at eliminating such fear as at assuaging it, drawing on our own experiences—and implicitly on the success we are seeing with Studiosity's programs. Jean-Noël Ezingeard spoke of the open and high-level conversations occurring at the University of Roehampton, where he works, aimed at developing an institutional framework that makes clear how best to harness the power of AI to serve learners and researchers. I am fairly confident that such formal conversations remain rare in higher ed, confined as they are for now to random gatherings in the mailroom.
My own Canadian take is that such ethical AI frameworks are not yet developing at our institutions with any consistency or intensity, inclined as we are to be highly sceptical of the benefits of new and radical technology. I recall not so long ago some of my own colleagues declaiming the dangers of email communication.
I felt inspired listening to the insightful questions after our panel concluded its formal remarks. With more time we could easily have continued the discussion, but the Summit was a finely tuned machine, and we had to wrap up, leaving us all keen to take the conversation forward. I should mention that for the first fifteen minutes or so our panel competed against a noisy pro-Palestinian protest in the courtyard just outside our windows, a humbling reminder that our students were taking on difficult ethical matters of their own. The world's challenges grow more complex by the day, and we would be remiss if we dismissed AI as a dangerous turn for higher education. We owe it to ourselves to make the best possible use of a technology currently in high demand.