Provost's view

Will the machines replace us? UCL's President & Provost, Dr Michael Spence, reflects on the impact of new AI technologies and what it means for those working in Higher Education

Image: Andrew Parsons / Parsons Media

Earlier this year, a senior industry figure told me that, in his opinion, AI will be doing all the jobs of those of us in Higher Education in three to five years. It is a bold statement designed to get a response, and I think the reality is probably a lot less dramatic. But AI is certainly a massive disruptor, and a great deal of activity is now focused on how it is changing what and how we teach and assess in universities.

I am aware in broaching this subject that the technology is moving so rapidly that anything I say about AI and our approach now will probably be laughably out of date in a few months’ time. We need to be proactively alive to developments in the technology and its use, not least because so many of our students will be.

That is an ongoing process for educators. Fortunately, UCL is home to enormous expertise, with people who not only deeply understand the technology and its potential, but are at the forefront of developing it. Thanks to that, we are in a strong position proactively to embrace the opportunities of AI and creatively to manage the challenges. We have an AI Expert Group leading this work and our Generative AI Hub is bringing together all the latest information, resources and guidance on using Artificial Intelligence in education.

Rather than AI replacing us in doing our jobs, I think it rather more likely that knowledgeable humans who work with AI will replace at least some humans who do not work with AI. If that is right, we have to equip our students with at least three sets of skills:

  • the skills to do the things that the machines can do;
  • the skills to use the machines well;
  • the skills to do the things that the machines cannot do.

"We need to be proactively alive to developments in the technology and its use, not least because so many of our students will be."

In my view, the first of these tasks presents the most urgent, and in some ways difficult, of issues for educators. Despite the growing ubiquity of technology, people will still need to be able to synthesise information from multiple sources, to paraphrase, and to summarise, and to do all the things that generative AI does so well. In other words, we will continue to need the cognitive capacities that are mimicked by generative AI. We did not, after all, stop teaching arithmetic when we invented the calculator.  

But the task of teaching skills parallel to those of the machine presents real problems of assessment because an examiner will be asking students to complete tasks that can easily be delegated to the machine. That is why you cannot take a calculator into a maths test in primary school. AI-proofing our assessment processes is a really important challenge for universities, because we are still expected to award differentiated and verifiable individual degrees to individual students. At UCL we are approaching this challenge with the understanding that a well-designed AI-proof assessment strategy will contain a variety of assessment types. One size will not fit all across our diverse educational portfolio, but there are some immediate and obvious challenges for those programmes that have developed an assessment strategy with an over-reliance on takeaway types of assessment.  

"We are encouraging students to engage playfully and creatively with AI tools to see what they can do and how they might enhance their learning, while understanding and exploring their limitations."

Image: Ars Electronica / Robert Bauernhansl

The second set of skills, those needed to use the machines effectively, is also hugely important and also an area in which we are working. Ultimately AI is a tool and a massively useful one that students (and the rest of us) will need to be able competently to utilise in our workplaces as well as in our everyday lives. As you would expect, we are being proactive in embracing it in our educational offer and encouraging students to experiment with different technologies and discover how they can be useful, and also discover the limits to their usefulness.  

The technology can be used in education really positively, to unblock creativity and get ideas flowing, to experiment with different writing styles or improve grammar and writing structure, especially for students whose first language is not English, for background research or to debug code. We are encouraging students to engage playfully and creatively with AI tools to see what they can do and how they might enhance their learning, while understanding and exploring their limitations.

One set of limitations to the technology with which our students must grapple is its ethical limitations. This technology does not arrive clean in the world, but bears the fingerprints of those who made and programmed it and of the human-generated material on which it draws, and therefore is as partial, biased and flawed as all of us. Our education must include the ethical safeguards that should be employed when using a particular AI tool. Again, we are tremendously lucky that our research expertise in AI and machine learning, and in particular in AI in education, means that we have long been at the forefront of building both technical and intellectual capacity into our teaching and learning.

But teaching the third set of skills, those to do things that the machines cannot do, also remains important. I am confident that, while this technology will certainly support us to do better and more efficient jobs and take some time-consuming burdens from us, human skills, which precisely rely on the fact that we are not computer algorithms, will always be crucial.   

Arguably, that includes the kind of unexpected, especially cross-disciplinary, insight that generative AI will not be able to find in the existing stock of knowledge because it represents a genuine addition to the corpus of that knowledge. That is, at least for the foreseeable future, a distinctively human kind of creativity.  

But it also includes other kinds of skill. Take an example. UCL and Moorfields Eye Hospital have been working with the amazing, alumni-founded AI company Google DeepMind to develop machine learning technology that can interpret eye scans to spot signs of disease at a speed and accuracy that no human can replicate. So in one sense, yes, the machines are replacing humans at a job at which they are far, far better. But the work the machine is doing is to identify the people who should be prioritised for treatment by human healthcare professionals. It cannot replicate distinctly human skills, expertise and knowledge, such as critical evaluation that takes into account wider context, the ability to weigh up options with nuance and awareness of the impacts, not to mention the emotional intelligence, communication skills and inter-cultural competence that are paramount to such work. This approach outsources routine work to the machine so that doctors can focus on doing what only humans can do. No matter how advanced AI becomes, human minds bring something extra.

However we approach these three different pedagogical challenges, it is a privilege to be doing so in a university that is equally good at the technology, the social sciences and the humanities needed to enhance our teaching and to navigate the future. If there is a university that can work well both to develop the technology and to ensure that people are equipped to use it well, it must be UCL.

Dr Michael Spence is President & Provost of UCL


University College London,
Gower Street,
London,
WC1E 6BT

ucl.ac.uk

Portico magazine features stories for and from the UCL community. If you have a story to tell or feedback to share, contact advancement@ucl.ac.uk
