Initial Reflections on the Challenges of AI in Higher Education in Law: A Change in Understanding of ‘Assessment’
By Michael Randall - Posted on 24 April 2023
On 17 and 18 April, I attended the Association of Law Teachers’ Annual Conference at the University of Westminster, as part of my renewed focus on engaging with the discourse and debate surrounding legal education. Over the past 12 months or so, I have also been a regular attendee at the Connecting Legal Education group’s online sessions.
I presented two conference papers at the Annual Conference, detailing two case studies at Strathclyde Law School. The first outlined the changes made to group assessment on the Year 3 Law, Film and Popular Culture class, where we have moved towards recorded podcast-style discussions plus an annotated bibliography – the paper explained the reasons for the changes, the perceived advantages relative to a group essay and live presentation, and the continuing issues in using this form of assessment. The second sought to communicate the experience of managing compulsory 4th year undergraduate dissertations, highlighting the challenges faced and some potential solutions we’ve tried to implement.
I am only at the relative start of the process of engaging with the field of legal education scholarship. The shift has largely been a result of my experience of managing student wellbeing as the school’s Senior Tutor (now Director of Student Wellbeing), and currently as the 4th year honours coordinator. However, the engagement has allowed me to reflect on the way I currently teach and assess, and ultimately on what could happen in the future with regards to teaching and assessment. These are discussions which will ultimately be had internally at Strathclyde in the long term.
The ALT Conference was eye-opening and highlighted a whole range of ideas and suggestions (far too numerous to list in this blog post). However, one of the primary sources of debate was the rise of Artificial Intelligence in higher education. This is also reflected in the fact that the Connecting Legal Education (CLE) group has organised a series of online sessions focusing on AI platforms such as ChatGPT.
Coincidentally, my colleague Professor Kenneth Norrie has also been pondering all things ChatGPT, and has written a blog post on the topic. When I spoke to the blog editor after the conference and Kenneth’s post came up in conversation, he suggested I might want to look at a draft prior to publication. Kenneth’s post focuses on the dangers of ChatGPT and his own experiences of using the platform to test how it operates in presenting legal issues and discourse. His views and experience are interesting, particularly in reflecting on his time as Chair of the Law School’s Student Affairs Committee, which reviews cases of academic dishonesty and plagiarism. This is a role which – as yet – I have not held in the school, and it is informative to have a colleague reflect on that experience and present the issues of academic integrity, kickstarting the discussion as to the appropriate response in higher education.
Having read Kenneth’s post, I agree with the initial sentiment on the dangers posed – the profession, lecturers, tutors and students should all have an interest in the legitimacy of the LLB degree. Students should develop their own skills in critical assessment and review. It is not enough simply to document and repeat what the law says – you need to engage more fully, understand the rationale, and present a reasoned response.
The recent CLE sessions have been particularly interesting. Recently, Andres Guadamuz of the University of Sussex led a session detailing his experiences of integrating AI in the classroom, with a call to embrace it (including AI-generated art and images for slides). As part of this discussion, he detailed the frank and honest conversation he had with students on how and why they use AI. He reflected on how students initially feigned ignorance of AI, but were receptive to having a dialogue about how they use it. This was not exclusively for writing essays; it was for shortcuts – to summarise passages of text, for example. The next session (at the time of writing) is to be held on 24 April by Lydia Arnold of Harper Adams University, titled ‘Exploring our Response to AI in Higher Education’.
The chances are that I have marked one or several pieces of work which have used AI at some stage in the process – it may have been to consolidate typed-up lecture notes or the key facts of a case, or to generate a title for a dissertation. This is alarming, and Kenneth’s post is right to point out that there is no shortcut to succeeding on the LLB.
Kenneth’s post does not endorse one particular course of action – it seeks to offer some potential avenues class coordinators may consider. It finishes with the question: “what else might we do? Answers on a (written) postcard”. Coming off the back of the ALT conference and some of the CLE online sessions, I think it is worth adding my two electronic cents to the discussion – what is the general discourse in the field, and what are my (current) takeaways?
One of the potential suggestions in Kenneth’s post is that we need to review the use of language in question prompts. Class coordinators could adapt prompts in a way which confuses AI, exploiting the difference between how a lawyer and an AI system process a question. It is true that the underlying reasoning of the two differs. However, one of the papers presented in the same ALT Conference panel as my podcast paper was ‘The Curious Case of the Colliery and the Blanket – What on Earth do They Mean? The Language of Assessment’ by Dawn Jones, Lynn Ellison, Anima Sultana and Jack Whitehouse of the University of Wolverhampton. This paper documented the consequences of unclear language in questions (giving examples where students had not heard of a colliery, and one where ‘hen party’ didn’t quite translate, among others).
One point which was highlighted was that a neurodiverse student may be particularly impacted by certain prompts and uses of language (for example, ‘critically analyse and discuss’ is two prompts). We also have an international cohort of students at Strathclyde, and many would be sitting assessments in a second language. My concern is that redrafting assignment questions to trick AI could exacerbate these issues further. This is not to say that colleagues should not reflect on the phrasing of their questions in the light of AI, but it is a fine line to balance, and in using unnecessarily complex language we may be increasing obstacles.
The other option Kenneth presents to improve academic integrity is a return to more in-person assessment (exams and in-person presentations being two modes of assessment). On the former, I think there is still a place for in-person exams, but not exclusively. A closed-book exam, in particular, can mean that the process becomes a memory exercise, in the same way that Scrabble isn’t really a vocabulary board game – it’s about who can remember the most words in the dictionary. On the latter, the paper I presented on podcast recordings highlighted some of the benefits I see relative to an in-person presentation. In particular, there could be disputes about what was said in a presentation, and it is difficult to give an immediate mark, reflect and give feedback (particularly if you are – for want of a better way of saying this – calibrating the scale and may want to revisit earlier presentations). However, public speaking and presentation are skills which we need to develop, particularly in those entering the legal profession. Even prior to being a ‘lecturer’, I had to speak in public on a number of occasions outside the classroom (wedding speeches, funeral eulogies and job interviews). There is a place for in-person presentations (something which the Law Society of Scotland recognises), but they are not perfect.
So what other responses can we have when designing assessment in light of this? We collectively need to rethink what we mean by assessment – are we assessing core knowledge and understanding, are we building critical reasoning, or are we building transferable skills? It is very easy to stick to the traditional exam, essay and presentation format – these are more familiar to us as class coordinators.
However, engagement with the ideas and innovations presented at the ALT Annual Conference has shown just how broad-ranging assessments can be. After the podcasting paper I gave, an attendee asked me “why do you still have an individual essay? What are you looking to achieve by setting an essay?” They were looking to be supportive and to help the class in the longer term with the benefit of their experience in the legal education world. They (quite rightly) highlighted that my response was about covering more of the core content, and that a revised learning outcome would lead to more effective assessment.
Some of this is accelerated by the changes to the SQE in England and Wales, and the search for novel ways to teach and assess (whether that is moving away from lectures, or towards a more flipped-classroom dynamic for core teaching). Ultimately, schools need to have a range of forms of assessment (a form of pluralism). Exams and in-person presentations can form a large part of this, but if we stick too closely to assessing solely via essay, I think that only increases the concerns Kenneth expresses. Having said this, the Law Society still accredits the core LLB degree, and that accreditation may require essays and exams to form part of the assessment. This approach may, therefore, be primarily suitable for elective modules.
The shift in methods of assessment can initially seem daunting, but it means that law schools can design assessments which develop a range of skills. One of the answers may lie in Kenneth’s own experience, which he outlines in his post. He put a series of questions to the AI platform and analysed the responses for accuracy. He comments on the grade he would have awarded the essay and why, based on his skills and knowledge in the field. He has read extensively on the cases and examples, and he knows what relevant material is and is not there.
In light of this (and reflecting some of the CLE discussion), class coordinators could set an assessment where students are presented with an AI-generated essay in response to an assignment question (one which would still need to be approved by an external examiner). The task would place students in the position of a marker for the assignment – what mark would it receive, and why? Students would need to engage with the core cases and discussions beyond the AI platform to identify inconsistencies and factual flaws, or to highlight where an actual critical opinion has not been presented.
That would rely on appropriate guidance being given to students (a marking scheme and a description of the actual task and expectations), but it is a novel form of assessment which would still ask students to engage critically with the law, albeit in a different context. In that regard, AI would be used to advantage rather than disadvantage. It might also help students later, when receiving feedback from staff, to understand more about what goes into the process of marking and assessment.
I do share Kenneth’s concerns about academic integrity, and with new tech I can sometimes be the person still using a gramophone in the age of streaming. However, the conversations and debates in the legal education community are interesting, and this challenge could create new opportunities. It is going to take some work and further debate, and there may be some modules for which this simply doesn’t work. The debate on how we teach and assess has been accelerated by the SQE changes in England and Wales, and I don’t lose sight of the fact that Strathclyde is introducing a new curriculum accredited by the Law Society of Scotland – there are certain professional standards and modes of assessment on core programmes which have to be maintained. That being acknowledged, to me the answer has to lie in using more varied forms of assessment and in shifting what we mean by critical reasoning, engagement and transferable skills.