Author: Benito Cao, The University of Adelaide
Focus area: Making academic integrity visible
The sudden irruption of generative artificial intelligence (gen AI) in higher education has sparked widespread concerns regarding the viability of essays as a form of assessment. Indeed, the argument often goes that large language models (LLMs) such as ChatGPT signal the death of the essay. But do they? Will they? Or, to paraphrase Mark Twain, is it that reports of the essay’s death are greatly exaggerated?
This case study outlines a pedagogical initiative designed to promote academic integrity and secure the essay in the age of gen AI. The initiative reimagines the ‘two-lane’ approach, which proposes combining secured assessments (Lane 1) with unsecured, open assessments (Lane 2).
The reimagined approach resembles the two-lane model in its partial reliance on in-person assessments to validate student learning. Yet it challenges the unrestricted nature of the Lane 2 approach by illustrating the value and validity of a ‘middle lane’ approach, one that focuses on the ecosystem to foster and facilitate authentic learning (Curtis 2025). The initiative relies on a pedagogical ecosystem designed to develop trust between students and teachers, while acknowledging that academic integrity requires that we ‘trust but verify’ in cases of potential academic misconduct.
The pedagogical ecosystem includes the following elements:
- an exploration of the potential for gen AI tools to fabricate information, with illustrations of real-world ‘hallucinations’
- the provision of clear guidelines, with references to university policies and industry standards to showcase the rationale and relevance of the guidelines
- the requirement to include a gen AI appendix when students use gen AI in the production of their essays
- a reminder that students are expected to fully understand every aspect of their essay and that, if there is a concern that their use of gen AI tools has exceeded the assessment guidelines, they may be asked to discuss the assignment before the mark is finalised
- explicit advice to keep drafts, notes, annotated readings and any other materials students have used, as evidence of how their essay has been produced in case its authorship is questioned
- the reliance on secured (in-person) assessments, worth between 30% and 50% of the overall mark, to help validate student learning and to compare with the essay’s preliminary mark if there are academic concerns regarding the production of the essay.
This pedagogical ecosystem is designed to enable the (relatively secure) implementation of a ‘middle lane’ approach that permits a limited use of gen AI. Specifically, students are allowed to use gen AI tools to assist with idea generation and language expression. For example, I tell students:
- if they struggle to come up with ideas for their essays, they can use gen AI, but any ideas suggested by the tool must be validated
- while they can use gen AI to assist with language expression, they should not allow the tool to take control of the narrative: the narrative should reflect their own voice.
In essence, students are allowed a limited use of gen AI but are expected to remain the authors of their essays and to be transparent regarding their use of gen AI tools. Students are reminded of this basic expectation of transparency in the assignment submission portal. This is the last thing they read before uploading their essay:
Don't forget to include a gen AI appendix if you have used gen AI tools (for example, ChatGPT, Copilot, Gemini, Claude, Grammarly, etc.) in the production of the essay. The absence of this appendix is equivalent to stating: I did not use gen AI. If this statement turns out to be false, this would constitute a breach of academic integrity. Remember the slogan: Don't be sorry, just declare it.
The approach, which I have titled ‘Don't be sorry, just declare it’, reflects the integration of four normative principles: caution, trust, relevance and transparency (Cao 2025). The slogan is used by Australian Customs and Biosecurity to warn people arriving in Australia to declare any goods they might not be permitted to bring into the country, rather than apologise afterwards for the lack of a declaration.
The evidence suggests that this approach can go a long way towards addressing some of the most urgent pedagogical challenges posed by gen AI, particularly concerns about academic integrity. The evidence also suggests that this approach can improve the security of the essay and thus contribute to its preservation as a valuable form of assessment in the age of gen AI.
References
Cao, B. (2025). Don’t Be Sorry, Just Declare It: Pedagogical Principles for the Ethical Use of ChatGPT, Master Bullshit Artist of Our Time. In: 11th International Conference on Higher Education Advances (HEAd’25). Valencia, 17-20 June 2025.
Curtis, G. J. (2025). The two-lane road to hell is paved with good intentions: why an all-or-none approach to generative AI, integrity, and assessment is insupportable. Higher Education Research and Development. (Published online: 18 March 2025).