• Core Plus model for regulatory assessments policy

    Consultation on TEQSA’s Regulatory Risk Framework

    Consultation on TEQSA’s Regulatory Risk Framework will be launched on 19 March 2026.

    For more information on the consultation, please register to attend our TEQSA Talks webinar on Thursday 19 March from 2:00-2:45pm (AEDT).

    Alternatively, please revisit this page from Friday 20 March to access the consultation paper.


  • Other resources

    This section of the toolkit brings together a wide range of materials to support academic integrity from multiple perspectives. It provides guides, case studies and resources for educators and institutions seeking deeper insights and actionable strategies to strengthen integrity practices.

    Videos

    Summary video

    Academic integrity in Australian higher education – a national priority: Workshop video

    Academic integrity in Australian higher education – a national priority: 2025 update

    Slides
    Case studies
    Academic integrity organisations
    Translated resources
    Guides
  • Academic integrity policy and procedure

    Authors: Dr Amy Milka and Amanda Janssen, Adelaide University

    Focus area: Developing and benchmarking policies and procedures

    The merger of the University of Adelaide and University of South Australia presented a unique opportunity to shape academic integrity policy and procedure for a new institution. In approaching this task, we leveraged the mature policies of both institutions and benchmarked emerging approaches from other leaders in this space.

    The resulting Adelaide University policy and procedure [1] adopted some tried and tested approaches of the foundation universities, including different levels of committees, decision-making and resourcing for issues of different severity, and more recent innovations such as an ‘early resolution’ pathway, which offers a quicker, educative resolution in certain cases. Looking across the sector, we found that leaders in academic integrity were moving towards publishing matrices and clear rubrics for misconduct outcomes to ensure transparency and consistency. [2]

    Student and stakeholder consultation and feedback identified key features of the policies and procedures which were important to learners. These included:

    • clear and informative definitions of different types of misconduct
    • transparency about possible outcomes
    • efficient processes and timelines to allow student input and minimise impacts on academic progress and student wellbeing.

    Students were involved in co-creating and providing feedback on communications about the policy and procedure, including the letters sent during misconduct investigations.

    A unique challenge for our merged institution is the communication of changes to the policy and procedure to transitioning staff and student cohorts, who have awareness of historic policies and approaches. We have carefully considered this challenge in developing our academic integrity messaging.

    Clear and timely communication on policy changes is crucial to ensuring a shared culture of integrity as well as minimising misconduct issues based on lack of awareness or understanding.

    Building awareness of policies, practices and expectations is a cornerstone of academic integrity work at any institution to ensure a shared understanding among students and staff with different educational and institutional backgrounds, and different approaches to integrity. Successful policy implementation requires co-creation, visibility and clarity in the institutional message.

    Key lessons or points for implementation

    • Policies and procedures need to balance a range of competing considerations, namely transparency, fairness, efficiency, student experience and an educative approach (see image below).
    • Policies and procedures need to balance strategic goals with operational effectiveness, and consider issues such as workloads, systems and processes in procedural design.
    • An agile academic integrity policy and procedure requires a regular schedule for review.

    Image of considerations of academic integrity policy and procedure

    Notes

    1. Academic integrity policy
    2. For example, Deakin University and the University of Southern Queensland.
  • Gen AI policy evolution at Southern Cross University

    Authors: Professor Ruth Greenaway, Dr Zachery Quince, Southern Cross University

    Focus area: Governance

    Southern Cross University (SCU) took a first-principles approach to policy development, supporting a strategic goal of ubiquitous gen AI use and positioning gen AI as an educational tool. An initial binary model, in which academics either permitted or prohibited gen AI use, overlooked disciplinary needs, caused confusion for staff and students, and limited meaningful engagement.

    Seeking greater inclusivity and flexibility, SCU transitioned to a five-tier gen AI model, informed by the AI Assessment Scale and supporting the assessment principles of the Southern Cross Model. It mapped a continuum from prohibited use to open collaboration, specifying permissible uses at each tier. The model, though pedagogically robust, proved complex in practice, presenting challenges to staff adoption and consistent implementation. In 2025, SCU introduced the Gen AI Tool Use Descriptors, a pragmatic three-level model of assessment security. Assessments now explicitly indicate their gen AI stance at Level 1, 2 or 3.

    This approach is designed to normalise gen AI as part of academic practice while promoting accountability and meeting learning and teaching objectives. It is embedded in formal assessment protocols, with specific gen AI guidelines for each task, evidentiary requirements and a compulsory student declaration, fostering openness and ethical engagement.

    Implementation of the Gen AI Tool Use Descriptors is underpinned by the Gen AI Descriptor Use Staff Guidelines, which provide assessment specific scaffolding, best practice examples and clear, structured support tailored to different assessment types, enabling academics to confidently integrate gen AI tools into their teaching and evaluation processes.

    Grounded in robust research on ethical considerations and student learning behaviours, the guidelines help staff define task expectations, document gen AI use and navigate the complexities of balancing gen AI’s benefits and risks. These measures strengthen academic integrity by promoting ethical engagement with gen AI and fostering a culture of transparency, consistency and accountability.

    Key lessons or points for implementation

    • Establish a structured approach introducing models of gen AI use with clear guidelines for staff and students.
    • Adopt proactive educative strategies that provide comprehensive resources and examples for both staff and students, ensuring confidence and clarity in implementation.
    • Encourage a culture of ongoing adaptation in response to gen AI advancements and evolving industry practices.

    A list of 3 gen AI descriptors

    References

    • Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The Artificial Intelligence Assessment Scale (AIAS): A framework for ethical integration of generative AI in educational assessment. Journal of University Teaching and Learning Practice, 21(6).
    • Quince, Z., & Nikolic, S. (2025). Student identification of the social, economic and environmental implications of using Generative Artificial Intelligence (GenAI): Identifying student ethical awareness of ChatGPT from a scaffolded multi-stage assessment. European Journal of Engineering Education. Advance online publication.
  • Hypothetical contract cheating cluster investigation example: Identifying cheating at scale

    Authors and institution: Anonymous

    Focus area: Identifying the case

    A lecturer of an elective subject with 120 enrolled students noticed that around a dozen students were submitting identical or near-identical answers to the weekly short-answer question tasks in Moodle. When the lecturer reviewed the times at which the students finished the questions, they noticed that the tasks were often completed very close together (for example, within minutes of one another) or at unusual hours (for example, 2am). The lecturer became concerned that this might indicate that students were colluding, or that a third party was carrying out work for multiple students. The lecturer alerted their Faculty Academic Integrity Officer, who shared their concerns and referred the matter to the Central Integrity Team (CIT) for further review.

    The central integrity office investigation

    CIT obtained the Moodle logs for the elective subject and ran an analysis to look for shared internet protocol (IP) addresses among the students in the subject. This is something that is often observed where students have colluded or where third parties have carried out work for multiple students. CIT identified that 66 students shared non-campus IP addresses and that one of these IP addresses often carried out the weekly assessment task for multiple students one after another.

    IP address analysis identified that a majority were VPN connections; however, among the VPN activity the team also observed occasional non-VPN connections originating from Kenya. As Kenya is a known contract cheating hotspot, this gave CIT cause to suspect that the VPN connections were being operated by one or more individuals in Kenya, and that the students were therefore likely to have engaged in contract cheating.
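
    As an illustration of the kind of log analysis described above, the sketch below groups assessment completions by IP address and flags addresses shared by several students, especially where different students finish the same quiz one after another. The row format, field names and thresholds are assumptions for illustration only; they are not Moodle’s export schema or any integrity team’s actual tooling.

    ```python
    from collections import defaultdict
    from datetime import datetime, timedelta

    # Each row: (student_id, ip_address, quiz_id, finished_at as an ISO timestamp).
    # This schema and the thresholds below are illustrative assumptions only.
    def flag_shared_ips(rows, min_students=3, window=timedelta(minutes=30)):
        by_ip = defaultdict(list)
        for student, ip, quiz, finished in rows:
            by_ip[ip].append((student, quiz, datetime.fromisoformat(finished)))

        flagged = {}
        for ip, events in by_ip.items():
            students = {s for s, _, _ in events}
            if len(students) < min_students:
                continue  # an IP shared by two students may just be a shared household
            events.sort(key=lambda e: e[2])
            # Different students finishing the same quiz within minutes of each
            # other from one IP is the back-to-back pattern described above.
            sequential = [
                (a[0], b[0]) for a, b in zip(events, events[1:])
                if a[1] == b[1] and a[0] != b[0] and b[2] - a[2] <= window
            ]
            if sequential:
                flagged[ip] = {"students": sorted(students),
                               "sequential_pairs": len(sequential)}
        return flagged
    ```

    Output of this kind is a starting point for trained investigators, not evidence in itself: shared IP addresses can also reflect share-houses, workplaces or common VPN endpoints.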

    Contract cheating research shows that students who have engaged in contract cheating have often done so multiple times. Consequently, CIT expanded its investigation to include every subject the 66 students had participated in. CIT built a case for the Misconduct Committee to consider using the following evidence:

    • Shared IP addresses connected to assessments for multiple students on the same dates, including quizzes conducted from the same IP address in sequence, one after another.
    • Activity from contract cheating hotspots, such as Kenya or Pakistan, where it was contextually unusual for the student to be based (i.e. the student was located in Australia).
    • Impossible location changes in the Moodle logs based on IP address analysis.
    • Highly inconsistent document metadata, obtained from Turnitin.
    • Engagement data showing that the students often had very low engagement with subjects, and that their engagement was highly focused on assessment.

    Misconduct Committee finding

    The Misconduct Committee found that the 66 students had engaged in contract cheating in the subject of interest, and in additional subjects (an average of 10 subjects per student).

    Key lessons or points for implementation

    • Learning management systems hold valuable information that can be used to identify contract cheating at scale.
    • Trained investigators can conduct deeper analyses of the concerns an academic may have about anomalous student behaviour.
    • Students who have been found to have engaged in contract cheating once are more likely to have engaged in this practice multiple times, so it is worthwhile to expand the search beyond the initial case.
  • Assessment security: Understanding the risks

    Assessment security refers to the “measures taken to harden assessments against attempts to cheat” (Dawson, 2021, p. 2). It encompasses strategies and design choices intended to protect the integrity of assessment tasks by ensuring that the work submitted is:

    • the student's own
    • completed under the conditions intended
    • free from unauthorised assistance — including the use of generative artificial intelligence (gen AI) and contract cheating services.

    Assessment security is distinct from institutional activities designed to highlight the value of academic integrity and otherwise deter undesirable behaviours.

    The importance of assessment security extends beyond individual courses or student academic performance. It underpins the social licence of educational institutions: the public’s trust that qualifications represent genuine achievement and that graduates are competent in their fields. Without credible assessments, the value of a credential is diminished, damaging both institutional reputation and societal confidence in higher education outcomes.

    Institutions are increasingly adopting a program-level view of assessment, rather than limiting their view to each individual subject. Approaches such as programmatic assessment (Schuwirth & Van der Vleuten, 2011), the “2-lane” approach, and others seek to balance institutional logistics, assessment for learning and assessment of learning.

    It is generally advisable that assessments with low to medium security (see table below) do not carry a high weighting in marks or in progression to the next stage of a student’s program, but are used principally as formative rather than summative assessment. In this context, accurately evaluating the security level of various assessment types is important.

    The table below categorises common assessment formats by their relative vulnerability to academic misconduct.

    Assessment types and security levels

    Assessment type | Assessment security level | Explanation
    In-person supervised written exams | High | Conducted under invigilation; low risk of third-party help or collusion. No LMS involvement. Seating arrangements can reduce potential peer signalling.
    Oral exams / viva voce | High | Real-time interaction with assessors; impersonation is very difficult. Not vulnerable to LMS threats.
    In-class written tasks | High | Live conditions reduce risk of collusion or external help. Minimal LMS exposure, although digital in-class tasks can be conducted by third parties.
    Simulations / role plays (in-person) | High | Collusion is difficult due to the live, interactive nature. Tasks typically require spontaneous performance.
    Viva voces | High | Live conditions reduce risk of collusion or external help. Unlike presentations, content and answers cannot all be pre-prepared.
    Practical / lab-based assessments | Medium to high | Performance-based tasks limit outsourcing, but collusion can occur via shared work or peer support unless roles are clearly defined.
    Presentations (in-person) | Medium to high | Harder to collude during delivery, but prep materials (slides, scripts) can be developed by others.
    Online proctored exams | Medium | LMS credentials can be shared with third parties; proctoring may not detect collusion (e.g. messaging with peers during exams).
    Group projects | Low to medium | Intended collaboration can mask inappropriate collusion. One student may do all the work, or external help can be used. LMS tools may hide individual contributions, although students often have a clear understanding of what work their group members have undertaken.
    Presentations (recorded or online) | Low to medium | Higher collusion risk: peers may co-develop or edit content. Scripts or full videos can be produced externally and uploaded, increasing the risk of deepfakes.
    Peer review tasks | Low to medium | Students may coordinate reviews with friends, give favourable feedback, or copy others’ responses. Online platforms enable manipulation.
    Take-home exams (time-limited) | Low | Students can collaborate informally or share answers. LMS access can be shared to allow real-time assistance.
    Online quizzes (untimed/open-book) | Low | High collusion risk: students may complete quizzes together or share answers. Easy to outsource via shared LMS access.
    Essays / research papers | Low | High risk of contract cheating and peer collaboration. Students may exchange drafts or copy structure/arguments. LMS submission portals may be accessed by others.
    Portfolios / reflective journals | Low | Often completed individually, but prompts and reflections can be shared or copied between peers. LMS access may be used to upload third-party or peer-written content.
    Discussion board posts / participation | Low | Very high risk of collusion: students often copy or paraphrase each other’s posts. LMS accounts may be shared with others to post on behalf of students.
    Capstone projects / theses | Low | Students may collaborate inappropriately on research or writing. Risk of peer editing or contract cheating. Third-party LMS access may be used to submit externally produced work.

    Enhancing assessment security

    Various actions can be taken to enhance assessment security by making academic misconduct harder to engage in or easier to detect. Research into academic misconduct shows that some strategies academics intuitively believe will enhance assessment security do not work, while other strategies are more effective. At the same time, avoiding low-security assessments is recommended.

    Dawson (2021) noted three easily avoidable “assessment design mistakes” that produce low-security assessments:

    1. summative unsupervised online tests and quizzes
    2. recycled assignments from previous teaching periods
    3. take-home assignments with a single correct answer.

    It has been suggested that constraining students’ time to work on assignments may make it harder for them to engage in contract cheating because it may be difficult to find someone to complete the assignment at short notice. Evidence shows, however, that making turnaround times shorter with the aim of limiting students’ ability to outsource assignments makes cheating more likely.

    Surveys of students indicate that they are more likely to outsource assignments when under time pressure (Bretag et al., 2019), and analysis of contract cheating providers shows repeated claims of producing assignments at short notice (Wallace & Newton, 2014). Such short turnaround times extend to rapidly providing answers to quiz questions that students post to file-sharing sites (Lancaster & Cotarlan, 2021).

    More viable options to enhance assessment security include:

    • using text-matching software to aid in detecting plagiarism
    • training markers to recognise signs of academic misconduct
    • monitoring file-sharing sites for uploaded course information and assessments
    • using platforms that monitor students’ access to assessments and record version histories of their work
    • monitoring students’ engagement with learning management systems (see the sketch after this list)
    • training invigilators of exams, and academics who supervise in-class tests, to recognise and respond appropriately to academic misconduct.
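
    Monitoring students’ engagement with learning management systems can start from a very simple statistic: the share of a student’s LMS activity that is assessment-related. The sketch below assumes a flat export of events with a student ID and an event category; the field names and the 0.9 threshold are illustrative assumptions, not any LMS’s real schema.

    ```python
    from collections import Counter

    # events: iterable of (student_id, category) pairs, where category is
    # "assessment" for quiz/assignment events and "other" otherwise.
    # The schema and thresholds are assumptions for illustration only.
    def assessment_focused_students(events, threshold=0.9, min_events=20):
        totals, assessment = Counter(), Counter()
        for student, category in events:
            totals[student] += 1
            if category == "assessment":
                assessment[student] += 1
        # Flag students whose activity is almost entirely assessment-related:
        # the "low engagement, highly assessment-focused" pattern described
        # in the contract cheating case study earlier in this toolkit.
        return {s: assessment[s] / n for s, n in totals.items()
                if n >= min_events and assessment[s] / n >= threshold}
    ```

    As with the other signals above, a high ratio is a prompt for closer review by trained staff, not evidence of misconduct on its own.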

    References

    Bretag, T., Harper, R., Burton, M., Ellis, C., Newton, P., Rozenberg, P., ... & Van Haeringen, K. (2019). Contract cheating: A survey of Australian university students. Studies in Higher Education, 44(11), 1837-1856.

    Dawson, P. (2021). Defending Assessment Security in a Digital World: Preventing E-Cheating and Supporting Academic Integrity in Higher Education (1st ed.). Routledge.

    Lancaster, T., & Cotarlan, C. (2021). Contract cheating by STEM students through a file sharing website: a Covid-19 pandemic perspective. International Journal for Educational Integrity, 17(1), 3.

    Schuwirth, L. W. T., & Van der Vleuten, C. P. M. (2011). Programmatic assessment: From assessment of learning to assessment for learning. Medical Teacher, 33(6), 478–485.

    Bridgeman, A., Liu, D., & Weeks, R. The “2-lane” approach. University of Sydney.

    Wallace, M. J., & Newton, P. M. (2014). Turnaround time and market capacity in contract cheating. Educational Studies, 40(2), 233-236.
  • Detecting plagiarism of AI-generated text in student assessments and securing take-home written assessments

    Guy Curtis, University of Western Australia

    Since the release of ChatGPT in November 2022, a major concern for many academics has been students copying and pasting text produced by generative artificial intelligence (gen AI) programs into their assignments without acknowledgment. Such unacknowledged copying and pasting meets the traditional definition of plagiarism and is a case of academic misconduct.

    Substantiating cases of academic misconduct requires proving on the balance of probabilities that misconduct has occurred. This means that the evidence shows that misconduct is more likely to have occurred than not. A detected case is one that meets this standard of proof and is not overturned on appeal (Ellis et al., 2020). Finding sufficient evidence to prove plagiarism from gen AI is more challenging than substantiating plagiarism from published sources.

    In general, there is a strong case that substantive and systematic assessment redesign is needed in the age of gen AI (Corbin et al., 2025). In particular, highly secure assessments should be used to assess or verify key learning outcomes at a program level. Excellent guidance can be found in the University of Sydney’s two-lane approach: assessments in lane 1 are highly resourced and secure, and occur at key points in a course (or unit) to gain assurance of student learning outcomes, while assessments that facilitate learning, which are not as highly resourced or secure, sit in the more open lane 2 (Bridgeman, Liu, & Weeks, 2024; Liu & Bridgeman, 2023). Guidance on using artificial intelligence tools responsibly in studies and assessments places take-home written assessments, which would typically be a concern for plagiarism, in the “open” (lane 2) category, where gen AI use is permitted but must be acknowledged.

    In applying the two-lane approach to a written assessment, it is still necessary to detect instances of plagiarism in the form of unacknowledged inclusion of gen AI content. In addition, it has been argued that for educational reasons, in limited circumstances, educators may need to restrict the use of gen AI in some written assessments that are not completed under closely supervised in-class conditions (Curtis, 2025). Because of this, some capacity to detect plagiarism from gen AI is needed.

    Given that assessment security involves both making it more difficult to engage in misconduct, and easier to detect misconduct, an important consideration is whether take-home written assessments can be made more secure.

    Securing take-home written assessments

    Pre-gen AI, a typical take-home written assessment, such as an essay, would be completed by a student in their own time on their own device and they would only submit a completed piece of work, such as a Word or PDF document.  Although text-matching software provides security for such work against traditional copy-paste plagiarism, such assignments have always been relatively low in assessment security and vulnerable to academic misconduct such as contract cheating. They are particularly insecure when educators recycle assignment topics year after year.

    Some measures have been suggested that can be put in place to make academic misconduct, such as contract cheating and copying and pasting from gen AI, easier to detect in take-home written assignments. As well as improving ease of detection, such barriers to academic misconduct may also dissuade students from attempting to breach assessment rules, such as not acknowledging the inclusion of content pasted from gen AI, because the ability to detect such actions is more obvious.

    Strategy 1

    To improve the security of take-home written assessments, students can be required to maintain and submit a verifiable version history of their work (e.g. Berukov, 2025). Using technologies such as Google Docs, Microsoft 365, or Overleaf, students may be able to record and provide evidence of their process of compiling a take-home written assessment.
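
    As a sketch of what checking such a version history might involve, the function below walks a series of (timestamp, word count) snapshots and flags revisions where text appears faster than a plausible typing speed. The snapshot format and the words-per-minute threshold are assumptions for illustration; none of the platforms named above exposes exactly this interface.

    ```python
    from datetime import datetime

    # snapshots: list of (ISO timestamp, cumulative word count), oldest first,
    # e.g. derived from a document's revision history. Format is assumed.
    def flag_paste_like_jumps(snapshots, max_words_per_minute=40):
        parsed = [(datetime.fromisoformat(t), words) for t, words in snapshots]
        suspicious = []
        for (t0, w0), (t1, w1) in zip(parsed, parsed[1:]):
            minutes = max((t1 - t0).total_seconds() / 60.0, 1 / 60.0)
            rate = (w1 - w0) / minutes
            if rate > max_words_per_minute:
                # Large blocks of text arriving between close revisions look
                # pasted rather than typed; worth a closer look, not proof.
                suspicious.append((t1.isoformat(), w1 - w0, round(rate)))
        return suspicious
    ```

    A flagged jump is a prompt for a conversation rather than a finding: a student may legitimately draft offline and paste their work in.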

    Strategy 2

    Instruct students to work within programs, or with programs, that are designed to track the writing process. Commercial programs such as Cadmus, Inktrail, Turnitin Clarity and Grammarly Authorship use functions such as recording when content is pasted into the writing platform and regularly auto-saving work, such that the process of writing may be effectively “replayed”. These programs may have the added benefit of tracking important data that can be used to identify instances of contract cheating, such as login times, durations and IP addresses.

    Using techniques such as monitoring version history and write-in platforms provides educators with an opportunity to give students feedback on their process of writing an assessment, not just on the final product.

    Securing take-home written assessments is a first-line defence against unacknowledged plagiarism from gen AI. Nevertheless, further options must be considered in how to detect plagiarism from gen AI when such security measures are used, and when they are not.

    Gen AI detection tools

    Since the early 2000s, academics have relied on technological support to detect plagiarism in the form of text-matching software. However, while text-matching software links text to verifiable published sources and other students’ assignments, text produced by gen AI tools is not stored or published and therefore cannot be linked to text in a student’s assignment.

    In response to this problem, there have been various “gen AI detector” programs developed that attempt to estimate whether text was produced by gen AI. Such “gen AI detectors” examine linguistic and structural characteristics, including perplexity, burstiness and sentence structure, comparing them against patterns observed in both human and AI-generated text. This analysis leads to a probability estimate that text was AI-generated. However, people can display gen AI-style characteristics in their writing and gen AI tools can include “humanise” features or add-ons.
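
    To make the “perplexity” measurement concrete, the sketch below scores a passage with an open language model: low perplexity means the model finds the text highly predictable, one weak signal that it may be machine-generated. This is an illustration of the statistic using the open-source Hugging Face transformers library and GPT-2; it is not the method of any commercial detector.

    ```python
    import math

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # Mean negative log-likelihood per token under GPT-2, exponentiated.
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss
        return math.exp(loss.item())

    # Lower scores = more predictable text. Detectors combine measures like
    # this with burstiness (variation across sentences); no single number
    # is conclusive either way.
    print(perplexity("The quick brown fox jumps over the lazy dog."))
    ```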

    As a consequence, gen AI detector programs can at times falsely indicate that human-written text was AI-generated. Such false positives are highly problematic in the context of investigating plagiarism from gen AI and can create a high-stress situation for students who have been falsely accused of misconduct. As a result, institutions should use such detection tools with caution.

    Current evidence for the accuracy of gen AI detector programs is mixed. These programs can reasonably distinguish 100% human-written from 100% gen AI-written text, but are much less reliable when gen AI text has been edited by a human, mixed with human-produced writing, or when documents are short (e.g. fewer than 300 words) (Weber-Wulff et al., 2023). Additionally, most detection programs can currently be bypassed by gen AI add-ons that “humanise” text.

    Issues to consider when using gen AI detection tools to identify instances of academic misconduct:

    • The “AI score” alone is insufficient to bring an allegation of misconduct. Additional evidence is required to make an allegation of gen AI misuse.
      • Low gen AI scores may also mask gen AI-written text where an additional step has been taken to humanise it. Again, any score, whether high or low, is insufficient evidence by itself to allege misconduct.
    • “Humanisation” add-ons can bypass gen AI detectors.
    • A score from a gen AI detector program is not the probability that the assignment was AI-generated. For example, if a detector has a 1% false-positive rate, it will flag about 1 assignment in 100 with a high score (e.g. 80-90%) even when no students in a class of 100 used gen AI; in that case, the real probability that the flagged assignment was AI-generated is zero (the arithmetic is sketched after this list).
    • Unlicensed use of a gen AI detector program, whether free or via a personal subscription to a third-party platform, may breach your institution’s IT policy, privacy rules, intellectual property rules or copyright.
    • To mitigate the risk of confirmation bias, educators and investigators should look for evidence that disconfirms gen AI use, in addition to evidence that may confirm it, for assignments that have been flagged for gen AI content.
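
    The base-rate point above can be made precise with Bayes’ theorem. The sketch below computes the probability that a flagged assignment was actually AI-generated; the sensitivity and false-positive figures are illustrative, not measured properties of any detector.

    ```python
    def prob_ai_given_flag(prevalence, sensitivity, false_positive_rate):
        """P(AI-written | flagged), by Bayes' theorem. All inputs are rates 0-1."""
        flagged = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
        return sensitivity * prevalence / flagged if flagged else 0.0

    # Assume (for illustration) a detector with a 1% false-positive rate
    # and 90% sensitivity.
    print(prob_ai_given_flag(0.00, 0.90, 0.01))  # 0.0: if nobody used gen AI, every flag is false
    print(prob_ai_given_flag(0.05, 0.90, 0.01))  # ~0.83: even at 5% prevalence, ~1 in 6 flags is wrong
    ```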

    Clear signals of gen AI use in written assessments

    • Obvious indicators of gen AI use that have unintentionally been pasted directly into an assessment, such as:
      • “Certainly, I can give you an answer….”
      • “As a large language model…”
      • prompts used by students, included along with the text pasted into their assignment.
    • Inability of the student to answer questions about the assignment content, e.g. post-assignment viva.
    • Admission by student of unacknowledged use of gen AI.

    Possible signals of gen AI use in written assessments

    • Disparity in student’s skill level — a mismatch is evident between the skill demonstrated in class and in assessments (e.g. supervised vs unsupervised, written vs oral). This may raise suspicions of other forms of misconduct, such as contract cheating.
    • Made-up (mashed-up) references — a reference that does not match another source in a text-matching program is a potential clue that the reference is fabricated. A mashed-up reference may be highlighted by text-matching software with different sources matching the title and the journal, for example. Fabricated references typically constitute academic misconduct in and of themselves, without any need to prove that they resulted from the use of gen AI.
    • Perfectly written, mistake-free submissions — a perfectly written, quickly produced submission may be a signal of misconduct (see Word document properties, information on copy/paste chips in write-in programs such as Cadmus or Inktrail, and/or the time taken to write, or LMS metrics). It is important to remember that perfectly written text is not in itself a concern and may simply indicate good writing, permissible automated grammar checks or gen AI editorial assistance.
    • Awkward, inappropriate or unusually sophisticated word choices, or verbosity — waffle may be a stylistic clue indicating the use of a paraphrasing tool or gen AI.
    • Uniformly written responses — a lack of critical analysis that misses the point or fails to include key sources can be a signal of gen AI use.
    • Responses based on the title of the work — answers or summaries of sources that address key words in the title rather than the content of the work.
    • Assignments produced quickly — assignments completed in an extremely short time (see Word document properties for editing time, information on copy/paste chips and/or the time taken to write, or LMS metrics such as login times or time spent answering a question).
    • Text volume lacking edits — a large volume of text produced quickly with no or minimal edits (see Word document properties, information on copy/paste activity and/or the time taken to write, or LMS metrics).
    • Lack of editing or evidence of writing process — text pasted into a document rather than typed (see Word document metadata [RSID codes] or information on copy/paste chips); a sketch for reading these properties from a .docx file follows this list.
    • Assignment structure — answers or assignment content written mainly as bullet points or numbered lists.
    • Whistleblowers — whistleblowers can be helpful in raising concerns about academic misconduct, but their allegations must be independently verified with other evidence, as it is possible for allegations to be malicious.
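
    Several of the signals above refer to Word document properties. A .docx file is a ZIP archive, so cumulative editing time and the number of RSID (revision save ID) entries can be read with the Python standard library alone, as sketched below; how to interpret the numbers is the investigator’s judgement, and the expectations implied by the comments are assumptions.

    ```python
    import xml.etree.ElementTree as ET
    import zipfile

    APP = "{http://schemas.openxmlformats.org/officeDocument/2006/extended-properties}"
    W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

    def docx_edit_stats(path):
        with zipfile.ZipFile(path) as z:
            app = ET.fromstring(z.read("docProps/app.xml"))
            settings = ET.fromstring(z.read("word/settings.xml"))
        total = app.findtext(f"{APP}TotalTime")   # cumulative editing time, in minutes
        rsids = settings.find(f"{W}rsids")        # roughly one w:rsid per save session
        sessions = len(rsids.findall(f"{W}rsid")) if rsids is not None else 0
        # A long document with near-zero editing time and very few save
        # sessions is consistent with pasted rather than typed text, though
        # never proof on its own (metadata can be lost or manipulated).
        return {"editing_minutes": int(total) if total else None,
                "save_sessions": sessions}

    print(docx_edit_stats("essay.docx"))
    ```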

    References

    Bridgeman, A., Liu, D., & Weeks, R. (2024). Program level assessment design and the two-lane approach

    Berukov, N. (2025). Version control: how I combat the rise of generative AI in the classroom. Nature.

    Corbin, T., Dawson, P., & Liu, D. (2025). Talk is cheap: why structural assessment changes are needed for a time of GenAI. Assessment & Evaluation in Higher Education.

    Curtis, G. J. (2025). The two-lane road to hell is paved with good intentions: why an all-or-none approach to generative AI, integrity, and assessment is insupportable. Higher Education Research & Development.

    Ellis, C., van Haeringen, K., & House, D. (2020). Technology, policy and research: Establishing evidentiary standards for managing contract cheating cases. In T. Bretag (Ed.), A research agenda for academic integrity (pp. 138-151). Edward Elgar.

    Liu, D., & Bridgeman, A. (2023). What to do about assessments if we can’t out-design or out-run AI?

    Pitt, P., Dullaghan, K., & Sutherland-Smith, W. (2021). ‘Mess, stress and trauma’: Students’ experiences of formal contract cheating processes. Assessment & Evaluation in Higher Education, 46(4), 659-672. 

    Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O., ... & Waddington, L. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity, 19(26).

  • Don’t be sorry, just declare it: Promoting academic integrity and securing the essay in the age of gen AI

    Author: Benito Cao, The University of Adelaide

    Focus area: Making academic integrity visible

    The sudden irruption of generative artificial intelligence (gen AI) in higher education has sparked widespread concerns regarding the viability of essays as a form of assessment. Indeed, the argument often goes that large language models (LLMs) such as ChatGPT signal the death of the essay. But do they? Will they? Or, to paraphrase Mark Twain, is it that reports of the essay’s death are greatly exaggerated?

    This case study outlines a pedagogical initiative designed to promote academic integrity and secure the essay in the age of gen AI. The initiative reimagines the ‘two-lane’ approach that proposes the combination of secured assessments (lane 1) with unsecured open assessments (lane 2).

    The reimagined approach resembles the two-lane model in its partial reliance on in-person assessments to validate student learning. Yet it challenges the unrestricted nature of the lane 2 approach by illustrating the value and validity of a ‘middle lane’ approach, which focuses on an ecosystem that fosters and facilitates authentic learning (Curtis, 2025). The initiative relies on a pedagogical ecosystem designed to develop trust between students and teachers, notwithstanding that academic integrity requires that we ‘trust but verify’ in cases of potential academic misconduct.

    The pedagogical ecosystem includes the following elements:

    1. an exploration of the potential for gen AI tools to fabricate information, with illustrations of real-world ‘hallucinations’
    2. the provision of clear guidelines, with references to university policies and industry standards to showcase the rationale and relevance of the guidelines
    3. the requirement to include a gen AI appendix when students use gen AI in the production of their essays
    4. a reminder that students are expected to fully understand every aspect of their essay, and that if there is a concern about the use of gen AI tools exceeding the assessment guidelines, they may be asked to discuss the assignment before the mark is finalised
    5. explicit advice to keep drafts, notes, annotated readings and any other materials students have used, as evidence of how their essay has been produced in case its authorship is questioned
    6. the reliance on secured (in-person) assessments, worth between 30% and 50% of the overall mark, to help validate student learning and to compare against the essay’s preliminary mark if there are academic integrity concerns regarding the production of the essay.

    This pedagogical ecosystem is designed to enable the (relatively secure) implementation of a ‘middle lane’ approach that permits limited use of gen AI. Specifically, students are allowed to use gen AI tools to assist with idea generation and language expression. For example, I tell students:

    • if they struggle to come up with ideas for their essays, they can use gen AI but any ideas suggested by the tool must be validated
    • while they can use gen AI to assist with language expression, they should not allow the tool to take control of the narrative; the narrative should reflect their own voice.

    In essence, students are allowed a limited use of gen AI but are expected to remain the authors of their essays and to be transparent regarding their use of gen AI tools. Students are reminded of this basic expectation of transparency in the assignment submission portal. This is the last thing they read before uploading their essay:

    Don't forget to include a gen AI appendix if you have used gen AI tools (for example, ChatGPT, Copilot, Gemini, Claude, Grammarly, etc.) in the production of the essay. The absence of this appendix is equivalent to stating: I did not use gen AI. If this statement turns out to be false, this would constitute a breach of academic integrity. Remember the slogan: Don't be sorry, just declare it.

    The approach, which I have titled Don't be sorry, just declare it, reflects the integration of four normative principles: caution, trust, relevance and transparency (Cao, 2025). The slogan echoes Australian customs and biosecurity messaging that urges people arriving in Australia to declare any goods they may not be permitted to bring into the country, rather than apologise afterwards for the lack of a declaration.

    The evidence suggests that this approach can go a long way towards addressing some of the most urgent pedagogical challenges posed by gen AI, particularly concerns about academic integrity. The evidence also suggests that this approach can improve the security of the essay and thus contribute to its preservation as a valuable form of assessment in the age of gen AI.

    References

    Cao, B. (2025). Don’t Be Sorry, Just Declare It: Pedagogical Principles for the Ethical Use of ChatGPT, Master Bullshit Artist of Our Time. In: 11th International Conference on Higher Education Advances (HEAd’25). Valencia, 17-20 June 2025.

    Curtis, G. J. (2025). The two-lane road to hell is paved with good intentions: why an all-or-none approach to generative AI, integrity, and assessment is insupportable. Higher Education Research and Development. (Published online: 18 March 2025).

  • Principles for criteria and standards in assessment for gen AI use

    Author: Dom McGrath, The University of Queensland

    Focus area: Assessment design

    Advances in generative artificial intelligence (gen AI) capabilities, and our responses to them, are changing assessment practices. Where gen AI use is permitted in assessment, teaching staff are grappling with how to redesign these tasks to ensure they remain valid measurements of learning outcomes. At the University of Queensland (UQ), we have developed principles to support the design of criteria and standards for assessment where students may use gen AI (see below).

    Adapting rubrics in assessment where AI may be used: principles and implications for practice

    The following principles and examples have been developed to support UQ staff designing open assessment, that is, assessment where AI use is permitted. The principles are general advice to support design, not a policy position that must be followed. This advice has been developed in response to questions from UQ staff and students, with input from the Transforming Assessment Team and the broader UQ Learning Design Community.

    Focus on the intended learning, not on catching cheating

    Principle: Criteria and standards should speak to the learning the task is designed to evidence.

    The availability of AI increases the need for clarity about the learning intended to be assessed. Criteria and standards should be fair and transparently related to the Learning Outcomes of the course. Adding descriptors aimed at spotting misconduct confuses students and markers and rarely works. Instead, make explicit what learning must be demonstrated and how quality will be judged.

    Implications for practice

    • Start with verbs in the Learning Outcome – consider using them in the criterion stem (e.g., “analyse…”, “design…”).
    • Strip out “gotcha” language – no “demonstrates originality” or “work is human‑generated”.
    • Remind markers that suspicion ≠ evidence; direct them to assess with the standard descriptors.

    Plan a progression of AI expectations across courses (within programs and plans)

    Principle: Map how AI use, acknowledgement and rubric language mature across courses.

    Students’ learning experience spans multiple courses within and across semesters. Planning AI expectations and rubrics across plans and programs enhances students’ experience and reduces confusion by providing integrated guidance and expectations. Program and plan convenors may be well placed to lead the development of coherent plans for AI expectations.

    Implications for practice

    • Talk with colleagues teaching courses before, alongside, and after yours – consider similarities and differences in what is asked of students.
    • Talk with your students about expectations in your course and their other courses.

    Assess how AI is acknowledged, not what AI produced

    Principle: The content of an AI acknowledgement should not impact marks; however, the inclusion and appropriate styling of the acknowledgement may be assessed.

    We cannot reliably verify every AI interaction, so we should incentivise honest, transparent reporting rather than punishments that could drive concealment. Providing students with clear guidance for acknowledgement that is not onerous will support responsible academic practices around transparency in AI use.

    Implications for practice

    • Where appropriate, include acknowledgement as part of a criterion (e.g. alongside formatting, referencing styles or other requirements).
    • Make acknowledgement guidance clear and as simple as possible, including exemplars and guided practice.

    Assess (responsible) AI use when it is an outcome

    Principle: Where responsible AI engagement is explicitly listed in the learning outcomes, AI use can be required and included in rubric descriptors (e.g., defensibly selects model, uses effective prompts, evaluates and appropriately uses outputs).

    Principle: Where students have a choice to use AI in assessment, their choice to use AI should not impact how their work is assessed.

    Responsible AI use and ethics should be assessed when it is an explicit learning outcome. Across our programs we should be identifying multiple points where we teach and assess responsible disciplinary use of AI. Some level of secure assessment may be required to have confidence in how students are using AI.

    While we recognise that the quality of a student’s work may be affected by their use of AI, if we cannot reliably identify what students have done with AI, we should not use it as a basis of assessment. We cannot differentiate criteria and standards based on students’ declared AI use.

    Implications for practice

    • Where AI use is a Learning Outcome, clearly identify where and how it is assessed.
    • AI use can be recommended in any task but only required where AI use is a Learning Outcome.
    • Where AI use is not assessed, grade the output only; ignore whether AI was used.

    Provide equitable access to AI, and where feasible an opt-out

    Principle: If a learning outcome requires AI, all students must have practical access and may be required to use it; where AI is optional, an equivalent non‑AI pathway should exist.

    Where AI is included in a course Learning Outcome, students must have suitable access to AI tools and may be required to use AI in assessment. Where AI is not included in a course Learning Outcome, students may be encouraged to use AI, but a suitable alternative should be available so that students can abstain from AI use.

    Implications for practice

    • Ensure students have suitable access to AI tools and communicate which tools are recommended.
    • Where AI is not assessed but recommended, provide an alternative pathway: for example, allow manual steps (e.g. hand-sketching a design) assessed against the same criteria.
    • Ensure expectations are clearly communicated to students; for example, include a statement such as “Students may choose not to use AI; all criteria can be met without it.” in the course site and assessment documents.

    Reduce the weighting or assessment of offloadable activities (grammar, etc.)

    Principle: Lower the weighting of activities that AI can automate; in many cases this includes grammar, spelling or basic graphic layout, unless they are core to the learning outcome.

    A growing range of activities can be offloaded to AI, and many assessments require students to engage with these activities even though they are not related to the purpose of the assessment. For example, in many written tasks, grammar, spelling and written expression are required for the work to be effective but are not the learning outcomes being assessed. We should expect a higher standard in these areas for students to pass, but these criteria should not be the deciding factor in whether a student’s work is awarded a mark of 6 or 7.

    Implications for practice

    • Be clear about the key learning outcomes students must achieve, to focus attention and support on areas that cannot be compromised.
    • Free up time to provide targeted support and guidance.

    Staff need to have current knowledge of AI and access to AI tools

    Principle: Staff designing and marking assessment must understand AI affordances and limitations and regularly review rubrics to ensure criteria remain fit for purpose.

    Implications for practice

    • Review your assessment and rubrics each semester – consider adding a standing agenda item to course review meetings.
    • Moderation checklist – how is the assessment being impacted by AI?