• Compliance monitoring approach

    Australian higher education providers must meet a range of obligations under the Higher Education Standards Framework (Threshold Standards) 2021 (HES Framework).

Our compliance monitoring approach helps us identify current or emerging risks of non-compliance with these obligations and address them proactively.

    This work is informed by our Compliance Monitoring Framework, provider risk assessments and other information sources.

    TEQSA uses a prioritisation model to identify risks and allocate resources. This includes setting annual compliance priorities to focus our work.

    In the event we identify non-compliance with the HES Framework, our response is guided by our Compliance and Enforcement policy.

    We also publish an annual compliance report that outlines our priorities and updates the sector on progress from the previous year.

    This report also includes guidance for higher education providers to help them meet obligations.

  • Risk Assessment Framework consultation: Summary report


    Consultation on TEQSA’s Regulatory Risk Framework

Consultation on TEQSA’s Regulatory Risk Framework will be launched on 19 March 2026.

    For more information on the consultation, please register to attend our TEQSA Talks webinar on Thursday 19 March from 2:00pm to 2:45pm (AEDT).

    Alternatively, please revisit this page from Friday 20 March to access the consultation paper.

  • Risk ratings: Examples of relevant context and provider controls that TEQSA considers

  • Core Plus model for regulatory assessments policy



  • Other resources

    Banner with the text: Academic integrity toolkit - Other resources

    This section of the toolkit brings together a wide range of materials to support academic integrity from multiple perspectives. It provides guides, case studies and resources for educators and institutions seeking deeper insights and actionable strategies to strengthen integrity practices.

This section includes:

    • Videos: a summary video, the workshop video ‘Academic integrity in Australian higher education – a national priority’, and its 2025 update
    • Slides
    • Case studies
    • Academic integrity organisations
    • Translated resources
    • Guides
  • Academic integrity policy and procedure

    Banner with the text: Academic integrity toolkit: Case study

    Authors: Dr Amy Milka and Amanda Janssen, Adelaide University

    Focus area: Developing and benchmarking policies and procedures

    The merger of the University of Adelaide and University of South Australia presented a unique opportunity to shape academic integrity policy and procedure for a new institution. In approaching this task, we leveraged the mature policies of both institutions and benchmarked emerging approaches from other leaders in this space.

The resulting Adelaide University policy and procedure[1] adopted some tried and tested approaches of the foundation universities, including different levels of committees, decision-making and resourcing for issues of different severity, and more recent innovations such as an ‘early resolution’ process, which offers a quicker, educative resolution in certain cases. Looking across the sector, we found that leaders in academic integrity were moving towards publishing matrices and clear rubrics for misconduct outcomes to ensure transparency and consistency.[2]

    Student and stakeholder consultation and feedback identified key features of the policies and procedures which were important to learners. These included:

    • clear and informative definitions of different types of misconduct
    • transparency about possible outcomes
    • efficient processes and timelines to allow student input and minimise impacts on academic progress and student wellbeing.

    Students were involved in co-creating and providing feedback on communications about the policy and procedure, including the letters sent during misconduct investigations.

    A unique challenge for our merged institution is the communication of changes to the policy and procedure to transitioning staff and student cohorts, who have awareness of historic policies and approaches. We have carefully considered this challenge in developing our academic integrity messaging.

    Clear and timely communication on policy changes is crucial to ensuring a shared culture of integrity as well as minimising misconduct issues based on lack of awareness or understanding.

Building awareness of policies, practices and expectations is a cornerstone of academic integrity work at any institution: it ensures a shared understanding among students and staff who bring different educational and institutional backgrounds, and different approaches to integrity. Successful policy implementation requires co-creation, visibility and clarity in the institutional message.

    Key lessons or points for implementation

    • Policies and procedures need to balance a range of competing considerations, namely transparency, fairness, efficiency, student experience and an educative approach (see image below).
    • Policies and procedures need to balance strategic goals with operational effectiveness, and consider issues such as workloads, systems and processes in procedural design.
    • An agile academic integrity policy and procedure requires a regular schedule for review.

    Image of considerations of academic integrity policy and procedure

    Notes

    1. Academic integrity policy
    2. For example, Deakin University and the University of Southern Queensland.
  • Gen AI policy evolution at Southern Cross University

    Banner with the text: Academic integrity toolkit: Case study

    Authors: Professor Ruth Greenaway, Dr Zachery Quince, Southern Cross University

    Focus area: Governance

    Southern Cross University (SCU) took a first principles approach to policy development, supporting a strategic goal of ubiquitous gen AI use and positioning gen AI as an educational tool. An initial, binary model, where academics either permitted or prohibited gen AI use, overlooked disciplinary needs, causing confusion for staff and students, and limiting meaningful engagement.

Seeking greater inclusivity and flexibility, SCU transitioned to a five-tier gen AI model, informed by the AI Assessment Scale and supporting the assessment principles of the Southern Cross Model. It mapped a continuum from prohibited use to open collaboration, specifying permissible uses. The model, though pedagogically robust, proved complex in practice, presenting challenges to staff adoption and consistent implementation. In 2025, SCU introduced the Gen AI Tool Use Descriptors, a pragmatic three-level model of assessment security. Assessments now explicitly indicate their gen AI stance as Level 1, 2 or 3.

    This approach is designed to normalise gen AI as part of academic practice while promoting accountability and meeting the learning and teaching objectives. It is embedded in formal assessment protocols, with specific gen AI guidelines available for each task, evidentiary requirements and a compulsory student declaration, fostering openness and ethical engagement.

    Implementation of the Gen AI Tool Use Descriptors is underpinned by the Gen AI Descriptor Use Staff Guidelines, which provide assessment specific scaffolding, best practice examples and clear, structured support tailored to different assessment types, enabling academics to confidently integrate gen AI tools into their teaching and evaluation processes.

    Grounded in robust research on ethical considerations and student learning behaviours, the guidelines help staff define task expectations, document gen AI use and navigate the complexities of balancing gen AI’s benefits and risks. These measures strengthen academic integrity by promoting ethical engagement with gen AI and fostering a culture of transparency, consistency and accountability.

    Key lessons or points for implementation

• Establish a structured approach to introducing models of gen AI use, with clear guidelines for staff and students.
    • Adopt proactive educative strategies that provide comprehensive resources and examples to support both staff and students, ensuring confidence and clarity in implementation.
    • Encourage a culture of ongoing adaptation in response to gen AI advancements and evolving industry practices.

Image: the three gen AI tool use descriptors

    References

• Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The Artificial Intelligence Assessment Scale (AIAS): A framework for ethical integration of generative AI in educational assessment. Journal of University Teaching and Learning Practice, 21(6).
    • Quince, Z., & Nikolic, S. (2025). Student identification of the social, economic and environmental implications of using Generative Artificial Intelligence (GenAI): Identifying student ethical awareness of ChatGPT from a scaffolded multi-stage assessment. European Journal of Engineering Education. Advance online publication.
  • Hypothetical contract cheating cluster investigation example: Identifying cheating at scale

    Banner with the text: Academic integrity toolkit: Case study

    Authors and institution: Anonymous

    Focus area: Identifying the case

A lecturer of an elective subject with 120 enrolled students noticed that around a dozen students were submitting identical or near-identical answer patterns to the weekly short-answer question tasks in Moodle. When the lecturer reviewed the times at which these students finished the questions, they also noticed that the tasks were often completed very close together in time (for example, within minutes of one another), or at unusual hours (for example, 2am). The lecturer became concerned that this may indicate the students were colluding, or that a third party was carrying out work for multiple students. The lecturer alerted their Faculty Academic Integrity Officer, who shared their concerns and referred the matter to the Central Integrity Team (CIT) for further review.
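The lecturer's first observation, near-identical answers, can be approximated with a simple pairwise text comparison. The sketch below is a minimal illustration only: the student IDs and answers are hypothetical, and a real workflow would rely on the institution's text-matching software rather than ad hoc scripts.

```python
from difflib import SequenceMatcher

# Hypothetical submissions: student ID -> short-answer text (illustrative only).
answers = {
    "s01": "Photosynthesis converts light energy into chemical energy in chloroplasts.",
    "s02": "Photosynthesis converts light energy into chemical energy in chloroplasts!",
    "s03": "Plants use sunlight, water and carbon dioxide to make glucose and oxygen.",
}

def near_identical_pairs(answers, threshold=0.9):
    """Flag pairs of submissions whose texts are nearly identical."""
    students = sorted(answers)
    pairs = []
    for i, a in enumerate(students):
        for b in students[i + 1:]:
            # ratio() is 1.0 for identical strings, near 0.0 for unrelated ones.
            ratio = SequenceMatcher(None, answers[a], answers[b]).ratio()
            if ratio >= threshold:
                pairs.append((a, b, round(ratio, 2)))
    return pairs

print(near_identical_pairs(answers))  # flags the s01/s02 pair only
```

Flagged pairs are a prompt for human review, not evidence of misconduct in themselves; legitimate causes (quoting the same source, templated answers) produce the same signal.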

    The central integrity office investigation

CIT obtained the Moodle logs for the elective subject and ran an analysis to look for shared internet protocol (IP) addresses among the students in the subject. Shared IP addresses are often observed where students have colluded or where third parties have carried out work for multiple students. CIT identified that 66 students shared non-campus IP addresses, and that one of these IP addresses was often used to complete the weekly assessment task for multiple students, one after another.

IP address analysis identified that a majority were VPN connections; however, among the VPN activity the team also observed occasional non-VPN connections originating from Kenya. As Kenya is a known contract cheating hotspot, this gave CIT cause to suspect that the VPN connections were being operated by one or more individuals in Kenya, and that the students were therefore likely to have engaged in contract cheating.

    Contract cheating research shows that students who have engaged in contract cheating have often done so multiple times. Consequently, CIT expanded its investigation to include every subject the 66 students had participated in. CIT built a case for the Misconduct Committee to consider using the following evidence:

    • Shared IP addresses connected to assessments for multiple students on the same dates, including quizzes conducted from the same IP address in sequence, one after another.
    • Activity from contract cheating hotspots, such as Kenya or Pakistan, where it was contextually unusual for the student to be based (i.e. the student was located in Australia).
    • Impossible location changes in the Moodle logs based on IP address analysis.
    • Highly inconsistent document metadata, obtained from Turnitin.
    • Engagement data that showed the students had often had very low engagement with subjects, and that engagement was highly focused on assessment.
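The first two patterns above, shared off-campus IP addresses and back-to-back completions from one address, can be sketched with a few lines of log analysis. This is a minimal illustration, not CIT's actual tooling: the log fields, student IDs, addresses and time window are all hypothetical.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical, simplified log records: (student_id, ip_address, finished_at).
# Real LMS logs carry far more fields; these names are illustrative only.
logs = [
    ("s01", "203.0.113.7", datetime(2025, 3, 3, 2, 4)),
    ("s02", "203.0.113.7", datetime(2025, 3, 3, 2, 9)),
    ("s03", "203.0.113.7", datetime(2025, 3, 3, 2, 15)),
    ("s04", "198.51.100.2", datetime(2025, 3, 3, 14, 30)),
]

def shared_ip_groups(logs, min_students=2):
    """Group students by IP address; report addresses used by several students."""
    by_ip = defaultdict(set)
    for student, ip, _ in logs:
        by_ip[ip].add(student)
    return {ip: sorted(students) for ip, students in by_ip.items()
            if len(students) >= min_students}

def sequential_completions(logs, window=timedelta(minutes=15)):
    """Flag IPs where different students finished tasks within a short window,
    one after another, a pattern consistent with a single operator."""
    by_ip = defaultdict(list)
    for student, ip, finished in logs:
        by_ip[ip].append((finished, student))
    flagged = {}
    for ip, events in by_ip.items():
        events.sort()
        runs = [(a[1], b[1]) for a, b in zip(events, events[1:])
                if a[1] != b[1] and b[0] - a[0] <= window]
        if runs:
            flagged[ip] = runs
    return flagged

print(shared_ip_groups(logs))        # one address shared by three students
print(sequential_completions(logs))  # back-to-back finishes from that address
```

As with any such screen, shared household or accommodation networks can trigger the same signals, so flagged groups need the contextual review described above (VPN use, geolocation, engagement data) before any referral.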

    Misconduct Committee finding

The Misconduct Committee found that the 66 students had engaged in contract cheating in the subject of interest, and in additional subjects (an average of 10 subjects per student).

    Key lessons or points for implementation

• Learning management systems hold valuable information that can be used to identify contract cheating at scale.
    • Trained investigators can conduct deeper analyses of concerns an academic may have about anomalous student behaviour.
    • Students found to have engaged in contract cheating once are likely to have done so multiple times, so it is worthwhile expanding an investigation beyond the initial case.
  • Assessment security: Understanding the risks

    Assessment security refers to the “measures taken to harden assessments against attempts to cheat” (Dawson, 2021, p. 2). It encompasses strategies and design choices intended to protect the integrity of assessment tasks by ensuring that the work submitted is:

    • the student's own
    • completed under the conditions intended
    • free from unauthorised assistance — including the use of generative artificial intelligence (gen AI) and contract cheating services.

    Assessment security is distinct from institutional activities designed to highlight the value of academic integrity and otherwise deter undesirable behaviours.

The importance of assessment security extends beyond individual courses or student academic performance. It underpins the social licence of educational institutions: the public’s trust that qualifications represent genuine achievement and that graduates are competent in their fields. Without credible assessments, the value of a credential is diminished, damaging both institutional reputation and societal confidence in higher education outcomes.

    It is increasingly the case that institutions are adopting a program-level view of assessments, rather than limiting their view to each individual subject. Approaches such as Programmatic Assessment, the “2-lane” approach, and others seek to achieve a balance between institutional logistics, assessment for learning and assessment of learning.

It is generally advisable that assessments with low to medium security (see table below) do not carry a high weighting in terms of marks or progression to the next stage of a student’s program, but are instead used principally for formative rather than summative assessment. In this context, accurately evaluating the security level of various assessment types is important.

    The table below categorises common assessment formats by their relative vulnerability to academic misconduct.

    Assessment types and security levels

• In-person supervised written exams (High): Conducted under invigilation; low risk of third-party help or collusion. No LMS involvement. Seating arrangements can reduce potential peer signalling.
    • Oral exams / viva voce (High): Real-time interaction with assessors makes impersonation very difficult, and unlike presentations, content and answers cannot all be pre-prepared. Not vulnerable to LMS threats.
    • In-class written tasks (High): Live conditions reduce risk of collusion or external help. Minimal LMS exposure, although digital in-class tasks can be conducted by third parties.
    • Simulations / role plays, in-person (High): Collusion is difficult due to the live, interactive nature; tasks typically require spontaneous performance.
    • Practical / lab-based assessments (Medium to high): Performance-based tasks limit outsourcing, but collusion can occur via shared work or peer support unless roles are clearly defined.
    • Presentations, in-person (Medium to high): Harder to collude during delivery, but preparation materials (slides, scripts) can be developed by others.
    • Online proctored exams (Medium): LMS credentials can be shared with third parties; proctoring may not detect collusion (e.g. messaging with peers during exams).
    • Group projects (Low to medium): Intended collaboration can mask inappropriate collusion; one student may do all the work, or external help can be used. LMS tools may hide individual contributions, although students often have a clear understanding of what work their group members have undertaken.
    • Presentations, recorded or online (Low to medium): Higher collusion risk, as peers may co-develop or edit content. Scripts or full videos can be produced externally and uploaded, increasing the risk of deepfakes.
    • Peer review tasks (Low to medium): Students may coordinate reviews with friends, give favourable feedback, or copy others’ responses. Online platforms enable manipulation.
    • Take-home exams, time-limited (Low): Students can collaborate informally or share answers; LMS access can be shared to allow real-time assistance.
    • Online quizzes, untimed/open-book (Low): High collusion risk, as students may complete quizzes together or share answers. Easy to outsource via shared LMS access.
    • Essays / research papers (Low): High risk of contract cheating and peer collaboration; students may exchange drafts or copy structure and arguments. LMS submission portals may be accessed by others.
    • Portfolios / reflective journals (Low): Often completed individually, but prompts and reflections can be shared or copied between peers. LMS access may be used to upload third-party or peer-written content.
    • Discussion board posts / participation (Low): Very high risk of collusion; students often copy or paraphrase each other’s posts, and LMS accounts may be shared with others to post on behalf of students.
    • Capstone projects / theses (Low): Students may collaborate inappropriately on research or writing; risk of peer editing or contract cheating. Third-party LMS access may be used to submit externally produced work.

    Enhancing assessment security

Various actions can be taken to enhance assessment security by making academic misconduct harder to engage in or easier to detect. Research into academic misconduct shows that some strategies academics intuitively expect to enhance assessment security do not work, while other strategies are more effective. At the same time, avoiding low-security assessments is recommended.

    Dawson (2021) noted three low-security assessments that are easily avoidable “assessment design mistakes”:

    1. summative unsupervised online tests and quizzes
    2. recycled assignments from previous teaching periods
    3. take-home assignments with a single correct answer.

    It has been suggested that constraining students’ time to work on assignments may make it harder for them to engage in contract cheating because it may be difficult to find someone to complete the assignment at short notice. Evidence shows, however, that making turnaround times shorter with the aim of limiting students’ ability to outsource assignments makes cheating more likely.

Surveys of students indicate that they are more likely to outsource assignments when under time pressure (Bretag et al., 2019), and analysis of contract cheating providers shows repeated claims of producing assignments at short notice (Wallace & Newton, 2014). These short turnaround times include rapidly providing answers to questions posted in online quizzes (Lancaster & Cotarlan, 2021).

    More viable options to enhance assessment security include:

    • using text-matching software to aid in detecting plagiarism
    • training markers to recognise signs of academic misconduct
    • monitoring file-sharing sites for uploaded course information and assessments
    • using platforms that monitor students’ access to assessments and record version histories of their work
    • monitoring students’ engagement with learning management systems
    • training invigilators of exams, and academics who supervise in-class tests, to recognise and respond appropriately to academic misconduct.

    References

    Bretag, T., Harper, R., Burton, M., Ellis, C., Newton, P., Rozenberg, P., ... & Van Haeringen, K. (2019). Contract cheating: A survey of Australian university students. Studies in Higher Education, 44(11), 1837-1856.

Dawson, P. (2021). Defending Assessment Security in a Digital World: Preventing E-Cheating and Supporting Academic Integrity in Higher Education (1st ed.). Routledge.

    Lancaster, T., & Cotarlan, C. (2021). Contract cheating by STEM students through a file sharing website: a Covid-19 pandemic perspective. International Journal for Educational Integrity, 17(1), 3.

    Schuwirth, L. W. T., & Van der Vleuten, C. P. M. (2011). Programmatic assessment: From assessment of learning to assessment for learning. Medical Teacher, 33(6), 478–485.

Bridgeman, A., Liu, D., & Weeks, R. The “2-lane” approach. University of Sydney.

    Wallace, M. J., & Newton, P. M. (2014). Turnaround time and market capacity in contract cheating. Educational Studies, 40(2), 233-236. 
     
