• Risk Assessment Framework consultation: Summary report


    Consultation on TEQSA’s Regulatory Risk Framework

    Consultation on TEQSA’s Regulatory Risk Framework will be launched on 19 March 2026. 
     

For more information on the consultation, please register to attend our TEQSA Talks webinar on Thursday 19 March from 2:00 to 2:45pm (AEDT).
     

    Alternatively, please revisit this page from Friday 20 March to access the consultation paper.

  • Risk ratings: Examples of relevant context and provider controls that TEQSA considers

  • Core Plus model for regulatory assessments policy



  • Academic integrity policy and procedure

    Banner with the text: Academic integrity toolkit: Case study

    Authors: Dr Amy Milka and Amanda Janssen, Adelaide University

    Focus area: Developing and benchmarking policies and procedures

    The merger of the University of Adelaide and University of South Australia presented a unique opportunity to shape academic integrity policy and procedure for a new institution. In approaching this task, we leveraged the mature policies of both institutions and benchmarked emerging approaches from other leaders in this space.

The resulting Adelaide University policy and procedure [1] adopted some tried and tested approaches of the foundation universities, including different levels of committees, decision-making and resourcing for issues of different severity, and more recent innovations such as an ‘early resolution’ option which offers a quicker, educative resolution in certain cases. Looking across the sector, we found that leaders in academic integrity were moving towards publishing matrices and clear rubrics for misconduct outcomes to ensure transparency and consistency [2].

    Student and stakeholder consultation and feedback identified key features of the policies and procedures which were important to learners. These included:

    • clear and informative definitions of different types of misconduct
    • transparency about possible outcomes
    • efficient processes and timelines to allow student input and minimise impacts on academic progress and student wellbeing.

    Students were involved in co-creating and providing feedback on communications about the policy and procedure, including the letters sent during misconduct investigations.

    A unique challenge for our merged institution is the communication of changes to the policy and procedure to transitioning staff and student cohorts, who have awareness of historic policies and approaches. We have carefully considered this challenge in developing our academic integrity messaging.

    Clear and timely communication on policy changes is crucial to ensuring a shared culture of integrity as well as minimising misconduct issues based on lack of awareness or understanding.

    Building awareness of policies, practices and expectations is a cornerstone of academic integrity work at any institution to ensure a shared understanding among students and staff with different educational and institutional backgrounds, and different approaches to integrity. Successful policy implementation requires co-creation, visibility and clarity in the institutional message.

    Key lessons or points for implementation

    • Policies and procedures need to balance a range of competing considerations, namely transparency, fairness, efficiency, student experience and an educative approach (see image below).
    • Policies and procedures need to balance strategic goals with operational effectiveness, and consider issues such as workloads, systems and processes in procedural design.
    • An agile academic integrity policy and procedure requires a regular schedule for review.

    Image of considerations of academic integrity policy and procedure

    Notes

    1. Academic integrity policy
    2. For example, Deakin University and the University of Southern Queensland.
  • Gen AI policy evolution at Southern Cross University

    Banner with the text: Academic integrity toolkit: Case study

    Authors: Professor Ruth Greenaway, Dr Zachery Quince, Southern Cross University

    Focus area: Governance

    Southern Cross University (SCU) took a first principles approach to policy development, supporting a strategic goal of ubiquitous gen AI use and positioning gen AI as an educational tool. An initial, binary model, where academics either permitted or prohibited gen AI use, overlooked disciplinary needs, causing confusion for staff and students, and limiting meaningful engagement.

Seeking greater inclusivity and flexibility, SCU transitioned to a five-tier gen AI model, informed by the AI Assessment Scale and supporting the assessment principles of the Southern Cross Model. It mapped a continuum from prohibiting use to open collaboration, specifying permissible uses. The model, though pedagogically robust, proved complex in practice, presenting challenges to staff adoption and consistent implementation. In 2025, SCU introduced the Gen AI Tool Use Descriptors, a pragmatic three-level model of assessment security. Assessments now explicitly indicate their gen AI stance at Level 1, 2 or 3.

    This approach is designed to normalise gen AI as part of academic practice while promoting accountability and meeting the learning and teaching objectives. It is embedded in formal assessment protocols, with specific gen AI guidelines available for each task, evidentiary requirements and a compulsory student declaration, fostering openness and ethical engagement.

Implementation of the Gen AI Tool Use Descriptors is underpinned by the Gen AI Descriptor Use Staff Guidelines, which provide assessment-specific scaffolding, best practice examples and clear, structured support tailored to different assessment types, enabling academics to confidently integrate gen AI tools into their teaching and evaluation processes.

    Grounded in robust research on ethical considerations and student learning behaviours, the guidelines help staff define task expectations, document gen AI use and navigate the complexities of balancing gen AI’s benefits and risks. These measures strengthen academic integrity by promoting ethical engagement with gen AI and fostering a culture of transparency, consistency and accountability.

    Key lessons or points for implementation

    • Establish a structured approach introducing models of gen AI use with clear guidelines for staff and students.
    • Adopt proactive educative strategies that provide comprehensive resources, and examples to support both staff and students, to ensure confidence and clarity in implementation.
    • Encourage a culture of ongoing adaptation in response to gen AI advancements and evolving industry practices.

    A list of 3 gen AI descriptors

    References

    • Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The Artificial Intelligence Assessment Scale (AIAS): A framework for ethical integration of generative AI in educational assessment. Journal of University Teaching and Learning Practice, 21(6).
    • Quince, Z., & Nikolic, S. (2025). Student identification of the social, economic and environmental implications of using Generative Artificial Intelligence (GenAI): Identifying student ethical awareness of ChatGPT from a scaffolded multi-stage assessment. European Journal of Engineering Education. Advance online publication.
  • Hypothetical contract cheating cluster investigation example: Identifying cheating at scale

    Banner with the text: Academic integrity toolkit: Case study

    Authors and institution: Anonymous

    Focus area: Identifying the case

A lecturer of an elective subject with 120 enrolled students noticed that around a dozen students were submitting identical or near-identical answer patterns to the weekly short-answer question tasks in Moodle. When the lecturer reviewed the times at which the students finished the questions, they also noticed that the tasks were often completed very close together in time (for example, within minutes of one another), or at unusual hours (for example, 2am). The lecturer became concerned that this may indicate that students were either colluding, or potentially that a third party was carrying out work for multiple students. The lecturer alerted their Faculty Academic Integrity Officer, who shared their concerns and referred the matter to the Central Integrity Team (CIT) for further review.
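
    The pattern the lecturer noticed can also be surfaced programmatically. The sketch below is a minimal illustration only, not an institutional tool: it assumes a hypothetical CSV export of quiz responses (quiz_responses.csv) with student_id, answer_text and finished_at columns, groups identical answers, and flags groups whose completion times fall within minutes of one another.

```python
# Minimal sketch: flag groups of identical quiz answers completed close together.
# Assumes a hypothetical CSV export (quiz_responses.csv) with columns:
#   student_id, answer_text, finished_at (ISO 8601 timestamps).
import csv
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)  # "completed within minutes of one another"

# Group responses by normalised answer text.
groups = defaultdict(list)
with open("quiz_responses.csv", newline="") as f:
    for row in csv.DictReader(f):
        finished = datetime.fromisoformat(row["finished_at"])
        groups[row["answer_text"].strip().lower()].append((row["student_id"], finished))

for answer, submissions in groups.items():
    if len(submissions) < 3:  # ignore coincidental pairs
        continue
    submissions.sort(key=lambda s: s[1])
    # Pairs of students whose identical answers were finished within WINDOW of each other.
    close = [(a[0], b[0]) for a, b in zip(submissions, submissions[1:]) if b[1] - a[1] <= WINDOW]
    if close:
        students = sorted({s for pair in close for s in pair})
        print(f"Review: {len(students)} students share an identical answer, "
              f"finished within {WINDOW} of each other: {students}")
```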

    The central integrity office investigation

    CIT obtained the Moodle logs for the elective subject and ran an analysis to look for shared internet protocol (IP) addresses among the students in the subject. This is something that is often observed where students have colluded or where third parties have carried out work for multiple students. CIT identified that 66 students shared non-campus IP addresses and that one of these IP addresses often carried out the weekly assessment task for multiple students one after another.
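
    A shared-IP analysis of the kind described above can be approximated with a short script. The sketch below is illustrative only: it assumes a hypothetical Moodle log export (moodle_log.csv) with ip_address, student_id, timestamp and event columns, plus placeholder campus IP ranges; real log formats and network ranges will differ by institution.

```python
# Minimal sketch: group LMS log rows by IP address and flag off-campus IPs
# shared by multiple students, including back-to-back quiz activity.
# Assumes a hypothetical log export (moodle_log.csv) with columns:
#   ip_address, student_id, timestamp (ISO 8601), event
import csv
from collections import defaultdict
from datetime import datetime, timedelta

CAMPUS_PREFIXES = ("10.", "192.168.")   # placeholder campus ranges; institution-specific
SEQUENTIAL_GAP = timedelta(minutes=30)  # "one after another"

by_ip = defaultdict(list)
with open("moodle_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["ip_address"].startswith(CAMPUS_PREFIXES):
            continue  # ignore on-campus traffic
        by_ip[row["ip_address"]].append(
            (datetime.fromisoformat(row["timestamp"]), row["student_id"], row["event"])
        )

for ip, events in by_ip.items():
    students = {student for _, student, _ in events}
    if len(students) < 2:
        continue  # an IP is only of interest if shared by multiple students
    events.sort()
    # Count cases where a different student's quiz activity follows within a short gap.
    sequential = sum(
        1 for a, b in zip(events, events[1:])
        if b[1] != a[1] and b[0] - a[0] <= SEQUENTIAL_GAP and "quiz" in b[2].lower()
    )
    print(f"{ip}: shared by {len(students)} students, {sequential} back-to-back quiz events")
```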

IP address analysis identified that a majority of these addresses were VPN connections; however, among the VPN activity the team also observed occasional non-VPN connections originating from Kenya. As Kenya is known as a contract cheating hotspot, this gave CIT cause to suspect that the VPN connections were being operated by one or more individuals in Kenya and that the students were therefore likely to have engaged in contract cheating.

    Contract cheating research shows that students who have engaged in contract cheating have often done so multiple times. Consequently, CIT expanded its investigation to include every subject the 66 students had participated in. CIT built a case for the Misconduct Committee to consider using the following evidence:

    • Shared IP addresses connected to assessments for multiple students on the same dates, including quizzes conducted from the same IP address in sequence, one after another.
    • Activity from contract cheating hotspots, such as Kenya or Pakistan, where it was contextually unusual for the student to be based (i.e. the student was located in Australia).
    • Impossible location changes in the Moodle logs based on IP address analysis (a minimal sketch of this check follows the list).
    • Highly inconsistent document metadata, obtained from Turnitin.
    • Engagement data that showed the students had often had very low engagement with subjects, and that engagement was highly focused on assessment.
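
    The ‘impossible location changes’ evidence point above can be checked with a simple travel-speed calculation. The sketch below is a hypothetical illustration: it assumes login events have already been geolocated (latitude/longitude per event) by a separate IP-geolocation step, and flags successive logins whose implied speed exceeds a plausible maximum.

```python
# Minimal "impossible travel" sketch: flag successive logins by the same student
# whose implied speed between geolocated IP addresses is physically implausible.
# The geolocated login data (lat/lon per event) is assumed to come from a
# separate, hypothetical IP-geolocation step.
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

MAX_PLAUSIBLE_KMH = 1000  # roughly airliner speed; anything faster is suspect

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(logins):
    """logins: list of (timestamp, lat, lon) tuples for one student, any order."""
    flags = []
    logins = sorted(logins)
    for (t1, la1, lo1), (t2, la2, lo2) in zip(logins, logins[1:]):
        hours = (t2 - t1).total_seconds() / 3600
        dist = haversine_km(la1, lo1, la2, lo2)
        if hours > 0 and dist / hours > MAX_PLAUSIBLE_KMH:
            flags.append((t1, t2, round(dist), round(dist / hours)))
    return flags

# Illustrative example: a login near Sydney followed two hours later by one near Nairobi.
logins = [
    (datetime(2025, 3, 3, 10, 0), -33.87, 151.21),
    (datetime(2025, 3, 3, 12, 0), -1.29, 36.82),
]
for t1, t2, km, kmh in impossible_travel(logins):
    print(f"{t1} -> {t2}: {km} km implies ~{kmh} km/h")
```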

    Misconduct Committee finding

The Misconduct Committee found that the 66 students had engaged in contract cheating in the subject of interest, and in additional subjects (an average of 10 subjects per student).

    Key lessons or points for implementation

    • Learner management systems hold valuable information that can be used to identify contract cheating at scale.
    • Trained investigators can conduct deeper analyses of an academic’s concerns about anomalous student behaviour.
    • Students who have been found to have engaged in contract cheating once are likely to have engaged in this practice multiple times, and it is worthwhile to expand the investigation beyond the initial case.
  • Assessment security: Understanding the risks

    Assessment security refers to the “measures taken to harden assessments against attempts to cheat” (Dawson, 2021, p. 2). It encompasses strategies and design choices intended to protect the integrity of assessment tasks by ensuring that the work submitted is:

    • the student's own
    • completed under the conditions intended
    • free from unauthorised assistance — including the use of generative artificial intelligence (gen AI) and contract cheating services.

    Assessment security is distinct from institutional activities designed to highlight the value of academic integrity and otherwise deter undesirable behaviours.

The importance of assessment security extends beyond individual courses or student academic performance. It underpins the social licence of educational institutions — the public’s trust that qualifications represent genuine achievement and that graduates are competent in their fields. Without credible assessments, the value of a credential is diminished, damaging both institutional reputation and societal confidence in higher education outcomes.

    It is increasingly the case that institutions are adopting a program-level view of assessments, rather than limiting their view to each individual subject. Approaches such as Programmatic Assessment, the “2-lane” approach, and others seek to achieve a balance between institutional logistics, assessment for learning and assessment of learning.

It is generally advisable that assessments with low to medium security (see the list below) do not carry a high weighting in terms of marks or progression to the next stage of a student’s program, but are used principally as formative rather than summative assessment. In this context, accurately evaluating the security level of various assessment types is important.

The list below categorises common assessment formats by their relative vulnerability to academic misconduct; each entry gives the assessment type, its security level and a brief explanation.

    Assessment types and security levels

    • In-person supervised written exams – High: Conducted under invigilation; low risk of third-party help or collusion. No LMS involvement. Seating arrangements can reduce potential peer signalling.
    • Oral exams / viva voce – High: Real-time interaction with assessors; impersonation is very difficult. Not vulnerable to LMS threats.
    • In-class written tasks – High: Live conditions reduce risk of collusion or external help. Minimal LMS exposure, although digital in-class tasks can be conducted by third parties.
    • Simulations / role plays (in-person) – High: Collusion is difficult due to the live, interactive nature. Tasks typically require spontaneous performance.
    • Viva voces – High: Live conditions reduce risk of collusion or external help. Unlike presentations, content and answers cannot all be pre-prepared.
    • Practical / lab-based assessments – Medium to high: Performance-based tasks limit outsourcing, but collusion can occur via shared work or peer support unless roles are clearly defined.
    • Presentations (in-person) – Medium to high: Harder to collude during delivery, but preparation materials (slides, scripts) can be developed by others.
    • Online proctored exams – Medium: LMS credentials can be shared with third parties; proctoring may not detect collusion (e.g. messaging with peers during exams).
    • Group projects – Low to medium: Intended collaboration can mask inappropriate collusion. One student may do all the work, or external help can be used. LMS tools may hide individual contributions, although students often have a clear understanding of what work their group members have undertaken.
    • Presentations (recorded or online) – Low to medium: Higher collusion risk; peers may co-develop or edit content. Scripts or full videos can be produced externally and uploaded, increasing the risk of deepfakes.
    • Peer review tasks – Low to medium: Students may coordinate reviews with friends, give favourable feedback, or copy others’ responses. Online platforms enable manipulation.
    • Take-home exams (time-limited) – Low: Students can collaborate informally or share answers. LMS access can be shared to allow real-time assistance.
    • Online quizzes (untimed/open-book) – Low: High collusion risk; students may complete quizzes together or share answers. Easy to outsource via shared LMS access.
    • Essays / research papers – Low: High risk of contract cheating and peer collaboration. Students may exchange drafts or copy structure/arguments. LMS submission portals may be accessed by others.
    • Portfolios / reflective journals – Low: Often completed individually, but prompts and reflections can be shared or copied between peers. LMS access may be used to upload third-party or peer-written content.
    • Discussion board posts / participation – Low: Very high risk of collusion; students often copy or paraphrase each other’s posts. LMS accounts may be shared with others to post on behalf of students.
    • Capstone projects / theses – Low: Students may collaborate inappropriately on research or writing. Risk of peer editing or contract cheating. Third-party LMS access may be used to submit externally produced work.

    Enhancing assessment security

Various actions can be taken to enhance assessment security by making academic misconduct harder to engage in or easier to detect. Research into academic misconduct shows that some strategies academics intuitively believe will enhance assessment security may not work, while others are more effective. At the same time, avoiding low-security assessments is recommended.

Dawson (2021) noted three easily avoidable “assessment design mistakes” that produce low-security assessments:

    1. summative unsupervised online tests and quizzes
    2. recycled assignments from previous teaching periods
    3. take-home assignments with a single correct answer.

    It has been suggested that constraining students’ time to work on assignments may make it harder for them to engage in contract cheating because it may be difficult to find someone to complete the assignment at short notice. Evidence shows, however, that making turnaround times shorter with the aim of limiting students’ ability to outsource assignments makes cheating more likely.

Surveys of students indicate that they are more likely to outsource assignments when under time pressure (Bretag et al., 2019), and analysis of contract cheating providers shows repeated claims of being able to produce assignments at short notice (Wallace & Newton, 2014). These short turnaround times include rapidly providing answers to questions from online quizzes (Lancaster & Cotarlan, 2021).

    More viable options to enhance assessment security include:

    • using text-matching software to aid in detecting plagiarism (a toy illustration follows this list)
    • training markers to recognise signs of academic misconduct
    • monitoring file-sharing sites for uploaded course information and assessments
    • using platforms that monitor students’ access to assessments and record version histories of their work
    • monitoring students’ engagement with learning management systems
    • training invigilators of exams, and academics who supervise in-class tests, to recognise and respond appropriately to academic misconduct.
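
    As a toy illustration of the text-matching idea in the first point above, the sketch below computes pairwise similarity between submissions within a single cohort using Python’s standard library. Dedicated tools such as Turnitin compare against large external corpora and do far more; this example only shows the basic principle and uses fabricated submission text.

```python
# Toy illustration of text matching: pairwise similarity between submissions.
# This sketch only compares submissions within one cohort, as a rough collusion
# signal; it is not a substitute for dedicated text-matching software.
from difflib import SequenceMatcher
from itertools import combinations

THRESHOLD = 0.85  # flag pairs whose normalised similarity exceeds this (tunable)

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] of matching content between two whitespace-normalised texts."""
    def norm(s: str) -> str:
        return " ".join(s.lower().split())
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

# Hypothetical mapping of student ID -> submitted text (fabricated examples).
submissions = {
    "s001": "The framework protects assessment integrity by limiting outsourcing.",
    "s002": "The framework protects assessment integrity by limiting outsourcing!",
    "s003": "Assessment design should balance validity, security and workload.",
}

for (sid_a, text_a), (sid_b, text_b) in combinations(submissions.items(), 2):
    score = similarity(text_a, text_b)
    if score >= THRESHOLD:
        print(f"Review pair {sid_a}/{sid_b}: similarity {score:.2f}")
```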

    References

    Bretag, T., Harper, R., Burton, M., Ellis, C., Newton, P., Rozenberg, P., ... & Van Haeringen, K. (2019). Contract cheating: A survey of Australian university students. Studies in Higher Education, 44(11), 1837-1856.

Dawson, P. (2021). Defending Assessment Security in a Digital World: Preventing E-Cheating and Supporting Academic Integrity in Higher Education (1st ed.). Routledge.

    Lancaster, T., & Cotarlan, C. (2021). Contract cheating by STEM students through a file sharing website: a Covid-19 pandemic perspective. International Journal for Educational Integrity, 17(1), 3.

    Schuwirth, L. W. T., & Van der Vleuten, C. P. M. (2011). Programmatic assessment: From assessment of learning to assessment for learning. Medical Teacher, 33(6), 478–485.

Bridgeman, A., Liu, D., & Weeks, R. The “2-lane” approach. University of Sydney.

    Wallace, M. J., & Newton, P. M. (2014). Turnaround time and market capacity in contract cheating. Educational Studies, 40(2), 233-236. 
     

  • Principles for criteria and standards in assessment for gen AI use

    Banner with the text: Academic integrity toolkit: Case study

    Author: Dom McGrath, The University of Queensland

    Focus area: Assessment design

Advancements in generative artificial intelligence (gen AI) capabilities and our responses are changing assessment practices. Where gen AI use is permitted in assessment, teaching staff are grappling with how to redesign these tasks to ensure they remain valid measurements of learning outcomes. At the University of Queensland (UQ), we have developed principles to support the design of criteria and standards for assessment practices where students may use gen AI (see below).

    Adapting rubrics in assessment where AI may be used: principles and implications for practice

The following principles and examples have been developed to support UQ staff designing open assessment, that is, assessment where AI use is permitted. The principles are general advice to support design, not a policy position that must be followed. This advice has been developed in response to questions from UQ staff and students, with input from the Transforming Assessment Team and the broader UQ Learning Design Community.

    Focus on the intended learning, not on catching cheating

    Principle: Criteria and standards should speak to the learning the task is designed to evidence.

The availability of AI increases the need for clarity about the learning intended to be assessed. Criteria and standards should be fair and transparently related to the Learning Outcomes of the course. Adding descriptors aimed at spotting misconduct confuses students and markers and rarely works. Instead, make explicit what learning must be demonstrated and how quality will be judged.

    Implications for practice

    • Start with verbs in the Learning Outcome – consider using them in the criterion stem (e.g., “analyse…”, “design…”).
    • Strip out “gotcha” language – no “demonstrates originality” or “work is human‑generated”.
    • Remind markers that suspicion ≠ evidence; direct them to assess with the standard descriptors.

    Plan a progression of AI expectations across courses (within programs and plans)

    Principle: Map how AI use, acknowledgement and rubric language mature across courses.

Students’ learning experience spans multiple courses within and across semesters. Planning AI expectations and rubrics across plans and programs enhances students’ experience and reduces confusion by providing integrated guidance and expectations. Program and plan convenors may be well placed to lead work developing coherent plans for AI expectations.

    Implications for practice

    • Talk with colleagues teaching courses before, alongside, and after yours – consider similarities and differences in what is asked of students.
    • Talk with your students about expectations in your course and their other courses.

    Assess how AI is acknowledged, not what AI produced

Principle: The content of AI acknowledgements should not impact marks; however, the inclusion and appropriate styling of the acknowledgement may be assessed.

    We cannot reliably verify every AI interaction, so we should incentivise honest, transparent reporting rather than punishments that could drive concealment. Providing students with clear guidance for acknowledgement that is not onerous will support responsible academic practices around transparency in AI use.

    Implications for practice

    • Where appropriate, include acknowledgement as part of a criterion (e.g. alongside formatting, referencing styles, or other requirements).
    • Make acknowledgement guidance clear and as simple as possible, including exemplars and guided practice.

    Assess (responsible) AI use when it is an outcome

    Principle: Where responsible AI engagement is explicitly listed in the learning outcomes, AI use can be required and included in rubric descriptors (e.g., defensibly selects model, uses effective prompts, evaluates and appropriately uses outputs).

    Principle: Where students have a choice to use AI in assessment, their choice to use AI should not impact how their work is assessed.

    Responsible AI use and ethics should be assessed when it is an explicit learning outcome. Across our programs we should be identifying multiple points where we teach and assess responsible disciplinary use of AI. Some level of secure assessment may be required to have confidence in how students are using AI.

While we recognise the quality of a student’s work may be impacted by their use of AI, if we cannot reliably identify what students have done with AI, we should not be using it as a basis of assessment. We cannot differentiate criteria and standards based on students’ declared AI use.

    Implications for practice

    • Where AI use is a Learning Outcome, clearly identify where and how it is assessed.
    • AI use can be recommended in any task but only required where AI use is a Learning Outcome.
    • Where AI use is not assessed, grade the output only; ignore whether AI was used.

Provide equitable access to AI and, where feasible, an opt-out

    Principle: If a learning outcome requires AI, all students must have practical access and may be required to use it; where AI is optional, an equivalent non‑AI pathway should exist.

    Where AI is included in a course Learning Outcome, students must have suitable access to AI tools and may be required to use AI in assessment. Where AI is not included in a course Learning Outcome, students may be requested to use AI but a suitable alternative should be available to enable students to abstain from AI use.

    Implications for practice

    • Ensure students have suitable access to AI tools and communicate which tools are recommended.
    • Where AI is not assessed but recommended, provide an alternative pathway: e.g. allow manual steps (e.g. hand-sketching a design) with the same criteria.
    • Ensure expectations are clearly communicated to students, for example by including statements such as “Students may choose not to use AI; all criteria can be met without it” in the course site and assessment documents.

Reduce the weighting or assessment of offloadable activities (grammar, etc.)

    Principle: Lower the weighting of activities that AI can automate; in many cases this includes grammar, spelling or basic graphic layout, unless they are core to the learning outcome.

A growing range of activities can be offloaded to AI. In many assessments we require students to engage with these activities even though they are not related to the purpose of the assessment. For example, in many written tasks grammar, spelling and written expression are required for the work to be effective but are not the learning outcomes being assessed. We should expect a higher standard in these areas for students to pass, but these criteria should not be the deciding factor in whether a student’s work is awarded a 6 or a 7.

    Implications for practice

    • Be clear about the key learning outcomes students must achieve, so that attention and support are focused on areas that cannot be compromised.
    • Free up time to provide targeted support and guidance.

    Staff need to have current knowledge of AI and access to AI tools

    Principle: Staff designing and marking assessment must understand AI affordances and limitations and regularly review rubrics to ensure criteria remain fit for purpose.

    Implications for practice

    • Review your assessment and rubrics each semester – consider adding a standing agenda item to course review meetings.
    • Moderation checklist – how is the assessment being impacted by AI?
  • An overview of culture and academic integrity: Myth busting the notion that international students are more likely to engage in academic misconduct

    Banner with the text: Academic integrity toolkit: Case study

    Author: Associate Professor Guy Curtis, University of Western Australia

    This short overview answers two common questions that people in higher education have about culture and academic integrity. These questions are:

    • Do international students cheat more than domestic students?
    • Do different cultures have different perspectives on academic integrity?

    Do international students cheat more than domestic students?

    No!

    Two of the biggest predictors of academic misconduct are students:

    1. lacking an understanding of academic integrity rules
    2. finding the academic expectations to be too difficult.

    Common misconceptions

    There is a common perception in Australian higher education that international students engage in plagiarism and cheating more than local Australian students. There are some reasons why this perception exists, and not all of them suggest that international students engage in academic misconduct any more than local Australian students.

    In the days before generative artificial intelligence (gen AI) and text-matching software, the most common form of academic misconduct was almost certainly plagiarism. When a native English speaker plagiarises, the clearly written text that they copied from a published source may not stand out in their assignment amongst their own native English writing. In contrast, when a non-native speaker includes a section of copied clear prose in the context of writing that has the hallmarks of a less fluent understanding of English, that plagiarised clear prose stands out.

    Consequently, plagiarism was more easily detectable in the writing of English as an Additional Language (EAL) international students, which gave the impression that international students plagiarised more than local students. There are still many academics working today who first started their careers marking assignments in the days before text-matching software and artificial intelligence, who carry the impression that international students engage in more misconduct because it used to be easier to spot when international students plagiarised. However, this perception may, at least partly, be an example of implicit bias.

    Local students vs international students

In contrast to the expectations that international students engage in more plagiarism than local Australian students, several studies have found no differences in plagiarism rates between local Australian and international students (e.g. Maxwell et al., 2006; 2008). These studies have commented on the fact that many international students come from cultures that value education, and that students from these cultures may eschew cheating because it undermines their learning (Chan, 1999). Other research also indicates that within a semester of studying in a different culture, international students have often learned and adapted to local expectations for educational assessment (Biggs & Watkins, 1996; Shafaei et al., 2016; Volet & Renshaw, 1995). Nonetheless, international students continue to be over-represented in academic misconduct cases (Zobel & Hamilton, 2002; Harris, 2025).

Importantly, two of the largest and most thorough studies of serious cheating in higher education in Australia, which examined contract cheating, both found higher rates of contract cheating among international students than among local Australian students (Bretag et al., 2019; Curtis et al., 2022). However, the most interesting finding of both studies was that engagement in cheating was predicted more by EAL status than by international student status. What this means is that cheating may be something that students do because studying in their non-native language is hard. Although more international students than Australian students have English as an additional language, educators need to keep in mind that some local students do not have English as their first language and that some international students do have English as a first language.

    Confirmation bias

    Another reason why people believe that international students cheat more than domestic students is that many of the well-publicised cheating scandals in Australian higher education have involved international students. For example, the MyMaster scandal involved a website specifically marketing contract cheating services to Chinese-speaking students in Australia (Visentin, 2015).

    Cultural differences

    Not understanding rules may apply more to international students who have come from a context where academic integrity expectations are not the same as those of the Australian institution in which they're studying (Ehrich et al., 2016; Fatemi & Saito, 2020; James et al., 2017). As noted above, they will likely learn local expectations in Australia, but this does not necessarily happen straight away. Not understanding course content may apply to international students who face the added challenge of studying in their non-native language or who were admitted to study in Australia despite not satisfying minimum entry requirements for their course.

    In sum, there is some evidence which indicates that international students may sometimes engage in academic misconduct at higher rates than local students. However, there are also some critical lessons and caveats:

    • All students need to be considered as individuals; the fact that someone is from a particular culture is not an indication that they have engaged, or will engage, in academic misconduct. Markers and decision-makers need to be aware of the potential for implicit bias.
    • Students who are new to Australia need clear guidance on academic integrity expectations in the Australian higher education context.
    • Students whose first language is not English may need additional help and support to be able to complete assignments with integrity.
    • Even if, on average, academic misconduct rates are higher among international students, keep in mind that local students can and do also engage in academic misconduct.

    Do different countries and cultures have different perspectives on academic integrity?

    Yes!

    Being aware of the variations in cultural attitudes around academic integrity, and the association between English language proficiency and academic misconduct, can help institutions in developing support and guidance for international students to:

    • best prepare them for academic study in Australia
    • understand the rules and expectations around academic integrity.

    In the previous section, we noted that international students do tend to adapt to local expectations, and providing specific instruction on academic integrity can help with this adaptation. Nonetheless, there are some potential pre-existing expectations that students from various countries and cultures may have that are not the same as those typically held by Australian students or Australian Higher Education Institutions.

    There are many and varied reasons that have been suggested for cross-national differences in academic integrity. Broad cultural dimensions have been suggested as playing a role:

    • Individualism versus collectivism may influence the extent to which students seek support, and the extent to which students believe it is acceptable to work on assessment tasks with others rather than completing them alone (Kasler et al., 2021; Tremayne & Curtis, 2021; Zhao et al., 2022).
    • The cultural dimension of power distance may also influence academic integrity culture. For example, Asian and Confucian cultures are thought to be more deferential to expertise or seniority, with expectations that it may be less acceptable to paraphrase the words of an authority (James et al., 2019).
    • Differences in educational practices: rote learning and rote reproduction are favoured educational methods in some cultures more than others, and may be accompanied by a lesser emphasis on plagiarism (Maxwell et al., 2016).

However, cultural dimensions are not the only indicator of student behaviour. Cultural dimensions interact with learning styles and student motivation. For example, research consistently shows lower rates of cheating in students whose goal is to learn as compared with students whose goal is to obtain performance outcomes like a qualification or high marks (Zhao et al., 2024). This connection between performance orientation and cheating was stronger in cultures that were more individualistic and with lower power distance (i.e. Western cultures).

Similarly, expectations of what constitutes good or normal behaviour in an academic context differ between countries. A cross-national study of student cheating found the highest rates of cheating occurred in the countries with the highest rates of perceived cheating among peers (Awdry, 2021; Awdry & Ives, 2023). Cultural dimensions also interact with perceived norms. For example, although students are generally influenced by the perception of the extent of cheating among their peers, this influence is stronger for students from more collectivistic and high-power distance cultures, for example Asian cultures (Zhao et al., 2022).

As a consequence of some of these cultural differences, some studies suggest that behaviours and attitudes toward academic integrity vary. Below, some of the broader findings of such research are summarised.

    It is important to keep in mind that educational practices and attitudes vary substantially among institutions within countries and change rapidly with changing educational and social practices. Because of this, readers must bear in mind that overgeneralising these culture-based findings to individuals may unfairly stereotype students.

    Nonetheless, as mentioned earlier, the association between English language proficiency and academic misconduct means that students coming from any non-English speaking background may need additional support to avoid plagiarism and cheating.

    What are some of the common expectations and attitudes to academic integrity in other jurisdictions?

    China and other East-Asian countries

Much has been written about Chinese students’ perceptions of academic integrity, attitudes to academic integrity, and cultural-based expectations concerning educational assessments. As a broad generalisation, Chinese students coming to Australia from high school may be less likely to have been exposed to ideas of plagiarism, citation and referencing than local Australian students. Chinese students who have studied at Chinese higher education institutions before coming to Australia may have experienced more permissive attitudes to plagiarism and collusion in their previous studies (Privitera, 2024; Yang et al., 2017). In either case, dedicated and early interventions to raise awareness of local rules and build academic writing skills are recommended.

    South-East Asia

    As with students from China, students from South-East Asia may on average receive less emphasis in their prior education on academic integrity than Australian students. There are developing networks in ASEAN to promote academic integrity (Roengtam, 2025) and a current initiative by the Malaysian government to reduce corruption, including in education.

    India and Pakistan

Research on higher education in India and Pakistan suggests higher rates of exam cheating in India (Monica et al., 2010) and higher levels of plagiarism and cheating in Pakistan (Ghias et al., 2014) as compared with Australia. Some research suggests that such problems are “normalised” within the subcontinent, but cautions that there are considerable inter-institutional differences (Ghias et al., 2014; Rehman & Waheed, 2014).

    USA, UK, and Canada

Generally, studies show similar rates of academic misconduct among students in the English-speaking Western countries. To be clear, the rates of cheating and plagiarism vary substantially among studies depending on how these are defined and measured. Educational practices and rules in these Western English-speaking countries differ from those in Australia. In the USA, a moralistic and character-based perception of academic misconduct is more widespread than in Australia, where educative policies and processes are preferred. The USA and Canada lack national-level quality regulators of higher education.

    Eastern Europe

    Although not a large source of international students to Australia, surveys regularly show higher levels of engagement in academic misconduct in Eastern Europe than in Australia (Awdry & Ives, 2023). Some of these differences have been attributed to external pressures faced by students and student norms that are more permissive of cheating.

    References

    • Awdry, R. (2021). Assignment outsourcing: Moving beyond contract cheating. Assessment & Evaluation in Higher Education, 46(2), 220-235.
    • Awdry, R., & Ives, B. (2023). International predictors of contract cheating in higher education. Journal of Academic Ethics, 21(2), 193-212.
    • Biggs, J., & Watkins, D. (1996). The Chinese learner in retrospect. The Chinese learner: Cultural, psychological, and contextual influences, 269-285.
    • Bretag, T., Harper, R., Burton, M., Ellis, C., Newton, P., Rozenberg, P., ... & Van Haeringen, K. (2019). Contract cheating: A survey of Australian university students. Studies in Higher Education, 44(11), 1837-1856.
    • Chan, S. (1999). The Chinese learner: A question of style. Education + Training, 41(6/7), 294-305.
    • Curtis, G. J., McNeill, M., Slade, C., Tremayne, K., Harper, R., Rundle, K., & Greenaway, R. (2022). Moving beyond self-reports to estimate the prevalence of commercial contract cheating: An Australian study. Studies in Higher Education, 47(9), 1844-1856.
    • Ehrich, J., Howard, S. J., Mu, C., & Bokosmaty, S. (2016). A comparison of Chinese and Australian university students' attitudes towards plagiarism. Studies in Higher Education, 41(2), 231-246.
    • Fatemi, G., & Saito, E. (2020). Unintentional plagiarism and academic integrity: The challenges and needs of postgraduate international students in Australia. Journal of Further and Higher Education, 44(10), 1305-1319.
    • Ghias, K., Lakho, G. R., Asim, H., Azam, I. S., & Saeed, S. A. (2014). Self-reported attitudes and behaviours of medical students in Pakistan regarding academic misconduct: a cross-sectional study. BMC medical ethics, 15(1), 43.
    • Harris, C. (2025, 8 July). The Sydney university students submitting fake medical certificates. Sydney Morning Herald.
    • Kasler, J., Zysberg, L., & Gal, R. (2021). Culture, collectivism-individualism and college student plagiarism. Ethics & Behavior, 31(7), 488-497.
    • Maxwell, A. J., Curtis, G. J., & Vardanega, L. (2006). Plagiarism among local and Asian students in Australia. Guidance & Counselling, 21(4), 210–215.
    • Maxwell, A. J., Curtis, G. J., & Vardanega, L. (2008). Does culture influence understanding and perceived seriousness of plagiarism? International Journal for Educational Integrity, 4(2), 25–40. doi:10.21913/IJEI.v4i2.412.
    • Monica, M., Ankola, A. V., Ashokkumar, B. R., & Hebbal, I. (2010). Attitude and tendency of cheating behaviours amongst undergraduate students in a Dental Institution of India. European Journal of Dental Education, 14(2), 79-83.
    • Privitera, A. J. (2024). Is there a foreign language effect on academic integrity? Higher Education, 88(2), 609-626.
    • Rehman, R. R., & Waheed, A. (2014). Ethical Perception of University Students about Academic Dishonesty in Pakistan: Identification of Student's Dishonest Acts. Qualitative Report, 19, 7.
    • Roengtam, S. (2025). Development of an Ecosystem to Enhance Academic Integrity in Thai Universities. Journal of Information Systems Engineering and Management, 10(25).
    • Shafaei, A., Nejati, M., Quazi, A., & Von der Heidt, T. (2016). ‘When in Rome, do as the Romans do’ Do international students’ acculturation attitudes impact their ethical academic conduct? Higher Education, 71(5), 651-666.
    • Tremayne, K., & Curtis, G. J. (2021). Attitudes and understanding are only part of the story: self-control, age and self-imposed pressure predict plagiarism over and above perceptions of seriousness and understanding. Assessment & Evaluation in Higher Education, 46(2), 208-219.
    • Visentin, L. (2015). MyMaster essay cheating scandal: More than 70 university students face suspension. Sydney Morning Herald. www.smh.com.au (accessed 23 August 2016).
    • Volet, S. E., & Renshaw, P. D. (1995). Cross-cultural differences in university students' goals and perceptions of study settings for achieving their own goals. Higher Education, 30(4), 407-433.
    • Volet, S. E., & Renshaw, P. D. (1996). Chinese students at an Australian university: Adaptability and continuity. In The Chinese learner: Cultural, psychological and contextual influences (pp. 205-220). Hong Kong University Press.
    • Yang, S. C., Chiang, F. K., & Huang, C. L. (2017). A comparative study of academic dishonesty among university students in Mainland China and Taiwan. Asia Pacific Education Review, 18(3), 385-399.
    • Zhao, L., Mao, H., Compton, B. J., Peng, J., Fu, G., Fang, F., ... & Lee, K. (2022). Academic dishonesty and its relations to peer cheating and culture: A meta-analysis of the perceived peer cheating effect. Educational Research Review, 36, 100455.
    • Zhao, L., Yang, X., Yu, X., Zheng, J., Mao, H., Fu, G., ... & Lee, K. (2024). Academic Cheating, Achievement Orientations, and Culture Values: A Meta-Analysis. Review of Educational Research, 00346543241288240.
