• Detecting plagiarism of AI-generated text in student assessments and securing take-home written assessments

    Guy Curtis, University of Western Australia

    Since the release of ChatGPT in November 2022, a major concern for many academics has been students copying and pasting text produced by generative artificial intelligence (gen AI) programs into their assignments without acknowledgment. Such unacknowledged copying and pasting meets the traditional definition of plagiarism and is a case of academic misconduct.

    Substantiating cases of academic misconduct requires proving on the balance of probabilities that misconduct has occurred. This means that the evidence shows that misconduct is more likely to have occurred than not. A detected case is one that meets this standard of proof and is not overturned on appeal (Ellis et al., 2020). Finding sufficient evidence to prove plagiarism from gen AI is more challenging than substantiating plagiarism from published sources.

    In general, there is a strong case that substantive and systematic assessment redesign is needed in the age of gen AI (Corbin et al., 2025). In particular, highly secure assessments should be used to assess or verify key learning outcomes at a program level. The University of Sydney’s two-lane approach offers excellent guidance here: lane 1 assessments are highly resourced and secure and occur at key points in a course (or unit) to gain assurance of student learning outcomes, while lane 2 assessments, which facilitate learning, are more open and less highly resourced and secured (Bridgeman, Liu, & Weeks, 2024; Liu & Bridgeman, 2023). Under this approach, guidance on using artificial intelligence tools responsibly in studies and assessments places take-home written assessments, which would typically be a concern for instances of plagiarism, in the “open” (lane 2) category, where gen AI use is permitted but must be acknowledged.

    In applying the two-lane approach to a written assessment, it is still necessary to detect instances of plagiarism in the form of unacknowledged inclusion of gen AI content. In addition, it has been argued that for educational reasons, in limited circumstances, educators may need to restrict the use of gen AI in some written assessments that are not completed under closely supervised in-class conditions (Curtis, 2025). Because of this, some capacity to detect plagiarism from gen AI is needed.

    Given that assessment security involves making it both more difficult to engage in misconduct and easier to detect misconduct, an important consideration is whether take-home written assessments can be made more secure.

    Securing take-home written assessments

    Pre-gen AI, a typical take-home written assessment, such as an essay, would be completed by a student in their own time on their own device and they would only submit a completed piece of work, such as a Word or PDF document.  Although text-matching software provides security for such work against traditional copy-paste plagiarism, such assignments have always been relatively low in assessment security and vulnerable to academic misconduct such as contract cheating. They are particularly insecure when educators recycle assignment topics year after year.

    Some measures have been suggested that can be put in place to make academic misconduct, such as contract cheating and copying and pasting from gen AI, easier to detect in take-home written assignments. As well as improving ease of detection, such barriers to academic misconduct may also dissuade students from attempting to breach assessment rules, such as not acknowledging the inclusion of content pasted from gen AI, because the ability to detect such actions is more obvious.

    Strategy 1

    To improve the security of take-home written assessments, students can be required to maintain and submit a verifiable version history of their work (e.g., Berukov, 2025). Using technologies such as Google Docs, Microsoft 365, or Overleaf, students may be able to record and provide evidence of their process of compiling a take-home written assessment (a minimal sketch of retrieving such a history follows).
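
    Where students draft in Google Docs, one way an educator (or the student) might export a verifiable timeline is by listing the document's revisions via the Drive API. The sketch below is illustrative only: it assumes suitable credentials, the google-api-python-client and google-auth libraries, and a placeholder FILE_ID; the same idea applies to the built-in version history views in Microsoft 365 and Overleaf.

```python
# Illustrative sketch (not an official workflow): list the revision history of a
# Google Doc so an educator or student can evidence how a draft developed over time.
# Assumes google-api-python-client + google-auth are installed, that the credentials
# have read access to the file, and that FILE_ID is replaced with the real document ID.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES  # placeholder credentials file
)
drive = build("drive", "v3", credentials=creds)

FILE_ID = "REPLACE_WITH_DOCUMENT_ID"  # placeholder document ID
response = drive.revisions().list(
    fileId=FILE_ID,
    fields="revisions(id,modifiedTime,lastModifyingUser(displayName))",
    pageSize=100,
).execute()

# A healthy writing process usually shows many revisions spread across days,
# rather than a single large revision appearing just before the deadline.
for rev in response.get("revisions", []):
    user = rev.get("lastModifyingUser", {}).get("displayName", "unknown")
    print(rev["modifiedTime"], user)
```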

    Strategy 2

    Instruct students to work within, or with, programs that are designed to track the writing process. Commercial programs such as Cadmus, Inktrail, Turnitin Clarity, and Grammarly Authorship use functions such as recording when content is pasted into the writing platform and regularly auto-saving work, such that the process of writing may be effectively “replayed”. These programs may have the added benefit of tracking important data that can be used to identify instances of contract cheating, such as login times, durations and IP addresses.

    Using techniques such as monitoring version history and requiring work in write-in platforms gives educators an opportunity to provide students with feedback on their process of writing an assessment, not just on the final product.

    Securing take-home written assessments is a first-line defence against unacknowledged plagiarism from gen AI. Nevertheless, further options must be considered for detecting plagiarism from gen AI, both when such security measures are used and when they are not.

    Gen AI detection tools

    Since the early 2000s academics have relied on technological support to detect plagiarism in the form of text-matching software. However, while text-matching software links text to verifiable published sources and other students’ assignments, text produced by gen AI tools is not stored or published and therefore cannot be linked to text in a student’s assignment.

    In response to this problem, there have been various “gen AI detector” programs developed that attempt to estimate whether text was produced by gen AI. Such “gen AI detectors” examine linguistic and structural characteristics, including perplexity, burstiness and sentence structure, comparing them against patterns observed in both human and AI-generated text. This analysis leads to a probability estimate that text was AI-generated. However, people can display gen AI-style characteristics in their writing and gen AI tools can include “humanise” features or add-ons.
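
    As an illustration of the kinds of signals involved (not of any particular commercial detector's method), the sketch below estimates perplexity with the open GPT-2 model via the Hugging Face transformers library and treats "burstiness" simply as variation in sentence length. The model choice, the crude sentence splitter and the sample text are assumptions for demonstration only.

```python
# Sketch of the kinds of signals gen AI detectors rely on, using open tools.
# Perplexity is estimated with GPT-2 (Hugging Face transformers); "burstiness"
# is approximated as variation in sentence length. Real detectors use their own
# models and thresholds, so treat this purely as an illustration.
import math
import statistics

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower perplexity means the text is more 'predictable' to the language model."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return math.exp(out.loss.item())

def burstiness(text: str) -> float:
    """Standard deviation of sentence length; human writing tends to vary more."""
    sentences = [s for s in text.replace("?", ".").replace("!", ".").split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

sample = "Generative AI has changed how universities think about assessment. Some tasks now need redesign."
print(f"perplexity ≈ {perplexity(sample):.1f}, burstiness ≈ {burstiness(sample):.1f}")
```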

    As a consequence, gen AI detector programs can at times falsely indicate that human-written text was AI-generated. Such false positives are highly problematic in the context of investigating plagiarism from gen AI and can create a high-stress situation for students who have been falsely accused of misconduct (Pitt et al., 2021). As a result, institutions should use such detection tools with caution.

    Current evidence for the accuracy of gen AI detector programs is mixed. These programs can reasonably distinguish 100% human-written from 100% gen AI-written text, but they are much less reliable when gen AI text has been edited by a human, when it is mixed with human-produced writing, or when documents are short (e.g. less than 300 words) (Weber-Wulff et al., 2023). Additionally, most detection programs can currently be bypassed by gen AI add-ons that “humanise” text.

    Issues to consider when using gen AI detection tools to identify instances of academic misconduct:

    • The “AI score” alone is insufficient to bring an allegation of misconduct. Additional evidence is required to make an allegation of gen AI misuse.
      • Low gen AI scores may also indicate gen AI-written text where an additional step has been taken to humanise the text. Again, any score, either high or low, is insufficient evidence by itself to allege misconduct.
    • “Humanisation” add-ons can bypass gen AI detectors.
    • A score on a gen AI detector program is not the probability that the assignment was AI-generated. For example, if a detector has a 1% false-positive rate, it will flag about 1 assignment in 100 as having a high score (e.g., 80-90%). If no students in a class of 100 used gen AI, one assignment will still receive a score of, say, 80-90%, but the real probability that this assignment was AI-generated is zero (a worked sketch of this base-rate reasoning follows this list).
    • Using a gen AI detector that is not licensed by your institution, whether free or via a personal subscription to a third-party platform, may breach your institution’s IT policy, privacy rules, intellectual property rules or copyright.
    • To mitigate the risk of confirmation bias, educators and investigators should look for evidence that disconfirms gen AI use, in addition to evidence that may confirm it, for assignments that have been flagged for gen AI content.
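
    The base-rate point in the list above can be made concrete with a small worked example. The sketch below applies Bayes' rule under assumed figures (a 1% false-positive rate, a 90% detection rate, and varying proportions of students who actually used gen AI); the numbers are illustrative, not properties of any real detector.

```python
# Sketch: why a high "AI score" is not the probability an assignment was AI-written.
# A worked base-rate calculation under assumed numbers (1% false-positive rate,
# 90% detection rate, varying proportions of students who actually used gen AI).
def prob_ai_given_flag(prevalence, true_positive_rate=0.90, false_positive_rate=0.01):
    """P(assignment was AI-generated | detector flags it), by Bayes' rule."""
    flagged_ai = prevalence * true_positive_rate
    flagged_human = (1 - prevalence) * false_positive_rate
    return flagged_ai / (flagged_ai + flagged_human)

for prevalence in (0.0, 0.05, 0.20, 0.50):
    print(f"{prevalence:.0%} of class used gen AI -> "
          f"P(AI | flagged) = {prob_ai_given_flag(prevalence):.2f}")
# With 0% prevalence the probability is 0 despite the flag, matching the
# class-of-100 example above: the flag alone proves nothing.
```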

    Clear signals of gen AI use in written assessments

    • Obvious indicators of gen AI use that have unintentionally been pasted directly into an assessment, such as:
      • “Certainly, I can give you an answer….”
      • “As a large language model…”
      • prompts used by students being included with the text pasted into their assignment.
    • Inability of the student to answer questions about the assignment content, e.g. post-assignment viva.
    • Admission by student of unacknowledged use of gen AI.

    Possible signals of gen AI use in written assessments

    • Disparity in the student’s skill level — a mismatch between the skill demonstrated in class and in assessments, or between different assessment types (e.g. supervised vs unsupervised, written vs oral). This may also raise suspicions of other forms of misconduct, such as contract cheating.
    • Made-up (mashed-up) references — a reference that does not match another source in a text-matching program is a potential clue that the reference is fabricated. A mashed-up reference may be highlighted by text-matching software with different sources matching the title and journal, for example. Fabricated references are typically academic misconduct in and of themselves and may constitute a breach of academic integrity without any need to prove that they occurred because of the use of gen AI.
    • Perfectly written, mistake-free submissions — a perfectly written, quickly produced submission may be a signal of misconduct (see Word document properties, information on copy/paste chips in write-in programs such as Cadmus or Inktrail, the time taken to write, and/or LMS metrics). It is important to remember that perfectly written text is not in itself a concern and may simply indicate good writing, permissible automated grammar checking or gen AI editorial assistance.
    • Awkward, inappropriate or unusually sophisticated word choices, or verbosity — waffle may be a stylistic clue that indicates the use of a paraphrasing tool or gen AI.
    • Uniformly written responses — generic writing that lacks critical analysis, misses the point or fails to include key sources can be a signal of gen AI use.
    • Responses based on the title of the work — questions or summaries of sources appear to address key words in the title and not the content of the work.
    • Assignments that are produced quickly — assignments completed in extremely short time (see Word document properties for editing time or information on copy/paste chips and/or the time taken to write, or LMS metrics such as login times or time spent to answer a question).
    • Text volume lacking edits — a large volume of text produced quickly with no or minimal edits (see Word document properties or information on copy/paste and/or the time taken to write, or LMS metrics).
    • Lack of editing or evidence of writing process — text pasted into a document rather than typed (see Word document metadata [RSID codes] or information on copy/paste chips; a minimal metadata-inspection sketch follows this list).
    • Assignment structure — answers or assignment content are mainly written as bullet points or numbered lists.
    • Whistleblowers — whistleblowers can be helpful in raising concerns about academic misconduct, but their allegations must be independently verified with other evidence, as it is possible for allegations to be malicious.
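
    Several of the signals above refer to Word document properties and RSID codes. The sketch below shows one way such metadata might be inspected, using the python-docx library for core properties and the standard library to count RSID attributes in the document XML; the filename is a placeholder, and because metadata can be edited or stripped it should only ever be treated as one clue among several.

```python
# Sketch: inspecting Word document metadata mentioned in the signals above.
# Uses python-docx for core properties and the standard library to count RSID
# (revision save ID) attributes in the underlying XML. "assignment.docx" is a
# placeholder filename; metadata is a clue, never proof, of misconduct.
import re
import zipfile

from docx import Document  # pip install python-docx

path = "assignment.docx"

core = Document(path).core_properties
print("author:        ", core.author)
print("created:       ", core.created)
print("last modified: ", core.modified)
print("revision:      ", core.revision)

# Each editing session normally adds new RSID values; a long document with very
# few distinct RSIDs may have been pasted in largely as one block.
with zipfile.ZipFile(path) as zf:
    xml = zf.read("word/document.xml").decode("utf-8", errors="ignore")
rsids = set(re.findall(r'w:rsidR="([0-9A-F]{8})"', xml))
print("distinct RSIDs:", len(rsids))
```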

    References

    Berukov, N. (2025). Version control: how I combat the rise of generative AI in the classroom. Nature.

    Bridgeman, A., Liu, D., & Weeks, R. (2024). Program level assessment design and the two-lane approach.

    Corbin, T., Dawson, P., & Liu, D. (2025). Talk is cheap: why structural assessment changes are needed for a time of GenAI. Assessment & Evaluation in Higher Education.

    Curtis, G. J. (2025). The two-lane road to hell is paved with good intentions: why an all-or-none approach to generative AI, integrity, and assessment is insupportable. Higher Education Research & Development.

    Ellis, C., van Haeringen, K., & House, D. (2020). Technology, policy and research: Establishing evidentiary standards for managing contract cheating cases. In T. Bretag (Ed.), A research agenda for academic integrity (pp. 138-151). Edward Elgar.

    Liu, D., & Bridgeman, A. (2023). What to do about assessments if we can’t out-design or out-run AI?

    Pitt, P., Dullaghan, K., & Sutherland-Smith, W. (2021). ‘Mess, stress and trauma’: Students’ experiences of formal contract cheating processes. Assessment & Evaluation in Higher Education, 46(4), 659-672. 

    Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O., ... & Waddington, L. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity, 19(26).

  • Don’t be sorry, just declare it: Promoting academic integrity and securing the essay in the age of gen AI


    Author: Benito Cao, The University of Adelaide

    Focus area: Making academic integrity visible

    The sudden irruption of generative artificial intelligence (gen AI) in higher education has sparked widespread concerns regarding the viability of essays as a form of assessment. Indeed, the argument often goes that large language models (LLMs) such as ChatGPT signal the death of the essay. But do they? Will they? Or, to paraphrase Mark Twain, is it that reports of the essay’s death are greatly exaggerated?

    This case study outlines a pedagogical initiative designed to promote academic integrity and secure the essay in the age of gen AI. The initiative reimagines the ‘two-lane’ approach that proposes the combination of secured assessments (lane 1) with unsecured open assessments (lane 2).

    The reimagined approach resembles the two-lane model in its partial reliance on in-person assessments to validate student learning. Yet it challenges the unrestricted nature of the lane 2 approach by illustrating the value and validity of a ‘middle lane’ approach which focuses on the ecosystem to foster and facilitate authentic learning (Curtis, 2025). The initiative relies on a pedagogical ecosystem designed to develop trust between students and teachers, notwithstanding that academic integrity requires that we ‘trust but verify’ in cases of potential academic misconduct.

    The pedagogical ecosystem includes the following elements:

    1. an exploration of the potential for gen AI tools to fabricate information, with illustrations of real-world ‘hallucinations’
    2. the provision of clear guidelines, with references to university policies and industry standards to showcase the rationale and relevance of the guidelines
    3. the requirement to include a gen AI appendix when students use gen AI in the production of their essays
    4. a reminder that students are expected to fully understand every aspect of their essay, and that if there is a concern about the use of gen AI tools exceeding the assessment guidelines, they may be asked to discuss the assignment before the mark is finalised
    5. explicit advice to keep drafts, notes, annotated readings and any other materials students have used, as evidence of how their essay has been produced in case its authorship is questioned
    6. the reliance on secured (in-person) assessments, worth between 30% and 50% of the overall mark, to help validate student learning and to compare with the essay’s preliminary mark if there are academic concerns regarding the production of the essay.

    This pedagogical ecosystem is designed to enable the (relatively secure) implementation of a ‘middle lane’ approach that permits a limited use of gen AI. Specifically, students are allowed to use gen AI tools to assist with idea generation and language expression. For example, I tell students:

    • if they struggle to come up with ideas for their essays, they can use gen AI but any ideas suggested by the tool must be validated
    • while they can use gen AI to assist with language expression, they should not allow the tool to take control of the narrative; the narrative should reflect their own voice.

    In essence, students are allowed a limited use of gen AI but are expected to remain the authors of their essays and to be transparent regarding their use of gen AI tools. Students are reminded of this basic expectation of transparency in the assignment submission portal. This is the last thing they read before uploading their essay:

    Don't forget to include a gen AI appendix if you have used gen AI tools (for example, ChatGPT, Copilot, Gemini, Claude, Grammarly, etc.)  in the production of the essay. The absence of this appendix is equivalent to stating: I did not use GenAI. If this statement turns out to be false, this would constitute a breach of academic integrity. Remember the slogan: Don't be sorry, just declare it.

    The approach, which I have titled Don't be sorry, just declare it, reflects the integration of four normative principles: caution, trust, relevance and transparency (Cao, 2025). The slogan is used by Australian Customs and Biosecurity to warn people arriving in Australia to declare all goods they might not be permitted to bring into the country, rather than apologise afterwards for the lack of a declaration.

    The evidence suggests that this approach can go a long way in addressing some of the most urgent pedagogical challenges posed by gen AI, particularly concerns with academic integrity. The evidence also suggests that this approach can improve security of the essay and thus contribute to its preservation as a valuable form of assessment in the age of gen AI.

    References

    Cao, B. (2025). Don’t Be Sorry, Just Declare It: Pedagogical Principles for the Ethical Use of ChatGPT, Master Bullshit Artist of Our Time. In: 11th International Conference on Higher Education Advances (HEAd’25). Valencia, 17-20 June 2025.

    Curtis, G. J. (2025). The two-lane road to hell is paved with good intentions: why an all-or-none approach to generative AI, integrity, and assessment is insupportable. Higher Education Research and Development. (Published online: 18 March 2025).

  • Principles for criteria and standards in assessment for gen AI use


    Author: Dom McGrath, The University of Queensland

    Focus area: Assessment design

    Advancements in generative artificial intelligence (gen AI) capabilities, and our responses to them, are changing assessment practices. Where gen AI use is permitted in assessment, teaching staff are grappling with how to redesign these tasks to ensure they remain valid measurements of learning outcomes. At the University of Queensland (UQ), we have developed principles to guide the design of criteria and standards for assessment where students may use gen AI (see below).

    Adapting rubrics in assessment where AI may be used: principles and implications for practice

    The following principles and examples have been developed to support UQ staff designing their open assessment, that is, assessment where AI use is permitted. The principles are general advice to support design, not a policy position that must be followed. This advice has been developed in response to questions from UQ staff and students, with input from the Transforming Assessment Team and the broader UQ Learning Design Community.

    Focus on the intended learning, not on catching cheating

    Principle: Criteria and standards should speak to the learning the task is designed to evidence.

    The availability of AI increases the need for clarity about the learning intended to be assessed. Criteria and standards should be fair and transparently related to the Learning Outcomes of the course. Adding descriptors aimed at spotting misconduct confuses students and markers and rarely works. Instead, make explicit what learning must be demonstrated and how quality will be judged.

    Implications for practice

    • Start with verbs in the Learning Outcome – consider using them in the criterion stem (e.g., “analyse…”, “design…”).
    • Strip out “gotcha” language – no “demonstrates originality” or “work is human‑generated”.
    • Remind markers that suspicion ≠ evidence; direct them to assess with the standard descriptors.

    Plan a progression of AI expectations across courses (within programs and plans)

    Principle: Map how AI use, acknowledgement and rubric language mature across courses.

    Students’ learning experience spans multiple courses within and across semesters. Planning AI expectations and rubrics across plans and programs enhances students’ experience and reduces confusion by providing integrated guidance and expectations. Program and plan convenors may be well placed to lead work on developing coherent plans for AI expectations.

    Implications for practice

    • Talk with colleagues teaching courses before, alongside, and after yours – consider similarities and differences in what is asked of students.
    • Talk with your students about expectations in your course and their other courses.

    Assess how AI is acknowledged, not what AI produced

    Principle: The content of AI acknowledgements should not impact marks; however, the inclusion and appropriate styling of the acknowledgement may be assessed.

    We cannot reliably verify every AI interaction, so we should incentivise honest, transparent reporting rather than punishments that could drive concealment. Providing students with clear guidance for acknowledgement that is not onerous will support responsible academic practices around transparency in AI use.

    Implications for practice

    • Where appropriate, include acknowledgement as part of a criterion (e.g. alongside formatting, referencing styles, or other requirements).
    • Make acknowledgement guidance clear and as simple as possible, including exemplars and guided practice.

    Assess (responsible) AI use when it is an outcome

    Principle: Where responsible AI engagement is explicitly listed in the learning outcomes, AI use can be required and included in rubric descriptors (e.g., defensibly selects model, uses effective prompts, evaluates and appropriately uses outputs).

    Principle: Where students have a choice to use AI in assessment, their choice to use AI should not impact how their work is assessed.

    Responsible AI use and ethics should be assessed when it is an explicit learning outcome. Across our programs we should be identifying multiple points where we teach and assess responsible disciplinary use of AI. Some level of secure assessment may be required to have confidence in how students are using AI.

    While we recognise that the quality of a student’s work may be impacted by their use of AI, if we cannot reliably identify what students have done with AI we should not be using it as a basis of assessment. We cannot differentiate criteria and standards based on students’ declared AI use.

    Implications for practice

    • Where AI use is a Learning Outcome, clearly identify where and how it is assessed.
    • AI use can be recommended in any task but only required where AI use is a Learning Outcome.
    • Where AI use is not assessed, grade the output only; ignore whether AI was used.

    Provide equitable access—and, where feasible, an opt‑out from AI

    Principle: If a learning outcome requires AI, all students must have practical access and may be required to use it; where AI is optional, an equivalent non‑AI pathway should exist.

    Where AI is included in a course Learning Outcome, students must have suitable access to AI tools and may be required to use AI in assessment. Where AI is not included in a course Learning Outcome, students may be requested to use AI but a suitable alternative should be available to enable students to abstain from AI use.

    Implications for practice

    • Ensure students have suitable access to AI tools and communicate which tools are recommended.
    • Where AI is not assessed but recommended, provide an alternative pathway: e.g. allow manual steps (e.g., hand‑sketching a design) assessed against the same criteria.
    • Ensure expectations are clearly communicated to students, for example by including a statement such as “Students may choose not to use AI; all criteria can be met without it” in the course site and assessment documents.

    Reduce weighting or assessment of offloadable activities (grammar, etc.)

    Principle: Lower the weighting of activities that AI can automate; in many cases this includes grammar, spelling or basic graphic layout, unless they are core to the learning outcome.

    A growing range of activities can be offloaded to AI. In many assessments we require students to engage with these activities even though they are not related to the purpose of the assessment. For example, in many written tasks, grammar, spelling and written expression are required for the work to be effective but are not the learning outcomes being assessed. We should expect a higher standard in these areas for students to pass, but these criteria should not be the deciding factor in whether a student’s work is recognised with a mark of 6 or 7.

    Implications for practice

    • Be clear about the key learning outcomes students must achieve, to focus attention and support on areas that cannot be compromised.
    • Free up time to provide targeted support and guidance.

    Staff need to have current knowledge of AI and access to AI tools

    Principle: Staff designing and marking assessment must understand AI affordances and limitations and regularly review rubrics to ensure criteria remain fit for purpose.

    Implications for practice

    • Review your assessment and rubrics each semester – consider adding a standing agenda item to course review meetings.
    • Moderation checklist – how is the assessment being impacted by AI?
  • Partnering for change: Ethical gen AI use and ensuring integrity in assessment transformation


    Authors: Tanya Henry and Associate Professor Christine Slade, Institute for Teaching and Learning Innovation (ITaLI), The University of Queensland

    Focus area: Assessment design

    The Lead through learning strategy 2025–2027 (the Strategy) is a whole-of-university strategy at The University of Queensland (UQ) aimed at addressing the impact of generative artificial intelligence (gen AI) in education; it sits within UQ’s AI in Education Action Plan (2025–2027). The initiative is a partnership between 5 faculties and the Deputy Vice-Chancellor (Academic) (DVC(A)) portfolio, which aims to ensure graduates can use gen AI ethically and responsibly and that assessment practices assure learning outcomes.

    Learning designers are embedded in the faculties for 3 years to spearhead cultural change in assessment and teaching practices in the light of gen AI. As this is one piece of a broader program of work within the Strategy, the Learning Design (LD) team is led by a Strategic Lead based in the central teaching unit, who provides leadership and mentorship to the team of faculty-based learning designers and acts as the conduit between LDs and the DVC(A).

    The Strategy has 2 main goals:

    • Preparing students for responsible gen AI use by equipping students with ethical and practical skills they can use in their studies, careers and communities, and preparing them to lead and shape the future of gen AI integration in their fields.
    • Maintaining the integrity of the learning process by ensuring that academic standards are upheld through secure and credible assessment practices.

    Partnering with faculties to achieve these goals enables contextualised approaches within disciplines, with each faculty developing an operational plan that reflects its individual context. The central teaching and learning unit, the Institute for Teaching and Learning Innovation (ITaLI), complements this approach by upskilling staff in gen AI use and assessment transformation, providing institutional guidance and facilitating collaboration.

    Within faculties, learning designers, in collaboration with teaching staff, are developing and delivering workshops to support staff in using gen AI, including how to enhance the validity and security of assessments. Across faculties, staff are engaging in communities of practice, including the establishment of an AI Steering Committee to explore the development of a whole-of-faculty gen AI curriculum.

    Key lessons or points for implementation

    • Define success and leverage existing data:
      Clearly articulate what success looks like in advancing the project’s core goals and how progress will be measured. Engage with colleagues who can identify existing data sources and explore future possibilities to support evidence-based decision-making.
    • Integrate with other initiatives to maximise impact and minimise change fatigue:
      Assessment transformation should align with other strategic initiatives, such as inclusive design and indigenising the curriculum, to create synergies rather than silos. This approach fosters collaboration, reduces duplication of effort and helps avoid staff fatigue by streamlining change.
    • Support educators through incremental, reflective change:
      Meet educators where they are and guide them through manageable, meaningful steps in assessment reform. Celebrate small wins, reflect on what works and what doesn’t, using a continuous improvement approach.
    • Contextual partnerships across the university:
      Connecting both top-down and ground-up goals is important to support staff buy-in where success requires teaching and assessment practices to change.
  • An overview of culture and academic integrity: Myth busting the notion that international students are more likely to engage in academic misconduct


    Author: Associate Professor Guy Curtis, University of Western Australia

    This short overview answers two common questions that people in higher education have about culture and academic integrity. These questions are:

    • Do international students cheat more than domestic students?
    • Do different cultures have different perspectives on academic integrity?

    Do international students cheat more than domestic students?

    No!

    Two of the biggest predictors of academic misconduct are students:

    1. lacking the understanding of academic integrity rules
    2. finding the academic expectations to be too difficult.

    Common misconceptions

    There is a common perception in Australian higher education that international students engage in plagiarism and cheating more than local Australian students. There are some reasons why this perception exists, and not all of them suggest that international students engage in academic misconduct any more than local Australian students.

    In the days before generative artificial intelligence (gen AI) and text-matching software, the most common form of academic misconduct was almost certainly plagiarism. When a native English speaker plagiarises, the clearly written text that they copied from a published source may not stand out in their assignment amongst their own native English writing. In contrast, when a non-native speaker includes a section of copied clear prose in the context of writing that has the hallmarks of a less fluent understanding of English, that plagiarised clear prose stands out.

    Consequently, plagiarism was more easily detectable in the writing of English as an Additional Language (EAL) international students, which gave the impression that international students plagiarised more than local students. There are still many academics working today who first started their careers marking assignments in the days before text-matching software and artificial intelligence, who carry the impression that international students engage in more misconduct because it used to be easier to spot when international students plagiarised. However, this perception may, at least partly, be an example of implicit bias.

    Local students vs international students

    In contrast to the expectation that international students engage in more plagiarism than local Australian students, several studies have found no differences in plagiarism rates between local Australian and international students (e.g. Maxwell et al., 2006; 2008). These studies have commented on the fact that many international students come from cultures that value education, and students from these cultures may eschew cheating because it undermines their learning (Chan, 1999). Other research also indicates that within a semester of studying in a different culture, international students have often learned and adapted to local expectations for educational assessment (Biggs & Watkins, 1996; Shafaei et al., 2016; Volet & Renshaw, 1995). Nonetheless, international students continue to be over-represented in academic misconduct cases (Zobel & Hamilton, 2002; Harris, 2025).

    Importantly, 2 of the largest and most thorough studies of serious cheating in higher education in Australia, which examined contract cheating, both found higher rates of contract cheating among international students than among local Australian students (Bretag et al., 2019; Curtis et al., 2022). However, the most interesting finding of both studies was that engagement in cheating was predicted more by EAL status than by international student status. What this means is that cheating may be something that students do because studying in their non-native language is hard. Although more international students than Australian students have English as an additional language, educators need to keep in mind that some local students do not have English as their first language and that some international students do have English as a first language.

    Confirmation bias

    Another reason why people believe that international students cheat more than domestic students is that many of the well-publicised cheating scandals in Australian higher education have involved international students. For example, the MyMaster scandal involved a website specifically marketing contract cheating services to Chinese-speaking students in Australia (Visentin, 2015).

    Cultural differences

    Not understanding rules may apply more to international students who have come from a context where academic integrity expectations are not the same as those of the Australian institution in which they're studying (Ehrich et al., 2016; Fatemi & Saito, 2020; James et al., 2017). As noted above, they will likely learn local expectations in Australia, but this does not necessarily happen straight away. Not understanding course content may apply to international students who face the added challenge of studying in their non-native language or who were admitted to study in Australia despite not satisfying minimum entry requirements for their course.

    In sum, there is some evidence which indicates that international students may sometimes engage in academic misconduct at higher rates than local students. However, there are also some critical lessons and caveats:

    • All students need to be considered as individuals; the fact that someone is from a particular culture is not an indication that they have engaged, or will engage, in academic misconduct. Markers and decision-makers need to be aware of the potential for implicit bias.
    • Students who are new to Australia need clear guidance on academic integrity expectations in the Australian higher education context.
    • Students whose first language is not English may need additional help and support to be able to complete assignments with integrity.
    • Even if, on average, academic misconduct rates are higher among international students, keep in mind that local students can and do also engage in academic misconduct.

    Do different countries and cultures have different perspectives on academic integrity?

    Yes!

    Being aware of the variations in cultural attitudes around academic integrity, and the association between English language proficiency and academic misconduct, can help institutions in developing support and guidance for international students to:

    • best prepare them for academic study in Australia
    • understand the rules and expectations around academic integrity.

    In the previous section, we noted that international students do tend to adapt to local expectations, and providing specific instruction on academic integrity can help with this adaptation. Nonetheless, there are some potential pre-existing expectations that students from various countries and cultures may have that are not the same as those typically held by Australian students or Australian Higher Education Institutions.

    There are many and varied reasons that have been suggested for cross-national differences in academic integrity. Broad cultural dimensions have been suggested as playing a role:

    • Individualism versus collectivism may play a role in the extent to which students seek support, and in the extent to which students believe it is acceptable to work on assessment tasks with others as compared with completing them alone (Kasler et al., 2021; Tremayne & Curtis, 2021; Zhao et al., 2022).
    • The cultural dimension of power distance may influence academic integrity culture. For example, Asian and Confucian cultures are thought to be more deferential to expertise or seniority, with expectations that it may be less acceptable to paraphrase the words of an authority (James et al., 2019).
    • Differences in educational practices: rote learning and rote reproduction are favoured educational methods in some cultures more than others. Rote learning and reproduction of information may be accompanied by a lesser emphasis on plagiarism (Maxwell et al., 2016).

    However, cultural dimensions are not the only indicator of student behaviour. Cultural dimensions interact with learning styles and student motivation. For example, research consistently shows lower rates of cheating in students whose goal is to learn as compared with students whose goal is to obtain performance outcomes like a qualification or high marks (Zhao et al., 2024). This connection between performance orientation and cheating was stronger in cultures that were more individualistic and with lower power distance (i.e. Western cultures).

    Similarly, expectations of what constitutes good or normal behaviour in an academic context differ between countries. A cross-national study of student cheating found the highest rates of cheating occurred in the countries with the highest rates of perceived cheating among peers (Awdry, 2021; Awdry & Ives, 2023). Cultural dimensions also interact with perceived norms. For example, although students are generally influenced by the perception of the extent of cheating among their peers, this influence is stronger for students from more collectivistic and high power-distance cultures, for example Asian cultures (Zhao et al., 2022).

    As a consequence of some of these cultural differences, some studies suggest that behaviours and attitudes toward academic integrity vary. Some of the broader findings of such research are summarised below.

    It is important to keep in mind that educational practices and attitudes vary substantially among institutions within countries and change rapidly with changing educational and social practices. Because of this, readers must bear in mind that overgeneralising these culture-based findings to individuals may unfairly stereotype students.

    Nonetheless, as mentioned earlier, the association between English language proficiency and academic misconduct means that students coming from any non-English speaking background may need additional support to avoid plagiarism and cheating.

    What are some of the common expectations and attitudes to academic integrity in other jurisdictions?

    China and other East-Asian countries

    Much has been written about Chinese students’ perceptions of academic integrity, attitudes to academic integrity, and culture-based expectations concerning educational assessments. As a broad generalisation, Chinese students coming to Australia from high school may be less likely to have been exposed to ideas of plagiarism, citation and referencing than local Australian students. Chinese students who have studied at Chinese higher education institutions before coming to Australia may have experienced more permissive attitudes to plagiarism and collusion in their previous studies (Privitera, 2024; Yang et al., 2017). In either case, dedicated and early interventions to raise awareness of local rules and build academic writing skills are recommended.

    South-East Asia

    As with students from China, students from South-East Asia may on average receive less emphasis in their prior education on academic integrity than Australian students. There are developing networks in ASEAN to promote academic integrity (Roengtam, 2025) and a current initiative by the Malaysian government to reduce corruption, including in education.

    India and Pakistan

    Research on higher education in India and Pakistan suggests higher rates of exam cheating in India (Monica et al., 2010) and higher levels of plagiarism and cheating in Pakistan (Ghias et al., 2014) as compared with Australia. Some research suggests that such problems are “normalised” within the sub-continent, but cautions that there are considerable inter-institutional differences (Ghias et al., 2014; Rehman & Waheed, 2014).

    USA, UK, and Canada

    Generally, studies show similar rates of academic misconduct among students in the English-speaking Western countries, although the rates of cheating and plagiarism vary substantially among studies depending on how these are defined and measured. Educational practices and rules in these Western English-speaking countries differ from those in Australia. In the USA, a moralistic and character-based perception of academic misconduct is more widespread than in Australia, where educative policies and processes are preferred. The USA and Canada lack national-level quality regulators of higher education.

    Eastern Europe

    Although not a large source of international students to Australia, surveys regularly show higher levels of engagement in academic misconduct in Eastern Europe than in Australia (Awdry & Ives, 2023). Some of these differences have been attributed to external pressures faced by students and student norms that are more permissive of cheating.

    References

    • Awdry, R. (2021). Assignment outsourcing: Moving beyond contract cheating. Assessment & Evaluation in Higher Education, 46(2), 220-235.
    • Awdry, R., & Ives, B. (2023). International predictors of contract cheating in higher education. Journal of Academic Ethics, 21(2), 193-212.
    • Biggs, J., & Watkins, D. (1996). The Chinese learner in retrospect. The Chinese learner: Cultural, psychological, and contextual influences, 269-285.
    • Bretag, T., Harper, R., Burton, M., Ellis, C., Newton, P., Rozenberg, P., ... & Van Haeringen, K. (2019). Contract cheating: A survey of Australian university students. Studies in Higher Education, 44(11), 1837-1856.
    • Chan, S. (1999). The Chinese learner – a question of style. Education + Training, 41(6/7), 294-305.
    • Curtis, G. J., McNeill, M., Slade, C., Tremayne, K., Harper, R., Rundle, K., & Greenaway, R. (2022). Moving beyond self-reports to estimate the prevalence of commercial contract cheating: An Australian study. Studies in Higher Education, 47(9), 1844-1856.
    • Ehrich, J., Howard, S. J., Mu, C., & Bokosmaty, S. (2016). A comparison of Chinese and Australian university students' attitudes towards plagiarism. Studies in Higher Education, 41(2), 231-246.
    • Fatemi, G., & Saito, E. (2020). Unintentional plagiarism and academic integrity: The challenges and needs of postgraduate international students in Australia. Journal of Further and Higher Education, 44(10), 1305-1319.
    • Ghias, K., Lakho, G. R., Asim, H., Azam, I. S., & Saeed, S. A. (2014). Self-reported attitudes and behaviours of medical students in Pakistan regarding academic misconduct: a cross-sectional study. BMC medical ethics, 15(1), 43.
    • Harris, C. (2025, 8 July). The Sydney university students submitting fake medical certificates. Sydney Morning Herald.
    • Kasler, J., Zysberg, L., & Gal, R. (2021). Culture, collectivism-individualism and college student plagiarism. Ethics & Behavior, 31(7), 488-497.
    • Maxwell, A. J., Curtis, G. J., & Vardanega, L. (2006). Plagiarism among local and Asian students in Australia. Guidance & Counselling, 21(4), 210–215.
    • Maxwell, A. J., Curtis, G. J., & Vardanega, L. (2008). Does culture influence understanding and perceived seriousness of plagiarism? International Journal for Educational Integrity, 4(2), 25–40. doi:10.21913/IJEI.v4i2.412.
    • Monica, M., Ankola, A. V., Ashokkumar, B. R., & Hebbal, I. (2010). Attitude and tendency of cheating behaviours amongst undergraduate students in a Dental Institution of India. European Journal of Dental Education, 14(2), 79-83.
    • Privitera, A. J. (2024). Is there a foreign language effect on academic integrity? Higher Education, 88(2), 609-626.
    • Rehman, R. R., & Waheed, A. (2014). Ethical Perception of University Students about Academic Dishonesty in Pakistan: Identification of Student's Dishonest Acts. Qualitative Report, 19, 7.
    • Roengtam, S. (2025). Development of an Ecosystem to Enhance Academic Integrity in Thai Universities. Journal of Information Systems Engineering and Management, 10(25).
    • Shafaei, A., Nejati, M., Quazi, A., & Von der Heidt, T. (2016). ‘When in Rome, do as the Romans do’ Do international students’ acculturation attitudes impact their ethical academic conduct? Higher Education, 71(5), 651-666.
    • Tremayne, K., & Curtis, G. J. (2021). Attitudes and understanding are only part of the story: self-control, age and self-imposed pressure predict plagiarism over and above perceptions of seriousness and understanding. Assessment & Evaluation in Higher Education, 46(2), 208-219.
    • Visentin, L. (2015). MyMaster essay cheating scandal: More than 70 university students face suspension. Sydney Morning Herald. www.smh.com.au (accessed 23 August 2016).
    • Volet, S. E., & Renshaw, P. D. (1995). Cross-cultural differences in university students' goals and perceptions of study settings for achieving their own goals. Higher Education, 30(4), 407-433.
    • Volet, S. E., & Renshaw, P. D. (1996). Chinese students at an Australian university: Adaptability and continuity. In The Chinese learner: Cultural, psychological and contextual influences (pp. 205-220). Hong Kong University Press.
    • Yang, S. C., Chiang, F. K., & Huang, C. L. (2017). A comparative study of academic dishonesty among university students in Mainland China and Taiwan. Asia Pacific Education Review, 18(3), 385-399.
    • Zhao, L., Mao, H., Compton, B. J., Peng, J., Fu, G., Fang, F., ... & Lee, K. (2022). Academic dishonesty and its relations to peer cheating and culture: A meta-analysis of the perceived peer cheating effect. Educational Research Review, 36, 100455.
    • Zhao, L., Yang, X., Yu, X., Zheng, J., Mao, H., Fu, G., ... & Lee, K. (2024). Academic Cheating, Achievement Orientations, and Culture Values: A Meta-Analysis. Review of Educational Research, 00346543241288240.

  • Addressing copyright infringement on student academic file sharing sites


    Authors: Associate Professor Christine Slade and Dr James Lewandowski-Cox, The University of Queensland

    Focus area: Academic integrity breach decision-making

    Unethical academic file sharing continues to pose serious risks to both academic integrity and copyright compliance, particularly as platforms incentivise students to upload institutional content (Seeland et al., 2022; Rogerson & Basanta, 2016). Large-scale implementation of copyright takedown procedures remains a significant challenge for institutions (Seeland et al., 2022).

    The Academic Student File Sharing (ASFS) pilot (the Pilot) at UQ evaluated the effectiveness of copyright takedown notices in addressing academic integrity issues arising from student file sharing on platforms such as CourseHero and StuDocu. These platforms hosted over 75,000 files tagged as UQ content, often uploaded by students in exchange for incentives like premium access or cash rewards. The Pilot aimed to remove 5% of UQ files from each site – 3,277 from CourseHero and 497 from StuDocu. It exceeded these targets, successfully removing 3,486 files from CourseHero (5.32%) and 703 files from StuDocu (7.07%) using 169.75 hours of staff time. All removed files remained offline as of July 2023, demonstrating the viability of copyright enforcement as a sustainable strategy.

    Files with clear UQ branding or staff email addresses were removed more easily, while non-branded materials required additional evidence of provenance. CourseHero’s ‘PinPoint’ tool enabled efficient bulk takedowns, whereas StuDocu’s form required individual submissions. To meet targets, UQ deployed ‘working bees’ with trained library staff to manage the process.

    Upload filter testing revealed that CourseHero blocked files containing the phrase ‘This content is protected and may not be shared, uploaded or distributed’, while StuDocu’s filters focused more on content quality than copyright. Both platforms incentivised uploads, with StuDocu offering direct cash payments, raising concerns about breaches of UQ’s Student Code of Conduct.

    This pilot offers a practical and replicable model for institutions facing similar challenges. Its tools, strategies and insights are transferable, helping universities protect intellectual property (IP) and uphold academic integrity. While effective, ongoing success depends on institutional commitment, consistent branding practices and sector-wide collaboration.

    Key lessons or points for implementation

    • Develop a communications package to inform academic staff about file sharing risks and allocate staff resources to maintain takedown efforts.
    • Use consistent branding and embed CourseHero’s copyright phrase in teaching materials.
    • Consider misconduct action for students who upload institutional intellectual property for gain.

    References

    • Rogerson, A.M., & Basanta, G. (2016). Peer-to-peer file sharing and academic integrity in the internet age. In T. Bretag (Ed.), Handbook of Academic Integrity (pp. 273-285). Springer Reference. https://doi.org/10.1007/978-981-287-098-8_55
    • Seeland, J., Eaton, S.E., & Stoesz, B.M. (2022). Leveraging college copyright ownership against file-sharing and contract cheating websites. In S.E. Eaton et al. (Eds.), Contract Cheating in Higher Education (pp. 61-76). Palgrave Macmillan. https://doi.org/10.1007/978-3-031-12680-2_5
  • Belonging, academic integrity and my international students


    Author: Dr Katherine Sugars, Murdoch University

    Focus area: Partnering with students

    Academic misconduct is a wicked problem; we need a cornucopia of strategies. I find building a sense of belonging can reduce ad hoc misconduct risks and has a positive effect on student commitment to academic integrity — individually and as a social group norm. Designing a learning space around belonging has multiple other benefits. It:

    • helps students engage and dive into novel learning experiences
    • strengthens relationships and trust
    • enhances individual wellbeing.

    The more my students teach me about their worldview, the more I can make sense of their experiences and actions. I can then be more purposeful in my unit design and teaching and improve outcomes. I am not the only one who finds that increased agency in a more predictable learning space inspires commitment to learning and group wellbeing. In the context of academic integrity, this can mean doing what is right and fair for everyone and cooperating with group rules until they become second nature: just what we do here.

    I teach an academic skills unit for international master’s students, typically those who have newly arrived in Australia. Most of my students are Bhutanese with an eclectic mix of other nationalities; differences matter, as do common woes. The challenge is to build a shared identity and a classroom experience that is flexible and engaging but includes non-negotiables — in this case academic integrity. Students must adapt, but I can too (within policy constraints). I can build bridges, move boundaries, re-order priorities, be responsive to shifting needs of cohorts and individuals, and I can see myself as one of “us” while we journey together.

    Coming to Australia to earn a degree is a big transition. It is not just logistics, culture shock or even homesickness — it is social standing and security in their understanding of the world and their place in it. Former professionals work as Uber drivers, cleaners or in aged care and struggle to pay rent. Expectations and reality are far apart. The rules seem to make life harder and more confusing. It takes time to regain stability, self-confidence and belongingness, and this comes from finding agency, choosing our own actions and learning what to expect in response.

    How can this understanding influence unit design and class activities?

    First, I design for and teach whole human beings, who are courageous and capable, and who are dealing with a lot of stuff right now. Be kind and empathetic, actively affirm this shared experience. I use their life experience as a topic for class activities and assessments. I resist saying everyone is in the same boat, even though it is true, because this diminishes their experience and is disempowering, a conversation stopper (as are judging and fixing). I try not to think “they made their choice” even when I am mad at them, because this happened in the past and can’t be fixed. It is useful for judging but not problem solving (same with “should” and “should have”).

    I provide all the stability and predictability I possibly can. Help them build a connected support structure, in class, on the learning management system (LMS), within the university and with classmates, friends and family. Stabilise their learning environment. Not in a rigid way, but rather a routinely engaging, welcoming, easy-to-participate and fun way. Remove unnecessary barriers and make acting the way you hope the easiest and most rewarded choice. Attendance is the first step in relationship building and, oh yes... learning. I see myself as a key support person. The more students sense I care for them, the more they commit to genuine engagement in my unit. I try to make it easy to get it right.

    Global grand challenges and real-life experiences make great topics for practicing academic skills. There is no right (or wrong) answer and they offer something for everyone to engage with. When giving feedback on weekly journal writing I will often engage with the substance of a student's entry as well as the scholarship. I recall meeting with a student who I had given 5 fails in a row to. I was repeating my mantra, “I want to hear your voice”, and he suddenly looked at me in wonder and said “you really care about what I have to say.” It was a moment. He submitted some brilliant persuasive writing after that. Plus, we were both much happier. The fails weren’t because he couldn’t, or was lazy or entitled, they were because he did not value the activity enough and part of that was because he thought I didn’t value him.

    I have had to rethink my priorities. I value genuine voice over polished grammar and spelling (it can be hard to convince students this is true and that it will be reflected in their marks). I value integrity far more than due dates. I recall brainstorming in class “what to do when there is 2 hours until your assignment is due and you haven’t started”. We filled a whole whiteboard (including use ChatGPT, copy from a friend and outsource) and I still needed to be the one to suggest “ask for an extension.”

    Bhutanese students have a deep respect for their elders and their teachers, and this can create strange situations where they employ a work-around when I expect them to ask directly. I try to predict when this might happen, with my crystal ball, and be explicit (repeatedly) that questioning is allowed, nay, rewarded and rewarding! I even model being wrong. It may be painful but it's good for my soul. In fact, it is hard to critically engage at all unless you allow yourself to question others and welcome questions without defensiveness.

    I encourage even the smallest risk-taking when it comes to learning. I read in a student portfolio that her tutor simply saying her comment in class was excellent changed her whole outlook: her confidence, her commitment to the work and her joy in challenging herself. The risk-taking doesn't have to be content-related. My Bhutanese students have a wicked sense of humour and are, a surprise at first, fiercely competitive. We play games. Not all students find 'Fruit salad' fun, so I'm told in feedback, but laughter and movement change everything. A student might be too anxious to contribute an idea to a group discussion but happy to hip-check me over the last free chair; a step in the right direction, I say. For some students who remain quiet in class, we may dialogue privately via their weekly portfolios: their creations, my responses. This builds trust and offers a safe learning space in which to conquer anxiety about being judged by others. A quiet exchange can be an equally valuable way to conduct an interpersonal learning and teaching relationship.

    I make every effort to reduce the risk and the tragic consequences of mistakes, while establishing a clear cause-and-effect expectation for misconduct. Early, low-stakes, formative assessment (with the best four of five fortnightly submissions counting) allows for early zeros. I find a zero is remarkably effective as individual feedback and as normative boundary-setting. High-stakes assessments are high-security. This minimises my own uncertainty and errors, and lowers the risk of a student failing the unit because of one mistake.

    I try to be explicit with students about what I expect when it comes to academic integrity: the principle is easy, the details are harder. It is a fuzzy line, because I want them to develop their own good judgement about using generative artificial intelligence (gen AI) and about collaborating with group members. Both can be valuable; both can enhance learning. Errors in judgement can be teaching moments. But deliberately misrepresenting authorship draws a penalty, as does carelessly misrepresenting authorship with the attitude that it does not really matter. I aim to pick it up every time. I'm delusional, I know (I'd love to know my actual hit-and-miss ratio).

    Creating a fun, safe learning space that belongs to everyone helps each student take social and cognitive risks. When I ask students what tempts them to cheat, they say time pressure, not understanding requirements, thinking they will get a better mark and life being overwhelming. I can lessen some of these drivers with technical fixes, but student feedback consistently and overwhelmingly says that what matters most is knowing their contribution to the group is valued and being supported as human beings. Students report that this sense of inclusion and mutual regard motivates them to keep their self-confidence and maintain integrity when hurdles appear.

    Key lessons or points for implementation

    • Reduce disorientation by providing stability and a shared group identity; have fun together; establish academic integrity as a group norm.
    • Reassess priorities, be clear and strong on what matters.
    • Encourage learning risks, make it safe, make it personal. Make it the best and most enjoyable option for the student to do the work themselves.
    • Back-up plan: Stick to your word and penalise academic misconduct.
  • Individual support appointments for academic integrity breach education


    Authors: Fiona Perry, Dr Anu Sharma, Associate Professor Michelle Cavaleri (Dean, Academic), Margaret Redestowicz, Education Centre Australia Higher Education.  

    Focus area: Academic integrity education

    Education Centre Australia’s Higher Education division (ECA HE) comprises the Asia Pacific International College (APIC), offering business, project management and IT programs; the College of Health Science (CHS), specialising in health management programs; and the Higher Education Leadership Institute (HELI), delivering eLearning and research programs. While operating as separate institutions, these entities work closely together, leveraging shared staff, resources and knowledge.

    Our student body is predominantly international, representing diverse cultural and educational backgrounds from the Indian subcontinent (India, Nepal, Pakistan, Bangladesh), Africa (Kenya, Ghana, Zimbabwe), South America (Brazil, Colombia, Argentina) and Asia-Pacific (Philippines, Indonesia, China, Fiji), alongside a smaller cohort of domestic students.

    These diverse backgrounds can create academic challenges that require targeted support. International students often come from educational systems with different approaches to collaboration, citation and source usage, making it challenging for them to navigate the specific academic integrity expectations and research standards of our institution.

    To better support students reported for academic integrity breaches, ECA HE implemented a proactive booking system that automatically offers individual support appointments with learning advisors. These appointments were offered in addition to the usual penalties applied in line with our Academic Integrity Policy and Procedure, such as redoing the academic integrity module and resubmitting work. The appointments aim to:

    • gain a better understanding of student circumstances that contribute to academic integrity breaches
    • provide personalised support addressing specific resubmission requirements and underlying causes to prevent future breaches
    • share relevant implications with appropriate stakeholders including unit coordinators, course coordinators and learning designers.

    Appointments typically take between 20 and 45 minutes, and students can pick a time and mode that suits them, either online or face-to-face. To set up the appointments, we created a new booking system that links to the academic integrity breach database to automate invitations to students who are reported. ECA HE developed training for staff on best practice for conducting appointments. This training covered record keeping, the aims and goals of the sessions, tools and techniques that can be used, and common challenges with suggested strategies to overcome them. Furthermore, an example video of a consultation was created and annotated to demonstrate effective strategies advisors can use.
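    For readers interested in how such an automated invitation workflow might hang together, a minimal sketch is below. The case study does not describe ECA HE's actual technology, so the database schema (a hypothetical breach_reports table with an invitation_sent flag), the booking URL and the email addresses are all invented stand-ins, and Python with SQLite is used purely for illustration.

    # Illustrative sketch only: the table, columns, addresses and URL below are
    # hypothetical; the real booking system and breach database are not described
    # in the case study.
    import sqlite3
    from email.message import EmailMessage

    BOOKING_URL = "https://example.edu/bookings/learning-advisor"  # placeholder link

    def build_invitation(student_email, breach_id):
        """Compose an invitation email pointing the student to the booking page."""
        msg = EmailMessage()
        msg["To"] = student_email
        msg["From"] = "learning.advisors@example.edu"  # placeholder sender
        msg["Subject"] = "Individual support appointment with a learning advisor"
        msg.set_content(
            "You are invited to book an individual support appointment "
            f"(online or face-to-face) with a learning advisor: {BOOKING_URL} "
            f"(reference {breach_id})."
        )
        return msg

    def send_pending_invitations(db_path):
        """Invite every reported student who has not yet been contacted."""
        conn = sqlite3.connect(db_path)
        try:
            pending = conn.execute(
                "SELECT breach_id, student_email FROM breach_reports "
                "WHERE invitation_sent = 0"
            ).fetchall()
            for breach_id, student_email in pending:
                # In practice the message would be handed to the institution's
                # mail system; printing keeps the sketch self-contained.
                print(build_invitation(student_email, breach_id))
                conn.execute(
                    "UPDATE breach_reports SET invitation_sent = 1 WHERE breach_id = ?",
                    (breach_id,),
                )
            conn.commit()
        finally:
            conn.close()

    if __name__ == "__main__":
        send_pending_invitations("breaches.db")  # hypothetical database file

    The core of the pattern is simply querying the breach-report database for newly reported students and flagging those who have already been invited; the real system would plug the generated messages into the institutional mail and scheduling tools.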

    Key lessons or points for implementation

    1. Students reported for academic integrity breaches frequently require significant training in how to access library resources and websites, and navigate digital platforms such as academic support websites, the learning management system and Turnitin.  Academically, students need support building essential skills such as paraphrasing and referencing. Furthermore, students want guidance on the ethical use of generative artificial intelligence (gen AI).
    2. In relation to wellbeing, students seek support for their mental health and assistance with managing other health concerns. Students often present to appointments with challenges related to their socioeconomic and personal circumstances, including housing instability, employment pressures and financial difficulties, all of which can interfere with their ability to engage fully in their academic work. Advisors sometimes need to show students how to apply for special consideration and extensions. Staff need to be up to date on what supports are available and how to refer students.
    3. Staff delivering individual consultations need training on how to best support students. This includes tailoring sessions to individuals and scaffolding and modelling the use of library and academic support resources. These staff also need to be equipped to deal with the multitude of wellbeing issues that can come up when working closely with individual students.
  • The benefit of using structured questions and evidence to investigate alleged plagiarism by first-year students


    Author: Guy Curtis, University of Western Australia

    Focus area: Academic integrity breach decision-making

    For about three years, I was the academic staff member within my School responsible for dealing with cases of alleged student academic misconduct. In my School there were large first-year psychology units that often had enrolments of around 1,000 students. The first edition of the TEQSA Academic Integrity Toolkit included a guide to substantiating contract cheating, which contained “An investigator template for conducting a student academic integrity interview”. I decided to use this template for interviews with students who were reported by their first-year psychology unit coordinators for plagiarism.

    Reports from text-matching software (for example, Turnitin) often showed that first-year students had either failed to indicate quoted text with quotation marks and page numbers or failed to provide name-and-date citations for paraphrased text. Two questions in the Toolkit’s interview guide were particularly helpful in working out whether students misunderstood referencing rules, misapplied referencing rules, or understood referencing rules and chose not to follow them:

    1. What referencing system/style did you use?
    2. In the referencing system you used, are there any differences in how quotes and paraphrased material should be represented?

    Students in first-year psychology would typically answer the first question with “APA style”. Their answer to the second question usually aligned with the errors in citations and referencing that were apparent in their assignment. For example, if a student had verbatim quotes without quotation marks, but with a name and date in brackets afterwards, they would usually say that the way to cite a quote is to put a name and year in brackets after it. This answer would reveal that they were trying to do the right thing, but didn’t understand that quotes had to be cited differently from paraphrased material. Similarly, students without citations on paraphrased materials might say that citations were only needed for quotes. First-year students’ answers to these structured questions often helped to foster educational conversations about citation rules and allowed me to direct them to relevant information about citation and writing.

    In units of 1,000 students, some will miss, or misunderstand, information about academic integrity without any intention of plagiarising or gaining an unfair advantage. When such students end up being referred for investigation into alleged misconduct, it can undoubtedly be stressful for them. Confronting students with accusations, rather than asking questions that seek to understand how their assessments were written, may exacerbate such stress. Because good questions identify educational gaps, processes that students may perceive as punitive can be turned into teachable moments. At the same time, recording within the university system that these conversations had taken place meant that, going forward, ignorance of the rules could not be used as an excuse by the same students for future instances of plagiarism.

    Key lessons or points for implementation

    • A structured approach for investigating allegations of academic misconduct is crucial.
    • Using questions that allow investigators to seek information, rather than making allegations or assumptions, leads to better outcomes.
    • Plagiarism among first-year students is often a case of misunderstanding or misapplying rules. An allegation of plagiarism for first-year students can often be a chance to correct misunderstanding via one-on-one educational conversations.