Principles for criteria and standards in assessment for gen AI use

Academic integrity toolkit: Case study

Author: Dom McGrath, The University of Queensland

Focus area: Assessment design

Advancements in generative artificial intelligence (gen AI) capabilities, and our responses to them, are changing assessment practices. Where gen AI use is permitted in assessment, teaching staff are grappling with how to redesign these tasks to ensure they remain valid measurements of learning outcomes. At the University of Queensland (UQ), we have developed principles for designing criteria and standards for assessment where students may use gen AI (see below).

Adapting rubrics in assessment where AI may be used: principles and implications for practice

The following principles and examples have been developed to support UQ staff designing open assessment, that is, assessment where AI use is permitted. The principles are general advice to support design, not a policy position that must be followed. This advice has been developed in response to questions from UQ staff and students, with input from the Transforming Assessment Team and the broader UQ Learning Design Community.

Focus on the intended learning, not on catching cheating

Principle: Criteria and standards should speak to the learning the task is designed to evidence.

The availability of AI increases the need for clarity about the learning intended to be assessed. Criteria and standards should be fair and transparently related to the Learning Outcomes of the course. Adding descriptors aimed at spotting misconduct confuses students and markers and rarely works. Instead, make explicit what learning must be demonstrated and how quality will be judged.

Implications for practice

  • Start with verbs in the Learning Outcome – consider using them in the criterion stem (e.g., “analyse…”, “design…”).
  • Strip out “gotcha” language – no “demonstrates originality” or “work is human‑generated”.
  • Remind markers that suspicion ≠ evidence; direct them to assess with the standard descriptors.

Plan a progression of AI expectations across courses (within programs and plans)

Principle: Map how AI use, acknowledgement and rubric language mature across courses.

Students’ learning experience spans multiple courses, within and across semesters. Planning AI expectations and rubrics across plans and programs enhances students’ experience and reduces confusion by providing integrated guidance and expectations. Program and plan convenors may be well placed to lead the development of coherent plans for AI expectations.

Implications for practice

  • Talk with colleagues teaching courses before, alongside, and after yours – consider similarities and differences in what is asked of students.
  • Talk with your students about expectations in your course and their other courses.

Assess how AI is acknowledged, not what AI produced

Principle: The content of AI acknowledgements should not impact marks; however, the inclusion and appropriate styling of the acknowledgement may be assessed.

We cannot reliably verify every AI interaction, so we should incentivise honest, transparent reporting rather than impose penalties that could drive concealment. Providing students with clear acknowledgement guidance that is not onerous will support responsible academic practices around transparency in AI use.

Implications for practice

  • Where appropriate, include acknowledgement as part of a criterion (e.g. alongside formatting, referencing styles, or other requirements).
  • Make acknowledgement guidance clear and as simple as possible, including exemplars and guided practice.

Assess (responsible) AI use when it is an outcome

Principle: Where responsible AI engagement is explicitly listed in the learning outcomes, AI use can be required and included in rubric descriptors (e.g., defensibly selects model, uses effective prompts, evaluates and appropriately uses outputs).

Principle: Where students have a choice to use AI in assessment, their choice to use AI should not impact how their work is assessed.

Responsible AI use and ethics should be assessed when it is an explicit learning outcome. Across our programs we should be identifying multiple points where we teach and assess responsible disciplinary use of AI. Some level of secure assessment may be required to have confidence in how students are using AI.

While we recognise that the quality of a student’s work may be affected by their use of AI, if we cannot reliably identify what students have done with AI, we should not use it as a basis for assessment. We cannot differentiate criteria and standards based on students’ declared AI use.

Implications for practice

  • Where AI use is a Learning Outcome, clearly identify where and how it is assessed.
  • AI use can be recommended in any task but only required where AI use is a Learning Outcome.
  • Where AI use is not assessed, grade the output only; ignore whether AI was used.

Provide equitable access to AI, and where feasible an opt-out

Principle: If a learning outcome requires AI, all students must have practical access and may be required to use it; where AI is optional, an equivalent non‑AI pathway should exist.

Where AI is included in a course Learning Outcome, students must have suitable access to AI tools and may be required to use AI in assessment. Where AI is not included in a course Learning Outcome, students may be encouraged to use AI, but a suitable alternative should be available so that students can abstain from AI use.

Implications for practice

  • Ensure students have suitable access to AI tools and communicate which tools are recommended.
  • Where AI is not assessed but recommended, provide an alternative pathway, e.g. allow manual steps (such as hand-sketching a design) assessed against the same criteria.
  • Ensure expectations are clearly communicated to students, for example by including statements like “Students may choose not to use AI; all criteria can be met without it.” in the course site and assessment documents.

Reduce weighting or assessment of offloadable activities (grammar, etc.)

Principle: Lower the weighting of activities that AI can automate; in many cases this includes grammar, spelling or basic graphic layout, unless they are core to the learning outcome.

A growing range of activities can be offloaded to AI. In many assessments we require students to engage with these activities even though they are not related to the purpose of the assessment. For example, in many written tasks, grammar, spelling and written expression are required for effective communication but are not the learning outcomes being assessed. We can expect a higher standard in these areas for students to pass, but these criteria should not be the deciding factor in whether a student’s work is recognised with a mark of 6 or 7.

Implications for practice

  • Be clear about the key learning outcomes students must achieve, so attention and support are focused on areas that cannot be compromised.
  • Free up time to provide targeted support and guidance.

Staff need to have current knowledge of AI and access to AI tools

Principle: Staff designing and marking assessment must understand AI affordances and limitations and regularly review rubrics to ensure criteria remain fit for purpose.

Implications for practice

  • Review your assessment and rubrics each semester – consider adding a standing agenda item to course review meetings.
  • Add a moderation checklist item: how is the assessment being impacted by AI?