
Assessment insights

12 September 2018

Introduction

In July 2016, TEQSA released a consultation paper on proposed extensions to TEQSA’s external reporting program, seeking submissions in relation to proposals for future reporting, particularly in relation to assessment outcomes and compliance with the Higher Education Standards Framework (the Standards).

Submissions were broadly supportive of high-level analysis of the areas of the Standards where issues were regularly encountered, and of risk assessment outcomes and their relationship to the outcomes of assessments. In its summary report on the consultation process, TEQSA indicated it would subsequently publish a report on assessment outcomes. Given the change to the 2015 Threshold Standards for applications submitted from 1 January 2017, it is now timely to reflect on the outcomes of applications submitted up to 31 December 2016.

This report provides an overview of assessment outcomes, organised by five themes: 

  • assessment outcomes by year and application type
  • prevalence of particular sets of issues leading to adverse assessments
  • differences in assessment outcomes by provider type
  • the time TEQSA takes to complete assessments 
  • the relationship between risk assessments and regulatory outcomes. 

The provider categories used in the analysis are, as proposed in the consultation paper[1]:

  • universities
  • higher education providers — for-profit
  • higher education providers — not-for-profit (divided by TAFE, faith-based and ‘other’ providers).

A range of regulatory outcomes are characterised as ‘adverse’ in this paper. Around 25 per cent of adverse decisions are outright rejections of an application — the remainder involve some combination of conditions and reduced period of provider registration or course accreditation. This means that, in around 75 per cent of adverse cases, TEQSA has approved the application with some form of sanction. This approach gives providers notice that TEQSA considers some form of improvement to be necessary, while allowing reasonable opportunity for providers to make improvements.

Methodology

This report covers all course accreditation, course re-accreditation, provider registration and provider renewal of registration assessments that led to a final decision in the years 2013 to 2017. Where an assessment is referenced in relation to a year, that year is the year of decision. Throughout the report, a ‘provider assessment’ refers to a provider registration or renewal of registration assessment, and a ‘course assessment’ refers to a course accreditation or renewal of accreditation assessment.

All of the assessments described in this report were conducted in relation to the 2011 Threshold Standards which, although not applied to new assessments from 1 January 2017, remained applicable to any assessments conducted on applications received prior to that date, including those finalised in 2017.

The analysis provides an account of past assessment outcomes completed under the Higher Education Standards Framework 2011, in accordance with TEQSA assessment practices current at the time each assessment was made.

Throughout the paper, a reference to an ‘adverse’ decision, unless otherwise specified, means a decision to reject an application, to approve an application for less than seven years, or to approve an application with conditions. Consequently, the meaning of ‘adverse’ encompasses a range of severity or sanctions, and so should not be understood as indicating that a provider was found actually non-compliant. Application of a condition or a shortened period of registration or accreditation has in general occurred where a provider has been found compliant but with a substantial risk of non-compliance.

The data underpinning the analysis include regulatory assessment outcomes from 2013 to 2017, and risk assessments from 2014 to 2017. In this paper, a reference to a risk rating means the rating of a provider in terms of overall risk to students (i.e. not taking into account financial risk).

Assessment decision times are calculated from the substantive assessment start date (complete application submitted and fee paid) where available, or otherwise from the application received date, to the date of first decision by the TEQSA Commission or its delegate. Decision times associated with any subsequent internal or external reviews are not included.
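The decision-time calculation described above reduces to a date subtraction with a fallback; a minimal sketch, using hypothetical dates and field names for illustration:

```python
from datetime import date
from typing import Optional

def days_to_decision(substantive_start: Optional[date],
                     received: Optional[date],
                     decided: date) -> int:
    # Use the substantive assessment start date (complete application
    # submitted and fee paid) where available; otherwise fall back to
    # the application received date.
    start = substantive_start if substantive_start is not None else received
    return (decided - start).days

# Hypothetical example: substantive assessment began 1 January 2015,
# first decision made 1 August 2015.
print(days_to_decision(date(2015, 1, 1), date(2014, 11, 15), date(2015, 8, 1)))  # 212
```

Time associated with any subsequent internal or external review would be excluded, as in the report's definition.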

In general, comparisons are made in terms of proportions rather than counts, as differences in the numbers of providers of each type and risk category mean that comparisons of counts can be misleading. Counts are provided where this may assist interpretation.

In some sections, specific groupings of assessments or providers are excluded from analysis. For example, where analysis relates to risk assessments, only assessments decided from 2014 are included, as 2014 was the first year of TEQSA’s current risk framework. Any other exclusions will be noted explicitly — otherwise, all providers and all assessments are included in each analysis.

Findings and discussion

Overview

  • There was a major increase in the number of initial registration applications received by TEQSA in 2016 and 2017. However, in spite of the increase in applications, the number of approvals in 2017 remained consistent with yearly approval numbers since TEQSA’s inception.
  • Issues referenced as the reasons for adverse decisions were focused, in registrations and re-registrations, on corporate and academic governance and on human resources and management. In course accreditations and re-accreditations, adverse decisions were focused on assessment, course design and teaching and learning. TEQSA has published guidance notes relevant to each of these areas.
  • Up to the end of 2017, no university had been subject to an adverse TEQSA decision. Of the remaining provider groups, for-profit providers were most likely to be subject to adverse decisions, followed by TAFE and not-for-profit faith-based providers, with providers in the not-for-profit (other) category least likely to be subject to an adverse decision. Differences were clearer in relation to registrations and renewals of registration than accreditations and renewals of accreditation.
  • The time that TEQSA takes to reach a decision is associated with the outcome of an assessment—adverse decisions take longer to make.
  • While there is a close association between risk ratings and regulatory outcomes, the relationship is not determinative—providers assessed as ‘high risk’ will not necessarily be subject to adverse decisions, and providers assessed as ‘low risk’ will receive adverse decisions on applications where evidence indicates non-compliance or a substantial risk of non-compliance.

Assessment Insights infographic showing median days to decision, main reasons for adverse outcomes and the number of initial registration approvals per year 2015 to 2017

Assessments by year

Existing providers

The proportion of TEQSA’s assessments of existing providers that led to an adverse decision was closely related to the extent to which these assessments were of providers with a high or low risk rating. However, because the proportions of assessments relating to high or low risk rated providers have varied over time, this relationship could lead to the incorrect inference of trends, or of volatility, in TEQSA’s decision making over time. It should be noted that TEQSA does not make regulatory decisions on the basis of a risk rating. While a provider’s risk rating does inform TEQSA’s initial scoping of an assessment, and may inform the level of attention TEQSA gives a provider, decisions on applications submitted by providers are made on the basis of evidence that providers do or do not meet the Standards.

Risk balance

Figure 1 shows the proportion of adverse (re-registration, accreditation and re-accreditation) decisions on applications by existing providers in half-yearly periods from 2014 through 2017, along with a proportional measure we have termed ‘risk balance’. We give the name risk balance to a measure of the extent to which the assessments in a given period are concentrated on high risk or low risk rated providers. The risk balance measure can take a value between zero and one and is calculated by dividing the number of assessments of high risk providers by the sum of assessments of low and high risk providers. Because the extent of moderate risk assessments is excluded from the calculation, risk balance can be understood as the extent to which high or low risk dominates at the extremes of risk. A value of zero would mean that no assessments in a period were of high risk providers and a value of one would mean that no assessments in a period were of low risk providers. 
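The risk balance measure defined above is a simple ratio; a minimal sketch, using hypothetical half-yearly counts:

```python
def risk_balance(n_high: int, n_low: int) -> float:
    """Share of the 'extreme' (high or low) risk assessments that are high risk.

    Moderate risk assessments are excluded from the calculation, as in the
    report's definition: 0 means all extreme-risk assessments in the period
    were of low risk providers, 1 means all were of high risk providers.
    """
    if n_high + n_low == 0:
        raise ValueError("no high or low risk assessments in the period")
    return n_high / (n_high + n_low)

# Hypothetical half-year: 6 assessments of high risk providers, 14 of low risk
print(risk_balance(6, 14))  # 0.3
```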

Figure 1 shows that assessment risk balance and the proportion of adverse assessments in a given half-year period are closely related. It also indicates that an analysis of assessment outcomes over time that does not account for risk may lead to significant misinterpretation.

Figure 1. Line graph showing risk balance proportion and adverse proportion by half-yearly period, 2014 to 2017

The close relationship between risk balance and outcome over 2014 to 2017 is demonstrated more clearly in Figure 2, which is a scatterplot showing the proportion of adverse assessments decided in each half-year against the risk balance in that same period, along with a simple linear regression trend line. This shows a very close relationship between assessment risk balance and adverse assessment proportion, as risk balance alone accounts for 94 per cent of variation in adverse assessment proportion.
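The strength of such a relationship can be checked with an ordinary least-squares fit; a sketch computing the coefficient of determination from first principles, using hypothetical half-yearly observations rather than TEQSA's actual figures:

```python
def r_squared(x: list, y: list) -> float:
    # Coefficient of determination for a simple linear fit y = a + b*x,
    # computed from first principles (ordinary least squares).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx          # slope
    a = my - b * mx        # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Hypothetical per-half-year data: risk balance vs adverse proportion
balance = [0.2, 0.35, 0.5, 0.6, 0.75]
adverse = [0.15, 0.3, 0.45, 0.5, 0.7]
print(round(r_squared(balance, adverse), 3))
```

An R² of 0.94, as reported, would mean 94 per cent of the variation in adverse proportion is accounted for by risk balance alone.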

Figure 2. Scatterplot showing proportion of adverse decisions on existing provider applications by risk balance, with linear trend line

In general, since the introduction of TEQSA’s current risk framework in 2014, the proportion of TEQSA assessments leading to adverse decisions in a given half-yearly period has tracked very closely to the balance of risk relating to those assessments over the same period. This means that, having accounted for risk, the proportion of TEQSA’s adverse decisions relating to existing providers has remained largely constant over the period 2014 to 2017.

Prospective providers

Approvals of new provider registrations have remained roughly consistent in number, at between four and seven per year, over 2014 to 2017[2]. However, a major increase in applications in 2016 means that the proportion of finalised applications that were not approved substantially increased in 2017. For example, while four of the five initial registration assessments that concluded in 2014 (80 per cent), five of seven in 2015 (71 per cent), and seven of nine in 2016 (78 per cent) resulted in a new provider being registered, in 2017 only four of 28 finalised initial registration assessments (17 per cent) were approved[3]. Figure 3 plots the number of applications that reached substantive assessment stage that were finalised with or without approval over the years 2014 to 2017. It shows that while the number of applications finalised increased substantially in 2017, there was no increase in approved registrations.

Figure 3. Number of substantive initial registration assessments approved and not approved/withdrawn

The issues that have predominated in the more recent suite of initial provider registration applications include:

  • poorly designed corporate and academic governance structures, and governing body members without experience in higher education governance
  • weaknesses in financial evidence — for example, unrealistic student projections and expectations of ready receipt of government funding
  • plagiarism identified in course outlines
  • little evidence of capability in development or implementation of higher education quality assurance frameworks
  • little evidence of capability in risk management, strategic planning, scholarship and scholarly activities in a higher education context
  • a lack of academic leadership, and proposed staff with inadequate higher education teaching experience.

While this suite of issues led to compliance failures of prospective entrants across a broad range of Standards, it is notable that all applicants formally rejected for initial registration were found not to comply with at least one, but usually more than one, standard relating to corporate and academic governance. After governance, the most common issues related to financial sustainability and viability and to academic leadership and staffing.

It is important to note that TEQSA publishes a range of guidance notes to provide greater clarity for providers in the interpretation and application of standards. Separate guidance notes covering academic and corporate governance, course design, financial assessment, scholarship and staffing, learning resources and education support are available and are particularly relevant in relation to the issues identified in this report.

Prevalence of issues

TEQSA has categorised its adverse assessment decisions in relation to the Standards that were found not to have been met, or at risk of not being met, and that led to an adverse outcome, whether outright rejection of an application, approval of an application for a period shorter than seven years, or approved with conditions.

Figure 4 shows the proportion of adverse registration and renewal of registration assessments that led to sanctions relating to each of the seven 2011 Provider Registration Standards (PRS) sections. It shows very clearly that when TEQSA reached adverse findings on assessments, these tended to be in relation to particular kinds of issues—corporate and academic governance (PRS 3), and management and human resources (PRS 5). Note that an adverse decision may relate to a number of PRS sections, and each standard referenced has been included in the data informing Figure 4.

Figure 4. Bar chart showing proportion of adverse re/registration decisions referencing registration standard sections as a reason for an adverse finding (2013-2017)

PRS 3 comprises eight standards relating to a range of governance-related requirements, including board responsibilities and composition, risk management, delegations and quality assurance, among other things. However, the issues identified within providers were not spread evenly across the range of standards under PRS 3. For example, of the registration assessments resulting in sanctions related to PRS 3, more than 80 per cent of these included a reference to PRS 3.8, a standard that relates primarily to a provider’s ability to quality assure itself through its corporate and academic governance arrangements. The next most prevalent governance-related standard was PRS 3.4, relating to risk management, with around 30 per cent of governance-related sanctions referencing this standard. Standards under PRS 5 are more closely related to each other and the spread of issues under PRS 5 was more evenly distributed across its six standards, with a general focus on academic staffing and workforce planning issues.

One reason why corporate and academic governance, and management and human resources, tended to dominate issues found in re/registration assessments is that these have been areas of focus for TEQSA—forming part of the core of TEQSA’s ‘Core +’ approach to assessments[4]—and as a result have determined the kinds of evidence requested of providers and the direction of attention in assessments.

Academic governance, and management and human resources, are particularly important for the effective functioning of a higher education provider, as issues in these areas tend to lead to issues in other areas. When providers have effective governance and management, and high quality staff, issues in relation to the other standards categories may still arise. However, internal quality assurance arrangements are more likely to capture these issues and lead to remediation without TEQSA’s input.

Figure 5 shows the proportion of adverse course accreditation and renewal of accreditation assessments that led to sanctions relating to each of the six 2011 Provider Course Accreditation Standards (PCAS) sections. It shows a concentration of issues around assessment, course design, and teaching and learning (i.e. academic staffing).

Figure 5. Bar chart showing proportion of adverse re/accreditation decisions referencing accreditation standard sections as a reason for an adverse finding (2013-2017)

The design of a course, who teaches the course and how students’ work is assessed are clearly matters central to successful course delivery, so it ought not to be surprising that issues relating to these areas were the major causes of sanctions being applied to course accreditations.

TEQSA has published guidance notes focused on matters that have commonly led to adverse decisions on applications, including, though not limited to: academic governance; corporate governance; staffing, learning resources and educational support; academic integrity; and course design. These guidance notes are intended to provide advice and greater clarity when interpreting and applying selected areas of the Standards. While they should not be read as sets of instructions, providers familiar with the content of the guidance notes will have a better understanding of how TEQSA views a range of matters central to compliance with the Standards.

Assessment outcomes and provider types

One way to compare assessment outcomes for different provider categories is to compare the numbers and proportions of assessments leading to adverse findings for each category of provider. Table 1 shows counts of registration/re-registration assessments and for course accreditation/re-accreditation assessments, and Figure 6 shows these as proportions.

Table 1. Number of assessments resulting in an adverse decision, by provider type (2013-2017)

| Provider type                | Registration and re-registration total | Registration and re-registration adverse | Course assessment total | Course assessment adverse |
|------------------------------|----------------------------------------|------------------------------------------|-------------------------|---------------------------|
| For-profit                   | 86                                     | 58                                       | 669                     | 257                       |
| TAFE                         | 15                                     | 6                                        | 195                     | 52                        |
| Not-for-profit (faith-based) | 21                                     | 6                                        | 269                     | 56                        |
| Not-for-profit (other)       | 33                                     | 8                                        | 129                     | 22                        |
| University                   | 29                                     | 0                                        | 0                       | 0                         |

Figure 6. Bar chart showing proportion of assessments resulting in an adverse decision, by provider type (2013-2017)

Figure 6 shows most clearly that for the period 2013-17: 1) for-profit providers have had a substantially higher proportion of adverse findings compared to other provider types, particularly in relation to new registrations and renewals of registration, and 2) universities have had no adverse assessments.

The higher proportion of for-profit providers with adverse assessments is likely to have been associated with the risk profile of that group[5]. For example, in 2017 only 13 per cent of for-profit providers were classified as presenting a low risk to students while 54 per cent were classified as high risk. In contrast, 40 per cent of not-for-profit higher education providers were classified as a low risk to students in the same year, and 11 per cent were classified as high risk. Notably, only one adverse assessment of a for-profit provider was of a provider classified as presenting a low risk to students. The final section of this paper considers the relationship between risk to students and assessment outcomes.

For universities, it is necessarily the case that they have no adverse course assessments — because all universities have self-accrediting authority, TEQSA does not assess university courses for accreditation. However, TEQSA has assessed universities for re-registration, and over the period considered in this analysis, these assessments did not lead to any adverse decisions. Although there is variation within the group, on quantitative measures universities were as a whole the lowest risk category of providers in Australian higher education, and this was reflected in the outcomes of TEQSA’s assessments of universities during the period.

Time to completion by outcome and provider type

Assessments that led to adverse decisions tended, for a range of reasons, to require more work and deliberation, from a greater number of people, to complete. As a result, the time TEQSA has taken to make a decision on an application has differed substantially depending upon the assessment outcome. Table 2 provides summary statistics (median, first quartile and third quartile) of decision time for each category of assessment, while Figure 7 shows the median time in days to decision (not including any subsequent external reviews of adverse decisions) for assessments of providers (registration and re-registration) and for assessments of courses (accreditation and re-accreditation)[6].

Table 2. Summary statistics—days to decision by assessment type and assessment outcome

| Assessment type                           | Outcome            | Quartile 1 | Median | Quartile 3 |
|-------------------------------------------|--------------------|------------|--------|------------|
| Course accreditation and re-accreditation | Approved           | 132        | 210    | 286        |
|                                           | Approved (adverse) | 260        | 274    | 458        |
|                                           | Not approved       | 274        | 352    | 498        |
| Provider registration and re-registration | Approved           | 157        | 217    | 295        |
|                                           | Approved (adverse) | 269        | 344    | 437        |
|                                           | Not approved       | 337        | 417    | 488        |

Figure 7. Bar chart showing median number of days to decision by assessment type and outcome (2013-2017)

While overall assessment times were at least in part likely to be a function of resourcing, a key differentiator between shorter and longer assessment times, given a particular resourcing profile, was the eventual outcome of the application. Figure 7 shows clearly TEQSA’s longer median decision times for assessments that led to adverse decisions for both assessments of providers and assessments of courses.

Much of the difference in assessment times between adverse and non-adverse assessments was associated with the period after case teams completed their initial assessment, when, in the case of proposed adverse assessments, providers were afforded an opportunity to respond to the draft recommendation before a final decision was made[7]. This was because the time that elapsed between TEQSA sending a draft decision to a provider and making a final decision included an allocation of time for the applicant to formulate a response, as well as the time taken by TEQSA to analyse any new material submitted by the applicant in response, and subsequently to finalise the recommendation.

In addition to time added to an assessment that is associated with an adverse outcome, evidence from TEQSA’s case managers suggests that application presentation, coherence, relevance and parsimony also contributed to shorter assessment times, as did prompt responses by providers to requests for additional information.

Assessment outcomes and risk ratings

Provider risk ratings have influenced assessments in a number of ways—in particular, they have helped guide the initial scope of standards against which assessments have been conducted, and helped establish the depth of evidence that was required in relation to those standards. This meant that providers with higher risk ratings were more likely to be subject to more comprehensive assessments, conducted on the basis of larger sets of evidence.
The closer attention that TEQSA paid to high risk providers meant that, where there was a compliance issue, it was more likely to be found. However, TEQSA does not make assessment decisions on the basis of risk—adverse decisions of any kind are made on the basis of evidence of a provider not meeting, or of being at substantial risk in the future of not meeting, the Standards.

Individual assessments and risk ratings

Figure 8 shows the proportion of assessments that led to adverse re-registration and accreditation/re-accreditation decisions[8], separated by overall risk to students as assessed for that provider for the year the decision was made. The chart excludes universities because, as no university received an adverse finding in the period analysed, the inclusion of universities would tend to de-emphasise adverse decisions for other low risk providers. Providers for which TEQSA had no confidence in data (NCID) and providers that were unrated for other reasons are grouped together. The chart makes clear that providers assessed as presenting a high risk to students were more frequently subject to adverse decisions than were moderate and low risk rated providers.

Figure 8. Bar chart showing proportion of adverse decisions by ‘risk to students’ rating (excl. universities) (2014-2017)

Additionally, Figure 8 shows that providers for which TEQSA had no confidence in data or were unrated were roughly as likely as high risk providers to be subject to an adverse decision. Being unable to submit quality data can reflect issues relating to governance and management in a provider. As mature governance and management practices contribute substantially to overall quality assurance within a provider, it is not surprising that providers submitting unreliable data are more likely to be subject to adverse decisions. Given this association between poor data and poor assessment outcomes, and to encourage improvement in data quality over time, in 2017 TEQSA adopted a policy that where TEQSA is unable to determine a risk rating for a provider due to the poor quality of data submitted, TEQSA will by default rate that provider as ‘high risk’.

Individual providers and risk ratings 

Whereas Figure 8 showed percentages of all assessments leading to adverse decisions, Table 3 shows the percentage of providers with a given risk to students rating with at least one adverse decision over the period 2014 to 2017, again excluding universities. Considering assessments by provider in this way corrects for the situation where a small number of providers submit a large number of course-related assessments, disproportionately influencing analysis at the assessment level.

Because providers’ overall risk to students ratings sometimes changed over the period of analysis, some providers are counted multiple times in Table 3. For example, if a provider was assessed for a renewal of registration twice within the period, and their risk rating changed in between assessments, that provider would be counted twice — once for each risk rating. However, if the provider was assessed for re-registration twice during the period but their risk rating did not change, then the provider would be counted once. The same goes for course assessments.
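The counting rule above can be sketched in a few lines, under stated assumptions about the record structure (the tuple layout here is hypothetical, for illustration only):

```python
from collections import defaultdict

def adverse_by_rating(assessments):
    """Count each provider once per distinct risk rating held when assessed.

    `assessments` is an iterable of (provider_id, risk_rating, adverse)
    tuples — an assumed record structure. A (provider, rating) pair counts
    as adverse if any assessment under that rating was adverse.
    """
    any_adverse = {}
    for provider, rating, adverse in assessments:
        key = (provider, rating)
        any_adverse[key] = any_adverse.get(key, False) or adverse
    totals = defaultdict(int)
    adverse_counts = defaultdict(int)
    for (_, rating), was_adverse in any_adverse.items():
        totals[rating] += 1
        adverse_counts[rating] += was_adverse
    # rating -> (providers with at least one adverse decision, providers total)
    return {r: (adverse_counts[r], totals[r]) for r in totals}

# Provider P1 assessed twice while rated High (counted once), and once more
# after its rating changed to Moderate (counted again, per the rule above).
records = [("P1", "High", True), ("P1", "High", False),
           ("P1", "Moderate", False), ("P2", "Low", False)]
print(adverse_by_rating(records))  # {'High': (1, 1), 'Moderate': (0, 1), 'Low': (0, 1)}
```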

It is clear from Table 3 that the association between risk to students and regulatory outcome remains when the focus shifts from assessments to providers.

Table 3. Percentage of providers assessed for an application type receiving at least one associated adverse decision from 2014 to 2017, by risk to students rating

| Assessment type                                   | High risk   | Moderate risk | Low risk   | NCID or unrated |
|---------------------------------------------------|-------------|---------------|------------|-----------------|
| Provider renewal of registration                  | 83% (19/23) | 62% (13/21)   | 11% (4/36) | 91% (10/11)     |
| Course accreditation and renewal of accreditation | 81% (25/31) | 41% (19/46)   | 16% (7/44) | 70% (37/53)     |

While a provider’s risk to students rating and its assessment outcomes were closely associated, this association was not deterministic—19 per cent of high risk providers with course assessments over 2014-2017 received no sanctions in relation to any of those assessments, and similarly for 17 per cent of high risk providers in relation to re-registration. Providers that appeared problematic, based purely on risk assessments, were assessed with no adverse findings where evidence suggested this was appropriate. Similarly, providers with low risk ratings were subject to adverse findings where a close assessment revealed deeper issues, or issues that were not reflected in TEQSA’s standard data collection.

Conclusion

This report includes a range of information that might help existing and prospective providers better understand how they might improve the results of their applications to TEQSA.

For prospective providers, it should be clear that, while entry to the higher education sector is not easily gained, many prospective providers would have been in a much better position had they attended more carefully to some of the more basic requirements for entry. For example, prospective providers increased the likelihood of success when they ensured that they had appropriate governance structures in place prior to application, avoided overly optimistic accounts of expected financial performance, sourced people with the necessary levels of expertise to set up a new provider, and proposed a future staff mix suited to their operations. In general, unsuccessful prospective providers appeared not to have appreciated the level of preparation necessary to apply successfully for entry to the higher education sector.

For existing providers, this report should make clear that focusing on strengthening foundational provider attributes (such as governance and staffing) and course attributes (such as course design and assessment), should help to improve their application outcomes as well as the assessment times associated with those applications. In making these improvements, providers may also find that their risk ratings improve over time.

TEQSA endeavours to provide advice and greater clarity when interpreting and applying selected Standards through its publication of guidance notes, and publishes a range of other material to help providers understand TEQSA’s processes and its approach to regulation. Existing and prospective providers are encouraged to make use of this information and the findings presented in this report to assist them when preparing for a registration or accreditation application.

Further information

A range of information is available on TEQSA’s website about how TEQSA conducts assessments and develops provider risk ratings.

  1. Although some submissions to TEQSA proposed additional analysis in relation to provider size, TEQSA was unable to identify a consistent association (universities excluded on account of the difference in scale swamping analysis) between student EFTSL and assessment outcomes or between student EFTSL and risk assessment outcomes.
  2. It should be noted that some of these registrations were related to changes to existing providers that meant that they were required to register as a new entity.
  3. Note that applications not approved includes applications withdrawn by the provider before decision, as with very few exceptions these withdrawals are in response to TEQSA advice that the assessment is leading towards a recommendation to reject the application for registration.
  4. More information on TEQSA’s regulatory approach.
  5. It should be noted that provider type is not a consideration in determination of risk rating—in particular, for-profit or not-for-profit status does not affect TEQSA’s assessment of a provider’s risk to students rating. In 2017, for example, the risk indicators most closely linked to an overall risk to students rating were student-to-staff ratio, the casualisation rate of academic staff, and student progress and first year attrition rates.
  6. Decision times are calculated from the date substantive assessment began (where available—or otherwise from the date the application was received) to the date a decision was made by the Commission or a delegate.
  7. Providers will normally be afforded 28 days to respond to a draft recommendation.
  8. i.e. rejected, approved for less than seven years and/or approved with conditions.