The Australian Medical Council is an organisation whose work has an impact across the lands of Australia and New Zealand.

The Australian Medical Council acknowledges the Aboriginal and/or Torres Strait Islander Peoples as the original Australians and the Māori People as the tangata whenua (Indigenous) Peoples of Aotearoa (New Zealand). We recognise them as the traditional custodians of knowledge for these lands.

We pay our respects to them and to their Elders, past, present and emerging, and we recognise their enduring connection to the lands on which we live and work, and honour their ongoing connection to those lands, their waters and skies.

Aboriginal and/or Torres Strait Islander people should be aware that this website may contain images, voices and names of people who have passed away.

Standard 3


3.1 Assessment design

3.1.1      Students are assessed throughout the medical program through a documented system of assessment that is:

  • consistent with the principles of fairness, flexibility, equity, validity and reliability
  • supported by research and evaluation evidence.

3.1.2      The system of assessment enables students to demonstrate progress towards achieving the medical program outcomes, including described professional behaviours, over the length of the program.

3.1.3      The system of assessment is blueprinted across the medical program to learning and teaching activities and to the medical program outcomes. Detailed curriculum mapping and assessment blueprinting is undertaken for each stage of the medical program.

3.1.4      The system of assessment includes a variety of assessment methods and formats which are fit for purpose.

3.1.5      The medical education provider uses validated methods of standard setting.

3.1.6      Assessment in Aboriginal and/or Torres Strait Islander and Māori health and culturally safe practice is integrated across the program and informed by Aboriginal and/or Torres Strait Islander and Māori health experts.

A ‘system of assessment’ refers to how the medical program explicitly blends separate assessments to achieve different purposes for a variety of stakeholders. A system of assessment should assess the knowledge, skills and behaviours students are expected to learn in the medical program, and use resources to address the needs of students, educators, the healthcare system, patients and other stakeholders. The system should promote learning and appropriate standards. Separate assessments within the system should be supported by quality assurance processes (see Standard 3.3: Assessment quality). The system, and the separate assessments within it, should be consistent with assessment principles and evidence-informed criteria (such as being purpose-driven, acceptability, transparency and coherence) (3.1.1, 3.1.2, 3.1.3 and 3.1.4).

The ‘professional behaviours’ towards which students should demonstrate progress through assessment include culturally safe behaviours (3.1.2).

The named principles of assessment (fairness, flexibility, equity, reliability, validity) are complex and often interconnected concepts. Providers may choose to adopt additional principles to guide the design and implementation of their system of assessment. A few simple examples of how the principles of assessment interact could include:

  • Accessibility of assessment such as approaches to reasonable adjustments/accommodations or transparency of rules/regulations/policies (fairness, equity).
  • Factors affecting reliability/precision/consistency of assessment and how that is considered during the assessment process (validity, reliability).
  • Consideration of student circumstances that may require flexibility in administration (flexibility).
  • Influence of cultural safety considerations on assessment design (equity, validity).
  • Equity of access to technology used in the assessment process (fairness, equity).
  • Defensibility of assessment decision making (validity).

‘Fit-for-purpose’ assessment methods refers to the selection of methods that are appropriate to the intended learning outcomes, the learning and teaching activities, and the intended purpose of the assessment (3.1.4).

That assessment in Aboriginal and/or Torres Strait Islander and Māori health and culturally safe practice is ‘integrated across the program’ refers to this assessment being embedded across curriculum areas and different medical disciplines, and occurring regularly throughout the program, rather than being confined to stand-alone Aboriginal and/or Torres Strait Islander and Māori health components of the program and/or isolated to a few points in time (3.1.6).

Documentary evidence could include:

  • Assessment planning documents across the program, such as an assessment strategy and key assessment policies, regulations and rules, and assessment requirements.
  • The high-level assessment schedule across the program. This could include the weight of individual assessments, the extent to which performance in some assessment activities can compensate for underperformance in others, and requirements for progression (e.g. barrier/hurdle requirements).
  • Descriptions of how the governance of the program supports the system of assessment.
  • Descriptions of how assessment is resourced across the program.
  • Supporting research and evaluation evidence that demonstrates that the system and single assessments are working as intended.
  • Blueprints at system level that demonstrate how the system and single assessments align with the medical program outcomes.
  • Blueprints at single assessment level which demonstrate alignment of curriculum to the assessment, and to learning and teaching activities in each stage of the medical program.
  • Descriptions of how the blueprints at single assessment level are made coherent with the blueprints at system level.
  • Descriptions of the validated standard-setting methods.
  • Planning and implementation documents related to Aboriginal and/or Torres Strait Islander and Māori health assessment demonstrating how assessment methodologies are informed by Aboriginal and/or Torres Strait Islander and Māori health experts and pedagogies.

Interview and observational evidence could include:

  • Discussions with staff responsible for assessment on the program’s approach to assessment.
  • Discussions with staff implementing the curriculum on how the assessment links to learning and teaching in the program.
  • Discussions with students on their experience of assessment, particularly aspects related to transparency, equity and fairness.
  • Discussions with Aboriginal and/or Torres Strait Islander and Māori staff and experts on the integration and involvement of experts in Aboriginal and/or Torres Strait Islander and Māori health assessment development and implementation.
  • Observation of key assessment activities.

An example from Bond University, Faculty of Health Science and Medicine

Phase 2 (YR4-5) of the Medical Program underwent a major change in structure in 2023: MD students now progress at the end of each Subject/Semester, rather than at the end of each year. This ‘Semesterisation’ allows greater flexibility in the use of student placements across the two-year MD journey, gives students the opportunity to personalise their MD learning journey, and accommodates two cohorts of students entering the MD at different points in the calendar year. It also allows students who fail a Subject to repeat the component that caused them to fail, rather than repeat a whole year of content.

How does this fit in the broader system of assessment?

The MD Program Blueprint (Outcome 3.1.3) details the assessment journey of a medical student as they progress through the five-year program. It details the transition from Phase 1 block-based teaching and assessment, with a focus on assignments and exams for scores and grades, through to an ungraded pass/fail competency model as students enter Phase 2. This two-year clinical apprenticeship focuses on the evaluation of multiple workplace-based assessments, clinical performance in OSCEs, and multiple longitudinal lower-stakes tests of intern-level knowledge known as Progress Tests (Outcome 3.1.4). Having students repeat one semester for failure of a domain of competency, rather than the whole year, is considered more consistent with the principles of fairness (Outcome 3.1.1), and repeating students are given a carefully curated, mentored and monitored repeat clinical experience, designed to meet their identified individual learning needs and support them to achieve their academic potential. Pass/fail rates for repeat subjects will be monitored to ensure this strategy is sound.

What led to this change?

To meet the growing needs of the Australian health system for interns, the Bond Medical Program has had two intakes of students since 2020, entering in the May and September semesters. The 2023 change to the Rules of Assessment and Progression, from year-long Subjects to progression at the end of each semester, aims to provide both cohorts with an equitable clinical learning experience and to take advantage of the two entry points to Phase 2, allowing students to complete an Honours Subject or take a leave of absence (for example, for giving birth, illness or caring responsibilities) without significant time penalty. The Medical Program has simultaneously expanded its placement offerings, giving students increased ability to personalise and enrich their medical journey with a choice of placements, domestically and internationally. Equity of student experience in Phase 2 (Outcome 3.1.1), whilst providing this flexibility, is supported by regular communication to clinical sites and Leads via Joint Placements meetings and the sharing of best practice via Clinical Advisory Board meetings. The student experience is monitored via Clinical Advisory Board meetings, TEVALs and Clinical Placement evaluation surveys (Evaluation outcome).

Contact – A/Prof Carmel Tepper
Academic Assessment Lead

3.2 Assessment feedback

3.2.1      Opportunities for students to seek, discuss and be provided with feedback on their performance are regular, timely, clearly outlined and serve to guide student learning.

3.2.2      Students who are not performing to the expected level are identified and provided with support and performance improvement programs in a timely manner.

3.2.3      The medical education provider gives feedback to academic staff and clinical supervisors on student cohort performance.

Feedback that is ‘clearly outlined’ and ‘serve[s] to guide student learning’ should be transparent and related to specific learning outcomes and their component objectives.

‘Performance improvement programs’ refer to a formal process to assist students who are experiencing difficulties to improve their performance, with a focus on early identification, provision of feedback and support. Providers should recognise in the design of performance improvement programs that multiple factors can impact on performance, including individual skills, wellbeing and the work environment. All these factors should be considered and addressed in a performance improvement program (3.2.2).

Providers should give regular and actionable feedback to academic staff and clinical supervisors on student cohort performance (3.2.3). The process and outcomes of providing this feedback should be explained under Standard 3.2: Assessment feedback. The communication of more formal evaluation and continuous improvement activities to stakeholders, including internal stakeholders such as academic staff and clinical supervisors, should be explained under Standard 6.3: Feedback and reporting.

Documentary evidence could include:

  • Descriptions of processes that determine how feedback is sought by, discussed with and provided to students, including any policies that support feedback processes.
  • Case studies of feedback to students that show how feedback is regular, timely and clearly outlined, and how students are able to act on that feedback, so that the feedback serves to guide student learning.
  • Sample feedback forms and feedback rubrics, accompanied by descriptions of how these are used in practice.
  • Descriptions of student input/engagement in developing feedback reports.
  • Agendas and minutes from meetings of education-related committees and/or assessment committees that demonstrate how feedback issues are addressed.
  • Descriptions of how students who are not performing to the expected level are identified and provided with support and performance improvement or learning programs, and the strategies/policies that support these performance improvement programs.
  • Examples of individualised performance improvement programs.
  • Descriptions of the mechanisms to provide feedback to supervisors and students on student cohort performance.

Interview and observational evidence could include:

  • Discussions with students on how they seek and are provided with feedback; and how performance improvement programs function.
  • Discussions with academic staff and clinical supervisors on how they approach feedback and performance improvement programs; and the feedback they receive from the provider on student cohort performance.

There are no examples at this time.

There are no resources at this time.

3.3 Assessment quality

3.3.1      The medical education provider regularly reviews its system of assessment, including assessment policies and practices such as blueprinting and standard setting, to evaluate the fairness, flexibility, equity, validity, reliability and fitness for purpose of the system. To do this, the provider employs a range of review methods using both quantitative and qualitative data.

3.3.2      Assessment practices and processes that may differ across teaching sites but address the same learning outcomes are based on consistent expectations and result in comparable student assessment burdens.

The AMC does not specify how regularly providers should review their systems of assessment, but the frequency of review should be supported by evidence and should maintain the system’s continued fitness for purpose, underpinned by the key principles outlined in Standard 3.1: Assessment design. The ‘range of review methods’ providers employ may, depending on the mix of assessments used, include psychometric analyses, benchmarking or calibration analyses, analyses of passing and attrition rates across the program, feedback from staff, and feedback from students (e.g. via surveys or other mechanisms) (3.3.1).

That assessment ‘may differ’ across teaching sites refers to the provider having the discretion to implement a mix of common and site-specific assessments that would be appropriate for the program, with attention to the implications of doing so (3.3.2).

To ensure assessment practices and processes are ‘based on consistent expectations’, the provider should, depending on the assessment method, provide marking rubrics, engage in activities that support examiner and assessor consistency in assessment methods that incorporate standardised elements, adopt appropriate standard setting, and incorporate benchmarking/calibration activities across sites (3.3.2).

‘Student assessment burdens’ refers to the amount of time spent preparing for, travelling to and undertaking assessments. Providers should consider how to manage these burdens in cases where groups of students may have a higher burden, such as students who must travel to undertake an assessment (3.3.2).

Documentary evidence could include:

  • Descriptions of the review process applied to the system of assessment, including how the provider evaluates fairness, flexibility, equity, validity, reliability and fitness for purpose, and the regularity of the review.
  • Reports on the design/review of the system of assessment.
  • Agendas and minutes from meetings of governance committees that relate to assessment.
  • Student surveys/questionnaires on student experience of assessment and analysis of feedback related to assessment.
  • Case studies of how changes to the system of assessment have emerged from review processes.
  • Analyses of assessment outcomes across education sites.
  • Assessment rubrics.
  • Assessor training session materials.
  • Blueprints of separate assessments.
  • Curriculum-assessment blueprints demonstrating alignment of learning outcomes, learning objectives and sampling strategies.

Interview and observational evidence could include:

  • Discussions with academic staff responsible for assessment about review processes for assessment, training sessions etc.
  • Discussions with students in different sites about their relative assessment experiences and assessment burdens.

An example from University of Melbourne, Melbourne Medical School

Our approach to maximising the reliability and validity of our assessments includes selection of appropriate and authentic assessment tasks, constructive alignment with teaching and learning activities (including blueprinting to reflect teaching emphasis), standardisation of assessments, and use of sampling where standardisation is not appropriate. It also involves expert item writers and team-based assessment development processes, involvement in benchmarking activities, ongoing staff development opportunities, rigorous standard-setting procedures, detailed post-test psychometric analysis and extensive evaluation processes.

Our Clinical Assessment Review Panel (CARP) meets fortnightly throughout the academic year to develop and review the OSCE and SCBD stations used throughout the program. Membership of this group includes members of the assessment team, subject coordinators, discipline leads and teaching staff from clinical school sites. Our Written Assessment Review Panel (WARP) meets weekly throughout the year to produce, critique and review our new written assessment items. Membership includes assessment team members, subject coordinators, discipline leads and staff closely involved in clinical teaching delivery at multiple sites. Likewise, we have a Situational Judgment Test review panel, which develops and refines our SJT items in collaboration with our professional practice team and year-level and subject coordinators. The medical course continues to benchmark its students’ performance nationally across all years of the MD through engagement with AMSAC, MDANZ and AMC benchmarking projects.

Staff development opportunities include short courses, workshops in assessment item development, online assessor training modules, and formal study in assessment through the Excellence in Clinical Teaching Program (EXCITE). We regularly offer item-writing workshops to promote skill development within our Department and invite attendance from members of Faculty who contribute to our teaching and assessment program. Our online assessor training modules (for OSCE, CEX and SCBD assessments) assist with the training and calibration of examiners of clinical assessments. These modules include simulated videos of typical student performance at borderline and clear pass levels for all clinical examination formats. Examiners are required to view and score the performances prior to the formal examination, in addition to the just-in-time assessor training on the morning of each assessment delivery.

The Evaluation Team (in consultation with the assessment team) prepares assessment reports for the Board of Examiners meetings at the completion of each subject. These reports provide reliability coefficients and compare variability within items/stations and across years; they are then submitted to the MD Operations Committee. Following committee review, the reports are circulated broadly to support ongoing staff development and quality improvement.

There are no resources at this time.
