ASPIRE for quality is an evidence-based tool developed by the International Centre for Allied Health Evidence to evaluate clinical service performance in South Australian Local Health Networks.

The development, implementation and pilot evaluation of the ASPIRE was funded by the Allied Health and Scientific Office, Department of Health, South Australia.

For more information on ASPIRE email:

What is ASPIRE?

Allied health clinical performance evaluation should be underpinned by processes that are grounded in research and informed by the perspectives of different stakeholders (i.e. allied health practitioners, managers/directors, consumers). It should be reinforced by a long-term vision to improve overall health outcomes, health service delivery, the allied health workforce, and healthcare utilisation and cost.

The ASPIRE for quality framework was developed to assist allied health practitioners to evaluate their clinical service performance as a means of improving the quality of allied health services. The framework is based on a systematic review of the literature on performance evaluation systems, layered with a local snapshot of current practice in performance evaluation in South Australian health networks.

The ASPIRE model captures the core elements of performance evaluation: prioritisation of a clinical area for evaluation, upfront articulation of goals, careful identification of performance measures, mapping of measures to information sources, analysis of performance data and reporting of results, and evaluation of the performance evaluation system itself. The implementation of an effective performance evaluation, however, is hindered by an interplay of factors, chief among them lack of time, limited resources and a limited understanding of the evaluation process. In recognition of these barriers, ASPIRE utilises a collaborative approach between allied health practitioners and experienced researchers who are skilled in providing evaluation training and in undertaking performance evaluation. The ASPIRE model divides the core tasks between researchers and allied health practitioners from the health site, as outlined in Table 1. The researchers provide strong initial support and guidance, which gradually reduces to enable practitioners to establish and maintain independence and to promote a sense of ownership of the performance evaluation system. The ASPIRE model is thus well suited to building capacity that increases the likelihood of allied health practitioners conducting performance evaluation in the future.

Table 1: ASPIRE for quality model


***Tasks in blue text boxes are responsibilities of allied health practitioners, and those in red are shared responsibilities of researchers and practitioners


How to operationalise the ASPIRE tool

Integral to the ASPIRE model is the formation of an evaluation team from the allied health site prior to undertaking performance evaluation. The team may include any of the following personnel: a manager, senior staff, an independent accreditor or human resources personnel. Each team member must understand, and be clear on, what will be measured and what the foundations for evaluation are. The team should actively contribute to, and agree on, the processes, methods and tools for evaluation, and each member should be clear on their roles and responsibilities. The evaluation team works closely with the researchers, and regular meetings should be organised to monitor the process.

The ASPIRE for quality framework includes a set of tools which allied health practitioners (i.e. evaluation team) can use to facilitate the process of evaluation.

The following outlines the steps involved in ASPIRE.

ASPIRE A: Area for evaluation

The evaluation team from the allied health site identifies a clinical area for performance evaluation. The following questions/checklist may be used as a guide in determining a priority area for evaluation.

  • Is it important and relevant to the group for which the performance measurement system is being produced?
  • Is it problem-prone and with high frequency of occurrence, or is it suspected of overuse, underuse, or misuse?
  • Does it have strong financial impact?
  • Does it have potential to improve health care delivery and outcomes?
  • Has it recently undergone major changes?
  • Does it have proven and significant variation in quality of service among health care providers?
  • Is it considered high risk for patients?

ASPIRE S: Set goals for evaluation

The goal for performance evaluation should be clearly articulated by the evaluation team before the measurement process commences. Evaluation is typically targeted at improving more than one of the following domains: acceptability, accessibility, appropriateness, care environment and amenities, continuity, competence or capability, effectiveness, improving health or clinical focus, expenditure or cost, efficiency, equity, governance, patient-centeredness, safety, sustainability, timeliness, and utilisation of care.

ASPIRE P: Performance indicators

The evaluation team and researchers work collaboratively to identify performance indicators. A performance measure or indicator is used to assess a health care structure, process or an outcome. Structure measures evaluate the means and resources used by the health system to deliver allied health services. Process measures assess what the allied health practitioner did for the patient and how well it was done. Outcome measures examine the change in patients’ health status which can be attributed to the effectiveness of the treatment. 

Performance measures are based on standards of care, which can be evidence-based or, in the absence of scientific evidence, determined by an expert panel of health practitioners. They must be comprehensible, valid, reliable, reproducible, discriminative and easy to use. Performance evaluation typically involves multiple measures, rather than a single performance measure, in order to obtain a comprehensive assessment of performance. 
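One lightweight way to keep track of whether a candidate indicator set covers all three dimensions described above is to tag each measure with its type. This is a hypothetical sketch; the example indicators are invented for illustration and are not drawn from the ASPIRE framework.

```python
from enum import Enum

# The three indicator types described above (structure, process, outcome).
class IndicatorType(Enum):
    STRUCTURE = "structure"   # means and resources used to deliver the service
    PROCESS = "process"       # what was done for the patient, and how well
    OUTCOME = "outcome"       # change in the patient's health status

# Hypothetical indicators tagged with their type.
indicators = {
    "physiotherapists per 100 inpatient beds": IndicatorType.STRUCTURE,
    "proportion of patients assessed within 48 hours": IndicatorType.PROCESS,
    "change in functional independence score at discharge": IndicatorType.OUTCOME,
}

# Checking type coverage shows whether the set spans all three dimensions,
# consistent with using multiple measures rather than a single one.
covered = {t for t in indicators.values()}
print(f"Types covered: {sorted(t.value for t in covered)}")
```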

ASPIRE I: Information sources

The evaluation team reflects on the data they need to collect to measure structure, process or outcomes. Common sources of information or performance data are medical records, administrative data, and patient surveys. There may also be other information systems that can be sourced to obtain information such as incident reporting systems, documentation on clinical and professional supervision, and staff feedback. Additional data collection may be undertaken if required. The evaluation team should map the identified performance measures to the information sources. 
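The mapping of measures to information sources can be recorded as simply as a table of measure-to-source pairs. The sketch below is illustrative only: the measure names are hypothetical, while the source names echo those listed above.

```python
# Hypothetical sketch of mapping performance measures to information sources.
# Measure names are invented; source names follow the ones discussed above.
measure_to_sources = {
    "waiting time from referral to first appointment": ["administrative data"],
    "documentation of treatment goals": ["medical records",
                                         "clinical supervision records"],
    "patient-reported satisfaction with care": ["patient surveys"],
    "adverse events during treatment": ["incident reporting system",
                                        "medical records"],
}

# A measure with no mapped source flags a gap where additional
# data collection would be required.
unmapped = [m for m, s in measure_to_sources.items() if not s]
print(f"{len(measure_to_sources)} measures mapped; {len(unmapped)} without a source")
```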

ASPIRE R: Report results

The evaluation team and researchers collaboratively analyse the data obtained from various information systems. Descriptive and inferential statistics may be required for quantitative data and thematic analysis for qualitative data. Presentation of results and findings should be concise, easy to understand and tailored to the needs of the stakeholders (e.g. clients, allied health practitioners, managers, human resources department). Performance evaluation reports typically include a combination of text, tables and graphs or charts.
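For quantitative performance data, the descriptive summary feeding a report can be as simple as the following sketch. The waiting-time figures are invented for illustration and do not come from any ASPIRE site.

```python
import statistics

# Hypothetical sketch: descriptive statistics for one quantitative
# performance measure (waiting time in days; values are invented).
waiting_times = [12, 15, 9, 22, 14, 30, 11, 16]

summary = {
    "n": len(waiting_times),
    "mean": round(statistics.mean(waiting_times), 1),
    "median": statistics.median(waiting_times),
    "sd": round(statistics.stdev(waiting_times), 1),
}

# A plain-text summary line is easy to paste into a stakeholder report.
print(f"Waiting time (days): n={summary['n']}, mean={summary['mean']}, "
      f"median={summary['median']}, sd={summary['sd']}")
```

Inferential statistics (e.g. comparing before and after a practice change) and thematic analysis of qualitative data would sit alongside a summary like this, tailored to the stakeholders named above.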

ASPIRE E: Evaluate the performance evaluation process and its outcomes

The evaluation team and researchers work collaboratively to evaluate the performance evaluation process and its outcomes. The evaluation will focus on the following: practice changes that have occurred as a result of the evaluation, how well the new model is accepted by staff and management, the extent to which the model has improved the quality of health care (i.e. health impact and quality of service), and what improvements can be made to the evaluation system that can facilitate its effective and sustainable uptake by allied health practitioners.


Development of the ASPIRE tool (Underpinning evidence)

The ASPIRE framework is based on a systematic review of the literature on performance evaluation systems, layered with a local snapshot of current practice in performance evaluation in South Australian health networks.  Figure 1 describes the approach.

Figure 1: Developmental process for ASPIRE tool


Key findings from the systematic review:

  • Performance measurement is an integral part of health care. Its primary aim is to measure the quality of health services, with the ultimate goal of improving health outcomes by stimulating improvements in health care. In addition to improving quality, there are other reasons for undertaking performance measurement, and these depend on the perspectives of different stakeholders. From a practitioner perspective, performance measurement provides feedback that can validate a clinician’s performance or trigger corrective action where poor performance is demonstrated. From a consumer perspective, performance measurement allows clients to participate in the care delivered to them. From an organisational perspective, performance measurement assists in meeting accreditation standards and facilitates human resources decisions. At the national level, performance measurement serves an important role in policy making.
  • There is no one-size-fits-all approach for performance measurement that can be recommended to all allied health care settings. There are, however, essential elements and key steps involved in performance measurement which include: prioritising clinical areas for measurement, setting goals, selecting the unit of measurement analysis, selecting performance measures, identifying sources of feedback, undertaking performance measurement, and reporting the results to relevant stakeholders.
  • Performance measurement is multi-dimensional. It requires information or data from more than one perspective to provide a rich assessment of performance. Fundamental to an effective performance measurement system is the establishment of strategic measurement goals which should be aligned to the practice or organisation’s values and strategies. It should involve measurements from multi and interrelated perspectives, depending on the level of health care system involved in the analysis. A multi-method strategy that captures various perspectives is therefore required in order to gather a comprehensive picture of health care quality and performance.
  • The quality of performance and clinical care is generally assessed using structure, process and outcome measures. Selection of appropriate measures, also known as quality indicators, is dependent on the level of health system being evaluated. Comprehensibility, validity, reliability, reproducibility, discrimination, and ease of use are quality requirements that guide the choice of performance measures. A variety of tools or methods to assess performance or quality of care have been reported in the literature.
  • While performance measurement is imperative to improving health care quality, there are barriers and challenges to its implementation, which include cost and time constraints, and conflicts between health care staff and their superiors. These need to be considered for effective uptake and sustainability of the performance measurement system.

Key findings from the survey:

  • The majority of survey respondents reported undertaking regular performance evaluation in their department. The primary drivers for conducting evaluation are related to improving the quality of health services, identifying areas for professional development, and meeting accreditation standards. Local practices are generally based on widely accepted methods and principles. 
  • Key areas for performance evaluation across professions include structure, process and outcomes. However, there is variability in the focus of evaluation for individual professions. Occupational therapists consider structure, process and outcomes equally important in performance evaluation. Physiotherapists deem process measures as the most important followed by outcome measures then structure measures. Social workers and psychologists believe that process and outcomes are equally considered in evaluation and focus less on structure measures. Radiographers, podiatrists and speech pathologists consider process measures more than outcome and structure measures. For the orthotist respondent, only outcomes are regarded as performance measures.
  • Performance evaluation is undertaken using a variety of approaches, mainly self-appraisal, direct observation of staff, and surveys of patient satisfaction. Other methods, such as standards-based audit, peer review and critical incident reporting, are also popular in allied health evaluation. Less commonly used methods are interviews, outcome measures, and chart-stimulated recall.
  • While all respondents value the importance of performance evaluation, the majority reported various challenges associated with the process. These include lack of time, lack of understanding of the process, limited funding, lack of managerial interest, personality differences, and lack of a standard framework to undertake the process. When asked about strategies that can potentially address these barriers, allied health managers believed that a formal process for evaluation and training provided to evaluators would be useful. Support from an external evaluator or allocating a position dedicated to performance evaluation were also identified as potential strategies.
  • Outcomes of performance evaluation for most professions lead to identification of professional development needs, identification of service gaps, determination of resource or funding allocation, and improvements in clinical services.

Pilot evaluation of ASPIRE

The ASPIRE framework was evaluated for its feasibility (i.e. acceptability, usefulness, appropriateness) in allied health practice evaluation. Three sites, representing a metropolitan rehabilitation hospital, a metropolitan acute tertiary hospital and a regional general hospital, participated in the pilot evaluation of ASPIRE. Each site undertook clinical performance evaluation over a 2-month period using the ASPIRE framework.

A 15-item survey questionnaire was developed and administered to examine the extent to which ASPIRE was considered useful, acceptable, and appropriate to allied health clinical practice evaluation, including overall satisfaction with the framework. Semi-structured group interviews were also undertaken to validate and complement the results of the survey.

Overall, the participants were positive about ASPIRE, and felt that performance evaluation using a structured framework was a worthwhile experience. The evaluation findings suggest that participants found ASPIRE a useful, appropriate, and easy to implement model for evaluating clinical performance in allied health, and that the current structure is acceptable and convenient to clinicians. They agreed that ASPIRE has addressed evaluation difficulties encountered in the past and identified the partnership with researchers as an effective strategy for encouraging allied health practitioners to evaluate performance. The participants reported that ASPIRE has improved their level of confidence and motivation to conduct evaluation. Time to gather evaluation information and difficulty in identifying performance indicators were described as barriers to performance evaluation. Strategies such as longer time for evaluation planning, face-to-face consultations with researchers (as opposed to teleconference), and a wider team involvement in the identification of performance indicators can potentially address these barriers.   



References

  1. Arnold, E & Pulich, M 2003, ‘Personality Conflicts and Objectivity in Appraising Performance’, The Health Care Manager, vol. 22, no. 3, pp. 227-232.
  2. Bannigan, K 2000, ‘To Serve Better: Addressing Poor Performance in Occupational Therapy’, British Journal of Occupational Therapy, vol. 63, no. 11, pp. 523- 528. 
  3. Bente, J 2005, ‘Performance measurement, health care policy, and implications for rehabilitation services’, Rehabilitation Psychology, vol. 50, no. 1, pp. 87-93.
  4. Beyan, O & Baykal, N 2012, ‘A knowledge based search tool for performance measures in health care systems’, Journal of Medical Systems, vol. 36, pp. 201-221.
  5. Chandra, A & Frank, Z 2004, ‘Utilization of performance appraisal systems in health care organizations and improvement strategies for supervisors’, The Health Care Manager, vol. 23, no. 1, pp. 25-30.
  6. Colton, D 2007, ‘Strategies for Implementing Performance Measurement in Behavioural Health Care Organisations’, Journal of Health Management, vol. 9, no. 3, pp. 301-316.
  7. Derose, S & Petitti, D 2003, ‘Measuring quality of care and performance from a population health care perspective’, Annual Review of Public Health, vol. 24, pp. 363–84.
  8. Doherty, J & DeWeaver, K 2004, ‘A Survey of Evaluation Practices for Hospice Social Workers’, Home Health Care Services Quarterly, vol. 23, no. 4, pp. 1-13.
  9. Donabedian, A 1988, ‘The quality of care: how can it be assessed?’, Journal of the American Medical Association, vol. 260, pp. 743-748.
  10. Geddes, L & Gill, C 2012, ‘Annual performance appraisal: One organization’s process and retrospective analysis of outcomes’, Healthcare Quarterly, vol. 15, no. 1, pp. 59-63.
  11. Geraedts, M, Selbmann, H & Ollenschlaeger, G 2003, ‘Critical appraisal of clinical performance measures in Germany’, International Journal for Quality in Health Care, vol. 15, no. 1, pp. 79-85. 
  12. Gilmore, L, Morris, J, Murphy, K, Grimmer-Somers, K, Kumar, S 2011, ‘Skills escalator in allied health: a time for reflection and refocus’, Journal of Healthcare Leadership, vol. 3, pp. 53-58.
  13. Gregory, R 2000, ‘Performance appraisal: a primer for the lower level health care and rehabilitation worker’, Journal of Health and Human Services Administration, vol. 22, no. 3, pp. 374-378.
  14. Hamilton, K, Coates, V, Kelly, B, Boore, J, Cundell, J, Gracey, J, McFetridge, B, McGonigle, M, Sinclair, M 2007, ‘Performance assessment in healthcare providers: a critical review of evidence and current practice,’ Journal of Nursing Management, vol. 15, pp. 773-791.
  15. Harp, S 2004, ‘The measurement of performance in a physical therapy clinical program: A ROI approach’, The Health Care Manager, vol. 23, no. 2, pp.110-119.
  16. Johansen, B, Mainz, J, Sabroe, S, Manniche, C & Leboeuf-Yde, C 2004, ‘Quality Improvement in an Outpatient Department for Subacute Low Back Pain Patients’, Spine, vol. 29, no. 8, pp. 925–931.
  17. Jolley, G, 2003, ‘Performance measurement for community health services: opportunities and challenges’, Australian Health Review, vol. 26, no. 3, pp. 133-138.
  18. Kilbourne, A, Keyser, D & Pincus, H 2010, ‘Challenges and Opportunities in Measuring the Quality of Mental Health Care’, Canadian Journal of Psychiatry, vol. 55, no. 9, pp. 549–557.
  19. Koch, J, Breland, A, Nash, M & Cropsey, K 2011, ‘Assessing the Utility of Consumer Surveys for Improving the Quality of Behavioral Health Care Services’, The Journal of Behavioral Health Services & Research, vol. 38, no. 2, pp. 234-248.
  20. Kollberg, B, Elg, M, Lindmark, J 2005, ‘Design and implementation of a performance measurement system in Swedish Health Care Services: a multiple case study of 6 development teams’, Quality Management in Health Care, vol. 14, no.2, pp. 95-111.
  21. Koss, R, Hanold, L & Loeb, J 2002, ‘Integrating Healthcare Standards and Performance Measurement’, Disease Management and Health Outcomes, vol. 10, no. 2, pp. 81-84.
  22. Loeb, J 2004, ‘The current state of performance measurement in health care’, International Journal for Quality in Health Care, vol. 16, sup. 1, pp. i5–i9.
  23. Longenecker, CO, Fink, LS 2001, ‘Improving management performance in rapidly changing organizations’, Journal of Management Development, vol. 20, no. 1, pp. 7-18.
  24. Mainz, J 2003a, ‘Defining and classifying clinical indicators for quality improvement’, International Journal for Quality in Health Care, vol. 15, no. 6, pp. 523-530.
  25. Mainz, J 2003b, ‘Developing evidence-based clinical indicators: a state of the art methods primer’, International Journal for Quality in Health Care, vol. 15, suppl. 1, pp. i5-i11.
  26. Manderscheid, R 2006, ‘Some Thoughts on the Relationships between Evidence Based Practices, Practice Based Evidence, Outcomes, and Performance Measures’, Administration and Policy in Mental Health and Mental Health Services Research, vol. 33, pp. 646–647.
  27. Mannion, R & Goddard, M 2002, ‘Performance measurement and improvement in health care’,  Applied Health Economics and Health Policy, vol. 1, no. 1, pp. 13-23.
  28. Mant, J 2001, ‘Process versus outcome indicators in the assessment of quality of health care’, International Journal for Quality in Health Care, vol. 13, no. 6, pp. 475-480.
  29. Marshall, M & Davies, H 2000, ‘Performance Measurement and Management of Healthcare Professionals’, Disease Management and Health Outcomes, vol. 7, no. 6, pp. 306-314.
  30. McLoughlin, V, Leatherman, S, Fletcher, M & Owen, J 2001, ‘Improving performance using indicators. Recent experiences in the United States, the United Kingdom, and Australia’, International Journal for Quality in Health Care, vol. 13, no. 6, pp. 455-62.
  31. Morris, J, and Grimmer, K 2013, ‘Non-Medical prescribing by Physiotherapists: issues reported in the current evidence’, Manual Therapy, (accepted for publication April 2013).
  32. Morris, J, Grimmer-Somers, K, Kumar, S, Murphy, K, Gilmore, L, Ashman, B, Perera, C, Vine, K, Coulter, C 2011, ‘Effectiveness of a physiotherapy-initiated telephone triage of orthopaedic waitlist patients’, Patient Related Outcome Measures, vol. 2, pp. 1-9.
  33. Novak, J and Judah, A 2011, Towards a health productivity reform agenda for Australia, Australian Centre for Health Research, Victoria.
  34. Nuti, S, Seghieri, C, Vainieri, M 2013, ‘Assessing the effectiveness of a performance evaluation system in the public health care sector: some novel evidence from the Tuscany region experience’, Journal of Management and Governance, vol. 17, pp. 59-69.
  35. Perrin, E 2002, ‘Some Thoughts on Outcomes Research, Quality Improvement, and Performance Measurement’, Medical Care, vol. 40, no. 6, pp. III89-III91.
  36. Purbey, S, Mukherjee, K & Bhar, C 2007, ‘Performance measurement system for healthcare processes’, International Journal of Productivity and Performance Management, vol. 56, no. 3, pp. 241-251.
  37. Roper, W & Mays, G 2000, ‘Performance measurement in public health: conceptual and methodological issues in building the science base’, Journal of Public Health Management and Practice, vol. 6, no. 5, pp. 66-77.
  38. Salvatori, P, Simonavicius, N, Moore, J, Rimmer, G & Patterson M 2008, ‘Meeting the challenge of assessing clinical competence of occupational therapists within a program management environment’, Canadian Journal Of Occupational Therapy, vol. 75, no. 1, pp. 51-60.
  39. Sibthorpe, B & Gardner, K 2007, ‘A conceptual framework for performance assessment in primary health care’, Australian Journal of Primary Health, vol. 13, no.2, pp. 96-103.
  40. Smith, PC, Mossialos, E, Papanicolas, I 2008, Performance measurement for health system improvement: experiences, challenges and prospects, World Health Organisation, Denmark.
  41. Stanhope, J, Grimmer-Somers, K, Milanese, S, Kumar, S, Morris, J 2012, ‘Extended scope physiotherapy roles for orthopaedic outpatients: an update systematic review of the literature’, Journal of Multidisciplinary Healthcare, vol. 5, pp. 37-45.
  42. Sund, R, Nurmi-Lüthje, I, Lüthje, P, Tanninen, S, Narinen, A & Keskimäki, I 2007, ‘Comparing Properties of Audit Data and Routinely Collected Register Data in Case of Performance Assessment of Hip Fracture Treatment in Finland’, Methods of Information in Medicine, vol. 46, pp. 558–566.
  43. Tawfik-Shukor, A, Klazinga, N, Arah, O 2007, ‘Comparing health system performance assessment and management approaches in the Netherlands and Ontario, Canada’, BMC Health Services Research, 7:25.
  44. van der Geer, E, van Tuijil, H, Rutte, C 2009, ‘Performance management in healthcare: Performance indicator development, task uncertainty, and types of performance indicators’, Social Science & Medicine, vol. 69, pp. 1523-1530.
  45. Vasset, F, Marnburg, E, Furunes, T 2011, ‘The effects of performance appraisal in the Norwegian municipal health services: a case study’, Human Resources for Health, 9:22.
  46. Veillard, J, Champagne, F, Klazinga, N, Kazandjian, V, Arah, O, Guisset, A 2005, ‘A performance assessment framework for hospitals: the WHO regional office for Europe PATH project’, International Journal for Quality in Health Care, vol. 17, no. 6, pp. 487-496.