Evaluation is an essential aspect of all Leading Better Value Care clinical initiatives. Rigorous evaluation will assess the quantum of benefits achieved for each initiative, enabling informed decision making around investment, reinvestment and disinvestment.
The clinical initiatives are key areas for evaluation, ensuring NSW meets its targets of focusing healthcare on value rather than volume by improving patient outcomes, the patient and staff experience of care, and the effectiveness and efficiency of that care.
The primary objective of evaluating the clinical initiatives is to examine their impact, focusing on the quadruple aim. This will take into account:
- getting clinical processes right, resulting in efficient care
- enhancing capacity and avoiding costs by accelerating key strategies that have demonstrated benefit for patients and the system, and identifying the appropriate sites for scale-up, and
- consolidating projects that are shown to improve patient experience and reported outcomes, enabling effective and efficient care.
The evaluation will necessarily be connected to the healthcare system to enable effective development, monitoring and performance, with evaluation results used to inform further improvements in the system. This will comprise:
- clear and comprehensive links to roadmaps
- links to performance reporting systems
- defining common cohorts for each program, allowing comparison across sites to assess the quantum of impact and to benchmark good practice
- feasible reporting frequencies that align with data availability.
To achieve these objectives, a single dataset is required to ensure consistency and avoid duplication. The data will be comprehensive enough to support the various subsets of evaluation, including Roadmap milestone measures and performance reporting systems.
A logical sequencing of evaluation, inclusive of feedback loops, will be planned in the development stage to ensure the above objectives can be met. Evaluation is therefore inherently linked to program design and, where feasible, will be a key focus from inception to ensure the appropriate focus, measures and linkages are in place.
Figure 1 shows an overview of evaluation sequencing.
To maximise the value of results, all programs will undergo an evaluability assessment to determine the readiness of sites to participate in program implementation and evaluation, and to enable inter-site comparisons. Evaluation planning will include the development of monitoring systems that systematically assess a program's progress towards achieving outcomes.
Monitoring measures will be based on financial reporting and implementation milestones, and will be assessed through the Roadmaps of the lead pillar agency (ACI, CEC or CINSW). While it is essential to assess implementation strategies, underlying program theories and local contexts to determine the extent to which a program is in place, the predominant focus of evaluation will be on the impact of the programs, with the quadruple aim (health outcomes that matter to patients, experiences of receiving care, experiences of providing care, and effectiveness and efficiency of care) as the key underpinning.
This monitoring will be a subset of evaluation. Performance reporting will be another subset, using the same dataset, and will be reported through Service Agreements between the Ministry of Health and the LHDs/SHNs or participating pillar agencies.
Monitoring results will be provided to the Ministry of Health through the Roadmap processes, with improvement plans developed where required. Service Agreements will reflect the performance reporting requirements. Results of impact evaluations will be provided to the Senior Executive Forum to contribute to decision-making processes.
Figure 2 shows a high level overview of the monitoring and evaluation approach for the clinical initiatives.