Leading Better Value Care Program
Commencing in 2017/18, the NSW Health system will refocus away from the traditional approach of measuring value in terms of volume/output relative to costs, towards measuring value in terms of the Institute for Healthcare Improvement Triple Aim: health outcomes, experience of care, and efficient and effective care (in relation to costs). In this context, health outcomes are defined as the outcomes that matter to patients. A key goal is to improve value for patients and, in doing so, unite the interests of all health care system stakeholders (e.g. patients and their families, NSW residents, clinicians, LHDs/SHNs, Pillars and the Ministry of Health (MoH)).
Leading Better Value Care (LBVC) seeks to identify and implement opportunities for delivering value-based care to the people of NSW. Opportunities are grouped into three categories: Clinical Initiatives, Commissioning and Contestability, and Workforce Capability. As immediate priorities, 13 initiatives have been selected which align to existing work efforts that have been “road-tested” to varying degrees across the system.
The nine clinical initiatives, each supported by a Pillar organisation (Agency for Clinical Innovation or the Clinical Excellence Commission), introduce new or improved models of care for:
- Management of Osteoarthritis (ACI)
- Osteoporotic Refracture Prevention (ACI)
- Local musculoskeletal service (ACI)
- Diabetes High Risk Foot Services (ACI)
- Inpatient Management of Diabetes Mellitus (ACI)
- Management of Chronic Heart Failure (ACI)
- Management of Chronic Obstructive Pulmonary Disease (ACI)
- Renal Supportive Care (End Stage Kidney Disease) (ACI)
- Adverse Events: Falls in Hospitals (CEC)
A key goal of the LBVC Program is to create shared priorities across the NSW health system so that the system works together to improve health outcomes, improve the experience of care and provide efficient and effective care. The main components of this approach include the following.
- The MoH will continue as system administrator, purchaser and manager and will articulate the priorities for NSW Health through Service Agreements with LHDs, SHNs and Pillars. Performance against delivery of the priorities will be monitored in line with the NSW Health Performance Framework.
- LHD/SHNs will determine implementation plans reflective of their local circumstances. The Pillars, as required, will support LHDs in a flexible and customisable manner to meet individual LHD needs.
- The LBVC Program initiatives will be evaluated through the Pillars’ Evaluation and Monitoring Plans. The primary objective is to assess the impact of the initiatives across the triple aim. As some monitoring measures are yet to be developed, measurement across the triple aim will evolve.
- A Measurement Alignment Framework is being developed to support:
- a systematic and strategic approach to measurement of the clinical initiatives
- system priorities and informed decision making
- consistency (e.g. in patient cohort definitions and in measures across the triple aim)
- streamlined collection, analysis and reporting
- development of the evaluation and monitoring process.
Presentation by Elizabeth Koff, NSW Health and Medical Research Exchange, 7 November 2016
Monitoring and evaluation approach for Leading Better Value Care Clinical Initiatives
Background and purpose
Evaluation will be an essential aspect of all Leading Better Value Care Clinical Initiatives. Rigorous evaluation will assess the quantum of benefits achieved for each program enabling informed decision making around investment, reinvestment and disinvestment.
This approach is consistent with addressing the findings of the NSW Auditor-General’s Report (November 2016), which notes that government decision-makers are not receiving enough information to make evidence-based investment choices, and that there is little assurance that the right programs are being evaluated. As priorities for NSW Health and the Pillar agencies, the clinical initiatives are key areas for evaluation, ensuring NSW meets its targets in shifting the focus of healthcare from volume to value by improving patient outcomes, patient and staff experience of care, and the efficiency and effectiveness of that care (referred to as the triple aim, which defines patient, staff and system benefits as the key focus).
The primary objective of evaluating the clinical initiatives is to examine their impact across the triple aim. This will focus on:
- getting clinical processes right, resulting in efficient care
- enhancing capacity and avoiding costs by accelerating key strategies that have demonstrated benefit for patients and the system, and identifying the appropriate sites for scale-up
- consolidating projects that are shown to improve patient experience and reported outcomes, enabling effective and efficient care.
The evaluation will necessarily be connected to the healthcare system so that evaluation results can inform effective development, monitoring and performance, and drive further improvements in the system. This will comprise:
- clear and comprehensive links to Roadmaps
- links to performance reporting systems
- defining common cohorts for each program, allowing comparison across sites to assess the quantum of impact and to benchmark against good practice
- feasible reporting frequencies that align with data availability.
To achieve these objectives, a single set of data is required, ensuring consistency and avoiding duplication. The data will be comprehensive enough to serve the various subsets of evaluation, including Roadmap milestone measures and performance reporting systems. A logical sequencing of evaluation, inclusive of feedback loops, will be planned in the development stage to ensure the above objectives can be met. Evaluation is therefore inherently linked to program design and, where feasible, will be a key focus from design and inception to ensure the appropriate focus, measures and linkages are in place.
Figure 1 shows an overview of evaluation sequencing.
To maximise the value of the results, all programs will undergo an evaluability assessment to gauge the readiness of sites to participate in program implementation and evaluation, and to enable inter-site comparisons. Evaluation planning will include the development of monitoring systems that systematically assess a program’s progress towards achieving outcomes. Monitoring measures will be based on financial reporting and implementation milestones, and will be assessed through the Roadmaps of the lead Pillar agency (ACI or CEC). Although it is essential to assess implementation strategies, underlying program theories, the extent to which a program is in place, and local contexts, the predominant focus of evaluation will be the impact of the programs, with the triple aim as the key underpinning. Monitoring will be one sub-set of evaluation. Performance reporting will be another sub-set and will use the same dataset; it will be reported through Service Compacts between the MoH and the participating Pillar agencies and through Service Agreements with Local Health Districts.
Monitoring reports will be provided to the Ministry of Health (MoH) through the Roadmaps processes, with improvement plans developed where required. Service Compacts will reflect the performance reporting requirements. Results of impact evaluations will be provided to the SEF to contribute to decision-making processes.
Figure 2 shows a high level overview of the monitoring and evaluation approach for the clinical initiatives.