Evaluation approach

2.1 Evaluation – purpose, role and scope

This evaluation is a requirement of Recommendation 87 of the Royal Commission. The evaluation findings are intended to inform and improve policy and drive system improvement, making the service system more responsive to the needs of Victoria's diverse community.

The evaluation was conducted by Deloitte Access Economics, with the Social Research Centre supporting the qualitative research with people who experience and use violence.

The evaluation objectives were to determine whether the funded activities:

  • were implemented according to plan
  • achieved their stated objectives
  • met the needs of the target cohort and victim/survivors to a greater extent than existing programs
  • presented a more effective service response. [1]

The evaluation will also help inform future funding decisions, and therefore aligns with the lapsing program guidelines stipulated by the Department of Treasury and Finance.

The evaluation commenced in September 2018, with the first (process) data collection phase occurring in April – June 2019 and the second (outcome) data collection phase in August – October 2019. Data collection was preceded by an extensive period of evaluation planning, including gaining ethics approval from the Australian National University Human Research Ethics Committee (ANU HREC). The MBCP group work element was out of scope for this evaluation.

The evaluation involved two phases:

  • Interim (process) evaluation – reviewed implementation of the funded trials. This part of the evaluation considered whether the trials were being delivered at the standard and volume outlined in the service agreement, and whether they were acceptable and accessible to their target cohorts. It also considered whether the programs were achieving their desired short-term outcomes.
  • End of program (impact) evaluation – assessed the extent to which the funded trials met the needs of the target cohorts and achieved their desired outcomes.

To inform the evaluation and its key lines of enquiry, a series of evaluation questions was developed. These included both process and outcome evaluation questions. The evaluation questions considered the appropriateness, effectiveness and efficiency of the initiatives.

Table 2.1 categorises the process evaluation questions under one of the three evaluation domains (appropriateness, effectiveness, efficiency). Questions were developed based on those outlined in the Request for Proposal, the Department of Treasury and Finance’s Lapsing Program Evaluation guidelines, and advice from Deloitte Access Economics. The questions were further refined following a workshop with a selection of service providers of the trial programs held in November 2018. Questions taken from the Mandatory Requirements for Lapsing Program Evaluation document are italicised. This evaluation is not a Lapsing Program Evaluation, but does incorporate the Department of Treasury and Finance’s Lapsing Program Evaluation questions.

Table 2.1: Evaluation questions

Process Evaluation Questions
Appropriateness/Justification

What is the evidence of continued need for the initiatives and role for government in delivering the initiatives? (P1)

Have the initiatives been implemented as designed? (P2)

How are the initiatives innovative and contributing to best practice? (P3)

Effectiveness

Are there early positive signs of change that might be attributable to the initiatives? (P4)

To what extent are the outputs being realised? (P5)

Have people who use violence and people who experience violence responded positively to the program, including enrolment, attendance/retention and satisfaction? (P6)

What are the barriers and enablers to effective referral of participants? (P7)

What governance and partnership arrangements have been established to support the implementation of the initiatives, and are these appropriate? (P8)

Do the program workforces have a clear idea of their roles and responsibilities? (P9)

What components of the model are perceived to be the most valuable? (P10)

What improvements to the service model could be made to enhance its impact? (P11)

Have there been any unintended consequences, and if so, what have these been? (P12)

Efficiency

Has the department demonstrated efficiency in relation to the establishment and implementation of the programs? (P13)
Impact Evaluation Questions
Appropriateness/Justification

Are the programs responding to the identified need/problem? (I1)

What are the design considerations of the program to support scalability? (I2)

Effectiveness

Have the program inputs, activities and outputs led to the desired change mapped out in the program logic? (I3)

To what extent have people who use violence and people who experience violence responded positively to the program, including enrolment, attendance/retention and satisfaction? (I4)

What are the drivers for effective participant engagement in the programs? Does this differ according to the different cohorts? (I5)

What is the impact of the program on victims/survivors’ perceptions of safety? (I6)

What are the barriers and facilitators to the programs being integrated into the broader service system? (I7)

What impact have the programs had on the management of risk associated with this cohort? (I8)

What impact have the programs had on referral pathways and information transfer between community services and relevant authorities? (I9)

What impact have the programs had on the confidence, knowledge and skill of the case management and service delivery workforces in supporting the target cohort in the community? (I10)

Are key stakeholders, including the program workforces, supportive of the model? (I11)

What would be the impact of ceasing the programs (for example, service impact, jobs, community) and what strategies have been identified to minimise negative impacts? (I12)

Efficiency

Have the programs been delivered within their scope, budget and expected timeframe, and in line with appropriate governance and risk management practices? (I13)

Does the initial funding allocated reflect the true cost required to deliver the programs? (I14)

2.2 Indicators of program effectiveness

To address each of the evaluation questions, a series of performance indicators was identified. These are presented in Appendix A. In addition to mapping each performance indicator to an evaluation question, the measure and data source(s) required for each indicator are provided.

Evaluation findings are strengthened through multiple sources of evidence (i.e. triangulation and validation of results). For each evaluation question, multiple performance indicators from various data sources were therefore collected to provide a broad range of perspectives. Where practical, both quantitative and qualitative data were used.

2.3 Ethics approval

Ethics approval for the evaluation was granted by the ANU HREC through three separate ethics applications:

  • a low risk application, for data collection with providers, referral organisations, peak bodies and government employees.
  • a high risk application, for data collection involving people who use and experience violence.
  • an application for data collection involving Aboriginal and Torres Strait Islander participants.

Gaining approval from the ANU HREC necessitated extensive consultation with Aboriginal stakeholders, including the Dhelk Dja Priority 5 sub-working group, who reviewed the evaluation approach and subsequent reports [2].

2.4 Data collection

Data collection involved a mix of primary and secondary sources, as summarised below. Further detail is provided in Appendix B.

2.4.1 Primary data sources

Primary data sources included both qualitative interviews and a data collection tool, as described below:

  • Stakeholder interviews – consultations with non-clients, including individual providers, FSV and DHHS representatives, coordination and referral staff, and advisory and peak bodies.
  • Client interviews – a total of 87 interviews were conducted with program participants, both face-to-face and by telephone. The sampling and recruitment approach is outlined in Appendix C.
  • Service provider data collection tool – to address gaps in data available from the Integrated Reports and Information System (IRIS), the data management system used by FSV/DHHS for family violence programs, data was sought directly from service providers through a data collection tool. For each program participant and victim survivor, the tool captured demographic, referral and outcome information.

The limitations of these data are discussed in Section 2.5. An evaluation readiness tool was developed to understand the data being collected by all providers, to inform the preferred approach for recruiting people who use violence and people who experience violence for primary data collection, and to identify any planned or current evaluation activity being undertaken by providers.

2.4.2 Secondary data sources

Two secondary sources of data were used to inform the analysis in this report:

  • FSV/DHHS data – including program and provider details (e.g. program duration, anticipated caseloads, recruitment approach, internal evaluation details, brokerage data and governance terms of reference); deidentified participant information from the DHHS IRIS case management system; and other documentation provided by service providers, such as grant applications and acquittal reports.
  • Literature scan – a literature scan focused on best practices in case management and interventions for perpetrators of family violence was conducted.

2.5 Limitations of the research

The evaluation encountered limitations relating to sample size and composition, participant eligibility criteria, and provider data collection and analysis. Findings presented in this report should be considered in the context of these limitations.

2.5.1 Sample size and composition

Firstly, the findings should be interpreted in the context of the overall sample composition. People who have experienced violence were difficult to engage in the research, with only 18 participants interviewed (compared with 69 people who have used violence). This makes it difficult to corroborate the feedback from people who have used violence against that of people who have experienced violence. It is an important limitation, as people who have experienced violence are considered to have a more objective point of view, particularly in relation to observing any outcomes.

The qualitative research is not intended to provide a representative overview of the population, and thus, findings should not be generalised.

2.5.2 Participant eligibility and identification

Participant recruitment was guided by a set of criteria designed to uphold the safety of participants and researchers, while also ensuring minimal disruption to participant engagement in services. These criteria reduced the pool of participants eligible to take part in the qualitative research. For example, one criterion was that people who used violence were only eligible to participate if the affected family member was engaged by a family safety contact or specialist family violence service, in order to manage any potential risk arising from the interviews. This criterion greatly reduced the number of available participants, and may explain the lower number of people experiencing violence participating in the interviews compared to people who use violence.

The recruitment of participants via service providers is an important mechanism for reducing and mitigating risk. In particular, it ensures that both members of a couple are not interviewed, and that service providers can safeguard the person experiencing violence. It does, however, introduce the potential for a biased sample: providers may have put forward only participants who they thought would reflect positively on the service, and participants who were more engaged in the service may have been more likely to volunteer for the research. This potential risk was mitigated by the approaches adopted by the Social Research Centre, including provision of a ‘recruitment pack’ to providers and regular check-ins with providers regarding the process. These mitigation strategies were approved by the Human Research Ethics Committee.

2.5.3 Data collection tool

There are limitations with the data received from providers via the data collection tool. Many of the tools submitted by providers had substantial gaps in content. This was not unexpected, as the tool was a new instrument and providers were still establishing organisational-level mechanisms for data collection. Some of these limitations were rectified between phase one (data provided to the evaluators in July 2019) and phase two (data provided to the evaluators in September 2019): additional training was provided to emphasise the need to complete all fields (rather than leave blanks) and to explain how to interpret particular fields, such as referrals. Despite some improvement between phase one and phase two, there were still significant gaps in the data, and further work is needed to ensure data is consistently recorded by providers moving forward.

These gaps do, however, make the data unsuitable for drawing robust conclusions on program outcomes at this point in time, or for making any substantiated claims or comparisons at a cohort level. For the data collected on participant outcomes in particular, there are significant gaps in exit data, and in equivalent data for people who experience violence, against which valid comparisons with the entry data could be made.

2.5.4 Time frame

It is recognised that changing behaviour can be a long and complex process that can require multiple interventions. This evaluation collected data about people who used violence who had received one of the interventions within 12 months of the evaluation commencing. As a result, the evaluation was not able to capture any long-term or longitudinal data to determine the effectiveness of the programs over a longer timeframe.


[1] Request for Quote - Evaluation of new community-based perpetrator interventions and case management trials, Department of Health and Human Services

[2] While an important process, these additional activities meant that interviews with Aboriginal participants were delayed during the process phase of the evaluation. The additional consultation required to gain ethics approval for culturally diverse clients resulted in similar restrictions. As a result, these cohorts received one round of interviews over an extended period of time.