Appendix B: Glossary of key terms

Collective Impact

Collective impact is a collaborative approach to addressing complex social issues, consisting of five conditions: a common agenda; continuous communication; mutually reinforcing activities; backbone support; and shared measurement.


Community

By ‘community’ we mean local people and organisations that live, work or operate in a place. This can include residents, businesses, service providers, associations and other local groups.


Contribution

Assessing contribution involves determining whether the program contributed to, or helped to cause, the observed outcomes. Contribution reflects that in some cases the program is not the only cause of a change, but is part of the cause. In such cases, evaluators say that the program contributed to the change.

Cost-Benefit Analysis (CBA)

CBA is a widely used method for estimating all the costs involved in an initiative and the benefits likely to be derived from it. It can provide a basis for comparison with other, similar initiatives.
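The arithmetic at the core of a CBA can be sketched in a few lines. All figures below are invented purely for illustration; the guide does not prescribe particular costs, benefits or time horizons.

```python
# Hypothetical three-year program; every dollar figure is invented for illustration.
costs = [200_000, 150_000, 150_000]    # estimated program cost per year ($)
benefits = [50_000, 250_000, 400_000]  # monetised benefit per year ($)

total_costs = sum(costs)        # total estimated costs
total_benefits = sum(benefits)  # total monetised benefits

net_benefit = total_benefits - total_costs          # positive => benefits exceed costs
benefit_cost_ratio = total_benefits / total_costs   # >1 => each $1 spent returns more than $1

print(net_benefit)         # 200000
print(benefit_cost_ratio)  # 1.4
```

A full CBA would normally also discount future costs and benefits to present values; that refinement is omitted here to keep the sketch minimal.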

Cost-Effectiveness Analysis (CEA)

Cost-effectiveness analysis examines the relative costs and outcomes of one or more interventions, supporting a judgement about whether the same outcome could have been achieved through a less costly program design. It is used as an alternative to cost-benefit analysis.
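The comparison a CEA makes can be sketched as a cost-per-outcome calculation. The programs, costs and outcome counts below are hypothetical, not drawn from the guide.

```python
# Hypothetical interventions pursuing the same outcome; figures invented for illustration.
programs = {
    "Program A": {"cost": 300_000, "outcomes": 120},  # e.g. participants reaching a milestone
    "Program B": {"cost": 250_000, "outcomes": 80},
}

# Cost per unit of outcome: the lower the figure, the more cost-effective the design.
for name, p in programs.items():
    cost_per_outcome = p["cost"] / p["outcomes"]
    print(name, cost_per_outcome)
# Program A 2500.0
# Program B 3125.0
```

Note that, unlike CBA, the outcome here is counted rather than monetised, which is why CEA suits outcomes that are hard to express in dollars.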

Data sovereignty

Data sovereignty refers to the right of Indigenous people to exercise ownership over Indigenous Data. Ownership of data can be expressed through the creation, collection, access, analysis, interpretation, management, dissemination and reuse of Indigenous Data.

Developmental evaluation

This type of evaluation supports the development and creation of initiatives. It is useful in innovative, complex and uncertain contexts, providing real-time feedback to guide decision making and practice.


Evaluation

The systematic process of collecting and synthesising evidence to determine the merit, worth, value or significance of an initiative to inform decision-making.

Evaluation principles

Evaluation principles outline the approach to evaluation that we put forward as being relevant and viable for PBAs.


Effectiveness

The extent to which an initiative or project meets its intended outputs and/or objectives, and/or the extent to which a difference is made. At the level of the purpose described in an entity’s corporate framework, for example, effectiveness is the extent to which the purpose is fulfilled and the intended benefits are provided. At the level of an activity, it is the extent to which the activity makes the intended contribution towards a specific purpose.

Formative evaluation

Refers to evaluation conducted to inform decisions about improvement. It can provide information on how the program might be developed (for new programs) or improved (for both new and existing programs). It is often done during program implementation to inform ongoing improvement, usually for an internal audience. Formative evaluations use process evaluation, but can also include outcome evaluation.


Impact

The ultimate difference or net benefit made by an intervention (usually longer term). It refers to measures of change that result from the outputs being completed and outcomes being achieved. Compared to the combined outcome of activities contributing to a purpose, impacts are measured over the longer term and in a broader societal context.


Indicator

An indicator is a simple statistic recorded over time to inform people of changing trends.


Learning

The translation of findings from data to improve and develop initiatives as they are being implemented. Strategic and adaptive learning involves turning findings from data into action. Data can come in many forms, including monitoring and evaluation data, population indicators and findings from research studies.

Mixed methods

Research or evaluation that uses both quantitative and qualitative data collection and methods.


Monitoring

Monitoring refers to the routine collection, analysis and use of data, usually internally, to track how an initiative’s previously identified activities, outputs and outcomes are progressing.

Participatory evaluation

Participatory evaluation is an approach that involves the stakeholders of a program, initiative or policy in the evaluation process. This involvement can occur at any stage of the evaluation process, from the evaluation design to the data collection and analysis and the reporting of the study.


Place

By ‘place’ we mean a geographical area that is meaningfully defined for our work.

Place-based approach (PBA)

‘Place-based approaches’ target the specific circumstances of a place and engage local people from different sectors as active participants in development and implementation. They can happen without government, but, when we are involved, they require us to share decision-making with community to work collaboratively towards shared outcomes.

Place-based initiative

The program, organisation or group based in the community, working as part of a place-based approach.


Power

By ‘power’ we mean the ability to control, influence or be accountable for decisions and actions that affect an outcome throughout the design, implementation and evaluation of programs or initiatives. The systems and structures that produce or reinforce power are complex, and shifting them is difficult.

Process evaluation

Evaluation focused on improving understanding of the activities delivered as part of a project and assessing whether they have been implemented as planned.

Program logic

A visual depiction of the program theory and logic behind how activities lead to outcomes. It is usually represented as a diagram that shows a series of causal relationships between inputs, activities, outputs, outcomes and impacts.


Qualitative data

Information or observations that emphasise narrative rather than numbers. Qualitative inquiry involves capturing and interpreting the characteristics of something to reveal its larger meaning.


Quantitative data

Information represented numerically, including as a number (count), grade, rank, score or proportion. Examples are standardised test scores, average age, the number of grants during a period or the number of clients.


Report

To give a spoken or written account of something that one has observed, heard, done, or investigated.


Rubric

An attempt to communicate expectations of quality around a task. In many cases, scoring rubrics are used to define consistent criteria for grading or scoring. Rubrics allow all stakeholders to see the evaluation criteria (which can often be complex and subjective).

Summative evaluation

Refers to evaluation to inform decisions about continuing, terminating or expanding a program.

It is often conducted after a program is completed (or well underway) to present an assessment to an external audience. Although summative evaluation is generally reported once the program has been running long enough to produce results, it should be initiated during the program design phase.


Outcomes

Clear statements of the targeted changes or results expected from the initiative.

Social Return on Investment (SROI)

SROI uses a participatory approach to identify benefits, especially those that are intangible (social factors) and difficult to monetise.
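The headline SROI figure is a ratio of monetised social value to investment. A minimal sketch with invented numbers follows; the participatory work of identifying and valuing intangible benefits happens before this arithmetic, and the investment, values and discount rate below are all hypothetical.

```python
# Hypothetical: $100,000 invested, monetised social value of $40,000/year for 3 years.
investment = 100_000
yearly_social_value = [40_000, 40_000, 40_000]
discount_rate = 0.05  # illustrative rate for converting future value to present value

# Discount each year's social value back to a present value, then divide by investment.
present_value = sum(v / (1 + discount_rate) ** (t + 1)
                    for t, v in enumerate(yearly_social_value))
sroi_ratio = present_value / investment
print(round(sroi_ratio, 2))  # 1.09 => roughly $1.09 of social value per $1 invested
```

The ratio only summarises the analysis; the credibility of an SROI rests on how the intangible benefits were identified and monetised with stakeholders.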

Systems change

Systems are composed of multiple components of different types, both tangible and intangible. They include, for example, people, resources and services, as well as relationships, values and perceptions. Systems exist in an environment, have boundaries, exhibit behaviours and are made up of interdependent and interconnected parts, causes and effects. Social systems are often complex and involve intractable, or ‘wicked’, problems.

Theory of change

An explicit theory of how the intervention causes the intended or observed outcomes. The theory includes hypothesised links between (a) the intervention requirements and activities, and (b) the expected outcomes. Theory of change is often used interchangeably with program theory.

Value for money

Value for money is a judgement based on the costs of delivering programs, the effectiveness of the outcomes and the equity of delivery to participants.