Monitoring and evaluation
We design evaluation frameworks that are proportionate, practical, and grounded in what matters to the people involved. From theory of change models to data collection instruments and synthesis reports, we help organisations track impact and feed findings back into program design in real time.
What monitoring and evaluation is and why it matters
Understanding impact is not just about counting outcomes — it is about learning what works and why. Monitoring and evaluation gives organisations the evidence they need to refine programs, demonstrate value, and make better decisions about where to invest next.
We design evaluation frameworks that are practical and proportionate, grounded in:
- Theory of change models that connect activities to long-term outcomes
- Real-time developmental evaluation that feeds learning back into delivery
- Mixed-method approaches combining quantitative data with lived experience
- Clear reporting that helps teams act on what they find
The result is evaluation that does not sit in a filing cabinet — it drives continuous improvement.
How we approach monitoring and evaluation
We believe evaluation should be proportionate, developmental, and designed to feed back into action. Our approach starts by understanding what success looks like for the people involved, then builds the tools and frameworks to track it.
What makes our M&E work distinctive:
- Collaborative design where program teams help shape what gets measured and how
- Proportionate methods matched to the scale and maturity of each program
- Developmental evaluation that surfaces insights in real time, not just at the end
- Practical outputs — frameworks, instruments, and reports that teams can actually use
We work closely with the people delivering programs, ensuring our evaluation tools strengthen rather than burden their practice.
From community services to university programs
We have partnered with organisations across local government, higher education, and community health to design evaluation that fits their context. Whether it is a council measuring the impact of a new service model, a university assessing a creative education toolkit, or a health organisation scaling a youth wellbeing program, we tailor our approach to what matters most.
Our monitoring and evaluation work spans:
- Community service performance frameworks and polling schedules
- Higher education program evaluation and creative impact assessment
- Health promotion evaluation with built-in data collection tools
- Cross-sector program logic and theory of change development
Each engagement is shaped by the specific program, but draws on patterns we have seen work across sectors.
When the City of Casey needed to know whether their new concierge service was improving customer experience, we designed a performance framework with clear metrics, a practical polling schedule, and dedicated roles to drive continuous improvement.
Building evaluation into program design from the start
The best time to think about evaluation is before a program launches — not after. When monitoring and evaluation is embedded from the beginning, organisations can track progress in real time, spot what needs adjusting, and build a credible evidence base as they go.
We help teams design evaluation into their programs from day one — creating logic models, identifying meaningful indicators, and setting up data collection processes that are realistic for the people doing the work. This means organisations are not scrambling to prove impact after the fact — they are learning and improving throughout.
When Monash University needed to understand whether their Portable Art Connections Toolkit was achieving its creative goals, we developed a theory of change, evaluation framework, and survey instruments that gave researchers clear evidence of impact.
Designing evaluation that people can actually use
The most rigorous evaluation framework is only useful if the people running the program can apply it. We design our tools with the end user in mind — whether that is a health promotion team scaling a mental health program, a council officer reporting to leadership, or a researcher analysing pilot data.
Our evaluation outputs are practical and actionable:
- Frameworks structured around clear modules that map to program phases
- Data collection tools tested with the people who will use them
- Synthesis reports that translate findings into specific recommendations
When Connect Health & Community needed to scale their youth mental health initiative Brain Bloom, we co-designed a toolkit with built-in evaluation tools so new facilitators could measure the program's impact in their own settings.
Evidence that drives better decisions
Across local government, higher education, and community health, our evaluation work has given organisations the evidence they need to make confident decisions. Whether that means refining a service model, securing ongoing funding, or scaling a program to new communities, we help teams move from assumptions to evidence.
We have designed performance frameworks that track service quality over time, built evaluation strategies that connect creative outputs to measurable outcomes, and created practical toolkits that embed evaluation into everyday program delivery.
Want to understand whether your program is making the impact you intended?