AI & Agentic Systems
We help organisations navigate the shift toward AI and agentic systems with confidence. Our human-centred approach covers everything from mapping AI opportunities to designing responsible interactions and governance frameworks — so that technology decisions are grounded in ethics, accountability, and genuine user value.
What AI and agentic systems are and why they matter
AI and agentic systems are technologies that can perceive, reason, and act — increasingly with limited human oversight. From generative AI assistants to autonomous decision-making agents, these systems are transforming how organisations operate and serve their communities.
But with growing capability comes growing risk. The organisations getting AI right are the ones asking harder questions early:
- Where can AI genuinely improve outcomes for people — not just efficiency metrics?
- What level of autonomy is appropriate, and where must humans stay in the loop?
- How do we build trust with the communities affected by these systems?
- What governance and accountability structures need to be in place before we go live?
How we approach AI and agentic systems
We bring human-centred design to a field too often driven by technical possibility rather than human need. Our work starts by understanding the people and contexts an AI system will affect — then works backward to define what the technology should do and how much autonomy it should have.
This means we help you:
- Map opportunities where AI can create real value, not just automate for the sake of it
- Design responsible interactions between people and AI, with appropriate transparency and control
- Build governance frameworks that give leadership confidence and communities trust
- Navigate ethical complexity in sectors where errors have serious consequences
From healthcare to government to financial services
We work on AI and intelligent systems in sectors where the stakes are highest — where mistakes affect real people in real ways. This includes healthcare settings where AI supports clinical decision-making, government services where automated systems determine access and entitlements, and financial services where algorithms assess risk.
Our approach is built for these high-consequence environments. We help teams think through not just what an AI system can do, but what it should do — and what safeguards are needed to protect the people it touches.
When UTS needed to bring the public into a complex conversation about facial recognition ethics, we designed an interactive tool that makes the technology's implications tangible — helping people form their own views on where new legal protections are needed.
Designing for autonomy, accountability, and trust
The hardest question in agentic systems isn't whether to build them — it's deciding how much autonomy they should have. An AI agent that schedules meetings is a fundamentally different proposition from one that triages patient referrals or matches researchers with funding.
We help organisations define appropriate levels of autonomy for each context, design the accountability structures around them, and create experiences where people understand what the system is doing and why. This is where human-centred design meets responsible AI — not as an afterthought, but as the foundation.
When the University of Melbourne wanted to help researchers discover funding they'd otherwise miss, we designed and built an intelligent matching tool — demonstrating how AI can genuinely expand opportunity when grounded in a clear understanding of user needs.
The impact of human-centred AI
Our AI work helps organisations move from hype to implementation — responsibly. We've designed AI-assisted workflows, built governance frameworks that gave leadership confidence to proceed, and created interaction patterns that earned the trust of the people affected.
The result is AI that works for people, not just on them. Systems that are transparent, accountable, and genuinely useful — delivering measurable impact while respecting the communities they serve.