Newsletters · October 7th, 2025
Generative AI & Work-slop

As I write this, news has emerged that Deloitte Australia have been caught red-handed using a large language model to produce a $439,000 government report that contained citations for academic studies that don't exist, and referenced a made-up court judgement. After months of denials, they're now refunding part of the fee.
The ultimate irony? This was a report intended to examine the failures of robodebt's successor system—itself part of one of the worst cases globally of automated injustice gone wrong.
This is textbook "work-slop"—what Stanford researchers call AI-generated content that looks professional, sounds confident, but ultimately has no substance and wastes everyone's time. According to their study, 40% of workers received work-slop in the last month, costing nearly two hours per incident to fix.
There are any number of ethical concerns around the use of LLMs—from the power needed to keep them running, through to their various applications across sectors and society. But knowledge workers using them to shortcut 'work products' is undoubtedly one of the drivers of their incredible uptake.
Everyone has a choice to make in the use of these tools. We can be pilots—directing them purposefully towards valuable outcomes—or passengers, hitting enter on prompts and passing along the mess.
But before we even get to using these tools, we need to create space for communities to shape how they should and shouldn't be used. This month's newsletter presents two different approaches to ethical AI engagement.
The first shows how we're working with communities to explore facial recognition technology through speculative scenarios and playful simulations—not to implement the tech, but to gather public input that informs protections and sets the guardrails. It's about creating conversations, ensuring those affected have a voice in how these systems might impact their lives.
The second explores how AI image generation tools can be "piloted" by designers to jump-start creative exploration and support collaboration. Not as a replacement for human work, but as a way to enable faster learning and shared discovery.
Both examples share a commitment to transparency and human oversight. Whether we're facilitating public dialogue about AI ethics or using AI tools in our practice, the difference between work-slop and meaningful work isn't the technology—it's the intention, oversight, and mindset we bring to it.
