Paper Giant

Interviews · November 25th, 2025

Dr Robbie Fordyce on Democracy, Disinformation, and Digital Guardrails

Dr Chris Marmo, CEO and Co-Founder

Traditional approaches to technology regulation ask for individual literacy and personal responsibility. But what if the most dangerous aspect of AI isn't that it replaces workers, but that it replaces the very concept of who we compete with for work?

Dr Robbie Fordyce, a self-described "less toxic tech bro" at Monash University, examines emerging technologies at precisely the moments they reshape society. Fresh from presenting evidence to the parliamentary inquiry into Australia's 2025 federal election, he explores how artificial intelligence fundamentally restructures everything from democratic discourse to labour relations — not by replacing humans, but by replacing the very idea of what we compete against.

How do you describe yourself to people who ask?

Dr Robbie Fordyce: I'm a lecturer at Monash University researching digital technologies — especially if they're interactive, such as video games, or if they're emerging, such as AI or, oddly, 3D printing when it was just new.

The way I often talk about myself to other people is as a sort of "less toxic tech bro," which explains both the personal position I bring to this, which is generally reasonably left-wing and progressive, but also the diversity of things I'm prepared to actually stick my hands into.

You’ve been doing some long-term research on misinformation and elections that you recently presented at a Parliamentary inquiry. What did you learn?

Robbie: We've been researching this since 2019, testing the public hypothesis about Chinese government interference. In all our research, we've not seen anything we can identify as a government conspiracy from China or any other foreign nation.

What we see is disinformation starting in the English media sphere coming onto WeChat, intensifying, then going back out into the Australian media sphere with more profound effects. My favourite case was a rumour that the Pope's death would change Australia's election date. Completely false, but it spread across platforms until we got it to the AEC.

The platforms have a business model that massively benefits from disinformation. It draws people back, and it engenders conspiracy groups prepared to commit hours to scrolling. Think of it as a sand machine pumping out sand at a high pace, rather than individual grains.

If every communicative act is a possible problem source, how is the AEC going to manage that detection problem? The platforms make money off disinformation. If they're prepared to profit, they should be prepared to pay to monitor disinformation more carefully.

Have you learned much about Generative AI in these types of campaigns?

Robbie: We haven't seen significant AI generating disinformation yet. But I think we'll see massified campaigns like India's last election — AI generating meme content in huge volumes where recirculation potential is high.

Rather than nuanced campaigns that travel far, it's a "logic of virality": you produce a volume of content around particular themes to keep talking points or concerns active in the public consciousness, then leave the heavy lifting of recirculation to the community and platform accounts. AI is useful for generating content with great degrees of variety that can be posted to platforms like Instagram or Facebook.

You’re about to start some research on the impact of AI on knowledge workers. What are you interested in finding out?

Robbie: I don’t think we can measure AI's impact purely from traditional business metrics like revenue and subscriptions.

I’m interested in understanding what bargaining power workers have when the thing they're competing with is, to all intents and purposes, an automated slave.

Marx's "reserve army of labour" — the mass of unemployed people — is who you compete with for jobs. When applying for work, your employer decides: should I continue bargaining with you, or go back to the reserve army to find someone else?

Automation has done this before, but now it's targeting people who've never dealt with this — middle to upper-class white-collar workers. It's a big psychic impact on young people entering the job market.

We see executives coming in saying they need to implement this technology because everyone's doing it, then almost their entire workforce pushes back because they're terrified about what it means for them.

With AI, before it replaces anyone, it replaces that concept. You're no longer competing with another living person — you're competing with a concept of massified labour that is no longer reliant on bodies.

You mention the impact on young people there. What are you finding, as a lecturer to people about to enter the workforce?

Robbie: Young people are really scared. They’re not sure what their work is going to be, and they don’t know whether it will even be possible to have a career. My students are telling me that they’re holding on to jobs in hospitality and retail that they hate because they really don’t know what the future holds for them. We need to do more to help the next generation of workers get started.

Outside of work impacts, there’s emergent research on the impacts of AI on children and young people’s relationships too, particularly attachment to chatbots and celebrity emulators. Are you seeing this in your work?

Robbie: We are. It's about lack of control in their lives, or lack of substance around personal connection they're not getting elsewhere. AI is appealing because it can be extremely obsequious — it infers what you want and gives it to you. It's mimicking a relationship that's extremely inequitable.

There are psychic effects on individuals using AI to address shortcomings — access to therapy, mental health services, partners. We've created social conditions where young people feel so disconnected that an AI companion seems like their best option.

I don't think we should have a situation where only one person in a relationship is being served.

We’re seeing early attempts to regulate access now, particularly around age verification. Can this actually work without mass surveillance?

Robbie: First I’d say that there's nothing biologically magical that happens between being aged 17 and 364 days and turning 18, so age controls are fairly arbitrary.

On top of that, I cannot see age control working without a government escrow database with facial recognition against government ID — meaning surveillance of everyone's internet use.

It'll lead to weird enforcement inequalities. People signing into a government system for Facebook, but hardcore pornographic sites? No one's trusting those with biometric data.

What should we be thinking about as a society, particularly in relation to the impact of AI on young people?

Robbie: Literacy is absolutely necessary, but not just detecting AI content or knowing how to use it. It's a deeper ethical question about our involvement in these larger systems. Is it worthwhile using AI to develop a fitness plan? Maybe. But what are the consequences for our agency and self-direction?

It's like climate change. These problems are much bigger than anything any one individual can do anything about. Even working collectively at the scale of hundreds of thousands leads to questionable impact. You're not going to stave off AI's impacts just by having people individually doing things. We need to think collectively, through political action, because it's not going to be solved at an individual level.

What are the hard things we actually need to do?

Robbie: It's going to be easy to tell your kids what to do. It's easy to make individual decisions. But what are the hard things we're going to do collectively as private citizens, or in terms of legislation?

Telling a kid to use AI more so they understand it? That's not going to fix anything. We need to have a bigger conversation as a society about who we want to be.

Dr Robbie Fordyce is a lecturer in Communications and Media Studies at Monash University. He recently provided evidence to the Australian parliamentary inquiry into the 2025 federal election.



We pay our respects to the traditional custodians of the lands on which we live and work, and to the traditional custodians of the lands and waters which we may visit upon in our work. We acknowledge their elders past and present. Indigenous sovereignty has never been ceded. It always was, and always will be Aboriginal land.


Paper Giant is a proudly inclusive organisation and an ally of the LGBTIQ+ community.