When 'Just Paste It Into ChatGPT' Goes Horribly Wrong

Published on: 08-10-2025

No need for hackers; you're leaking data all by yourself...

Written by Stephen Cox

Data breach

#CyberSecurity #PerthBusiness #DigitalSafety #SmallBiz #CyberSafe #EssentialEight

Another week, another data breach

This time, a NSW government contractor decided it was a great idea to upload sensitive flood victim data—yes, actual personal and health information—into ChatGPT. Because apparently, nothing says “responsible data handling” like handing it to a global AI platform with servers who-knows-where.

According to ABC News, the spreadsheet contained private details of up to 3,000 Northern Rivers flood victims—names, addresses, contact info, and health records. The kicker? It wasn’t disclosed for months. So not only was there a breach, but also a delay in owning up to it. Double yikes.

What Went Wrong (Besides Everything)

Here’s the short version: The NSW Reconstruction Authority’s contractor uploaded the data into ChatGPT—a tool that operates outside Australia’s data protection laws. That simple act triggered multiple issues: privacy violations, data sovereignty breaches, and a possible AI training nightmare.

Australian privacy law (yep, that good old Privacy Act 1988) is crystal clear: personal information must be protected, handled accountably, and only sent overseas under strict conditions, especially when it's sensitive information like health records. Uploading it to an overseas AI tool? That’s like sending your tax return to a random Reddit user for “feedback.”

And if you’re caught? We’re talking fines of up to $50 million or 30% of company turnover, whichever is greater. Not exactly a rounding error.

The Data Sovereignty Problem: Whose Rules Apply?

Data sovereignty means your data should be governed by Australian law. But when you feed info into tools like ChatGPT or Copilot, you lose that control. The data could end up anywhere: California, Dublin, or a mystery server farm no one can quite point to on a map.

That’s not just a technical problem—it’s a legal one. Under Australian Privacy Principle 8, organisations must ensure overseas recipients actually protect the data. Spoiler: most generative AI tools don’t.

The Hidden AI Risk Nobody Mentions

Tools like ChatGPT and Microsoft Copilot are brilliant for speeding up work—until they’re not. Anything you type in could be stored, reused for model training, and impossible to delete later. That’s a compliance nightmare wrapped in convenience.

Most employees don’t realise this. And honestly, that’s not their fault. When businesses don’t train staff properly or set clear policies, “helpful” becomes “hazardous” fast.

Once a breach hits, the costs go way beyond a bad headline. You’re looking at mandatory reporting, forensic investigations, compensation claims, and a PR mess that takes months to clean up. Even if you wanted to delete the data from ChatGPT, good luck finding where it actually lives now.

How to Avoid Becoming the Next Headline

🚫 Never upload identifiable personal data into generative AI tools; strip or de-identify it first (see the sketch after this list).

🧠 Train your team—make sure they know what “AI safe” actually means.

🔍 Audit regularly for compliance with the Privacy Act and Australian Privacy Principles.

📞 Ask before you act. When in doubt, talk to privacy or legal experts.
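If you’re wondering what “strip or de-identify it first” looks like in practice, here’s a minimal, illustrative sketch in Python. The file names, column names, and regex patterns are assumptions made up for the example; simple pattern matching catches emails and phone numbers but not names, addresses, or health details, so treat this as a starting point for a conversation with your privacy people, not a compliance control.

```python
import csv
import re

# Illustrative patterns only: they catch emails and common Australian phone
# formats, but NOT names, street addresses, or health details.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"(?:\+61[ -]?|0)\d(?:[ -]?\d){8}")

# Column names are assumptions for this example; drop anything that directly
# identifies a person before the file goes anywhere.
DROP_COLUMNS = {"name", "address", "date_of_birth", "health_notes"}


def redact(text):
    """Replace obvious identifiers in free text with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text


def sanitise_csv(src_path, dst_path):
    """Copy a spreadsheet, removing identifying columns and redacting the rest."""
    with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        kept = [c for c in reader.fieldnames if c.lower() not in DROP_COLUMNS]
        writer = csv.DictWriter(dst, fieldnames=kept)
        writer.writeheader()
        for row in reader:
            writer.writerow({c: redact(row[c] or "") for c in kept})


if __name__ == "__main__":
    # Hypothetical file names for illustration only.
    sanitise_csv("assistance_requests.csv", "assistance_requests_deidentified.csv")
```

Even then, the safest column is the one you never export: if the AI tool doesn’t need a field to do its job, leave it out of the file entirely.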

Because one careless “copy–paste” could cost you millions, your reputation, and a few sleepless nights.

If you’re unsure how to protect your data or use AI safely, let’s talk before you end up in the headlines. 👉 Contact Transfigure IT