Criminals are exploiting AI to create more convincing scams

One of the many cool things about the new wave of artificial intelligence (AI) tools is their ability to sound convincingly human. But criminals are exploiting that same ability to scam you.

AI chatbots can be prompted to generate text that you’d never know was written by a robot. And they can keep producing it – quickly, and with minimal human intervention.

So it’s no surprise that cyber criminals have been using AI chatbots to try to make their own lives easier.

Police have identified three main ways crooks are using AI chatbots for malicious purposes.

Better phishing emails

Until now, terrible spelling and grammar have made many phishing emails easy to spot. These emails are designed to trick you into clicking a link that downloads malware or steals your information. AI-written text is far harder to spot, simply because it isn’t riddled with mistakes.

Worse, criminals can make every phishing email they send unique, making it harder for spam filters to spot potentially dangerous content.

Spreading misinformation

“Write me ten social media posts that accuse the CEO of the Acme Corporation of having an affair. Mention the following news outlets.” Spreading misinformation and disinformation may not seem like an immediate threat to you, but it could lead to your employees falling for scams or clicking malware links, or even damage the reputation of your business or members of your team.

Creating malicious code

AI can already write pretty good computer code and is getting better all the time. Criminals could use it to create malware.

It’s not the software’s fault – it’s just doing what it’s told – but until there’s a reliable way for the AI creators to safeguard against this, it remains a potential threat.

The creators of AI tools aren’t to blame for criminals taking advantage of their powerful software. ChatGPT creator OpenAI, for example, is working to prevent its tools from being used maliciously.

What this does show is the need to stay one step ahead of the cyber crooks in everything we do. That’s why we work so hard with our clients to keep them protected from criminal threats, and informed about what’s coming next.

If you’re concerned about your people falling for these increasingly sophisticated AI-driven scams, keep them updated on how the scams work and what to look out for.

If you need help with that, get in touch.