The FBI Says Hackers Are Using AI for Cyberattacks





Speaking to journalists, the FBI said hackers are now relying on AI programs to aid them in cyberattacks.

According to the agency, artificial intelligence helps criminals create malware and run phishing schemes.

Artificial intelligence is helping hackers

Early on, cybercriminals used ChatGPT to develop malware capable of evading security software.

They have since gone a step further, building AI models designed specifically for criminal schemes, most notably WormGPT and FraudGPT.

The former can generate emails that look exactly like those from your bank or employer. The latter is known for crafting credit card fraud schemes.

The authorities recently spotted hackers advertising WormGPT on cybercrime forums, where sellers describe it as a ChatGPT-style tool without any ethical boundaries or limitations.

The emergence of such tools was widely anticipated, largely because ChatGPT and similar chatbots refuse requests that could serve illegal purposes.

Still, there is plenty cybercriminals can do with mainstream AI models. For example, they can use them to build fake websites and later fill them with malicious links.

They also rely on information from ChatGPT to pose as experts in various fields as part of their schemes.

While the companies behind “good-guy” chatbots can be held accountable for their AI models, it’s extremely difficult to enforce any regulations on underground chatbots.

Cybersecurity experts raised concerns about hackers abusing AI models as soon as these tools exploded in popularity.

Researchers began looking for examples of this and have already uncovered some worrying trends.

For example, they found a forum user posting advice on how to use ChatGPT to craft Business Email Compromise (BEC) attack emails.

Not long ago, grammar and spelling mistakes often gave away messages from internet criminals. With AI, even attackers who don’t speak English well can produce as many genuine-looking emails as they want.

Researchers even tried getting ChatGPT itself to draft phishing emails and, with very little manipulation, encountered no resistance.

Even more concerning, however, is the rise of AI chatbots built primarily to create malicious content.

AI models may still be in their infancy, but hackers are already looking for more ways to employ them in their schemes. For example, they have started using voice cloning in their scams.

