Cybercriminals Are Showing Hesitation to Use AI When Executing Cyber Attacks

Media reports highlight the sale of LLMs like WormGPT and FraudGPT on underground forums. Fears over their potential to create mutating malware have fueled a craze in the cybercriminal underground.

Concerns are mounting over the dual-use nature of LLMs, with tools like WormGPT raising particular alarm.

The shutdown of WormGPT adds uncertainty, leaving questions about how threat actors view and use such tools beyond publicly reported incidents.


Cybercriminals Are Showing Hesitation

AI isn’t a hot topic on the forums Sophos researchers examined: across two forums they found fewer than 100 AI-related posts, compared with almost 1,000 posts about cryptocurrencies.

Possible reasons include AI’s perceived infancy and its lower speculative value for threat actors compared to established technologies.

LLM-related forum posts focus heavily on jailbreaks, tricks used to bypass the models’ self-censorship. Worryingly, these jailbreaks are already shared publicly across the internet on various platforms.

Despite threat actors’ skills, there’s little evidence of them developing novel jailbreaks.

Many LLM-related posts on Breach Forums involve compromised ChatGPT accounts for sale, reflecting a trend of threat actors seizing opportunities on new platforms.

ChatGPT accounts for sale (Source - Sophos) 

The target audience and what buyers intend to do with these accounts remain unclear. During their research, the researchers also observed eight other models offered as a service or shared on forums.

Discussions on the Exploit forum tend toward aspirational, AI-related topics, while lower-end forums focus on hands-on experiments. Skilled threat actors lean toward future applications of AI, while less skilled actors attempt to use it now despite its limitations.

Besides this, researchers also observed that threat actors used AI to generate code for the following types of illicit tools:

  • RATs
  • Keyloggers
  • Infostealers

Some users explore questionable applications of ChatGPT, including social engineering and the development of non-malware tools.

Skilled users on Hackforums leverage LLMs for coding tasks, while less skilled ‘script kiddies’ aim to generate malware.

Operational security errors are evident: one user on XSS openly discussed a malware distribution campaign that used ChatGPT to generate a celebrity selfie image as a lure.

Selfie generator (Source - Sophos)

On platforms like Exploit, users also voice operational security concerns about using LLMs for cybercrime.

Some users on Breach Forums suggest developing private LLMs for offline use. Meanwhile, philosophical discussions about AI’s ethical implications reveal a divide among threat actors.


Tushar is a cybersecurity content editor with a passion for creating captivating and informative content. With years of experience in cybersecurity, he covers cybersecurity news, technology, and other news.