3,000+ Dark Web Posts Discuss Using ChatGPT for Illegal Purposes

Threat actors could exploit ChatGPT’s conversational abilities for a multitude of malicious activities, such as generating convincing phishing messages, crafting sophisticated social engineering attacks, and automating the production of misleading content.

Hackers can exploit the model’s capacity to understand and generate human-like text to trick users and automate fraudulent schemes, which makes it an attractive tool for them.


Kaspersky’s Digital Footprint Intelligence service recently discovered more than 3,000 Dark Web posts discussing the use of ChatGPT for illicit purposes.


Spike in Discussions Regarding the Illegal Use of ChatGPT

Researchers noted a significant rise in Dark Web discussions on misusing ChatGPT. From January to December 2023, threat actors discussed using ChatGPT for illegal activities such as creating polymorphic malware, which mutates its own code to evade detection.

One suggestion involved using the OpenAI API to generate malicious code at runtime. Because the requests flow through a legitimate domain, they pose a security threat that is hard to filter. Security analysts have not yet detected such malware in the wild, but it could emerge later.
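For context, the snippet below is a minimal Python sketch of the standard OpenAI API call pattern (the model name and the deliberately harmless prompt are illustrative, not drawn from the forum posts). It shows why defenders cannot simply blocklist this traffic: the request terminates at api.openai.com over ordinary TLS, just like any legitimate integration.

```python
# Minimal sketch of a standard OpenAI API call (openai>=1.0 Python client).
# The prompt is deliberately benign; the point is that the HTTPS request
# goes to api.openai.com, a legitimate domain that network filters rarely block.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Write a haiku about network security."}],
)

print(response.choices[0].message.content)
```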

Polymorphic malicious code (Source – Kaspersky)

Threat actors also leverage ChatGPT for routine malicious work, using the AI to tackle challenges such as processing dumps of stolen user data.

Even tasks that require expertise are simplified by ChatGPT’s generated answers, which lowers the barrier to entry into various fields, including criminal ones. This trend may escalate the volume of attacks, as beginners can now perform actions that once demanded experienced teams.

One example involves a user seeking a team for carding and other illegal activities who mentioned actively using AI in code writing, particularly for parsing malware log files. This ease of use poses risks across multiple domains.

Cybercriminal forums have integrated several types of ChatGPT-like tools for standard tasks. Threat actors also use tailored prompts, known as jailbreaks, to unlock additional functionality.

In 2023, researchers found 249 offers to sell such prompt sets, and some users collected them, though not all are intended for illegal actions. AI developers aim to limit harmful content, but their models may still unintentionally reveal sensitive information.

GitHub hosts open-source tools for obfuscating PowerShell code, used by cybersecurity experts and attackers alike. Kaspersky found a cybercrime forum post sharing one such utility for malicious purposes.
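On the defensive side, a common first-pass heuristic is to flag the command-line indicators that obfuscated PowerShell typically leaves behind. The Python sketch below is a simplified illustration assuming access to process command-line telemetry; the patterns and sample command lines are our own, not Kaspersky’s, and a production detection would rely on far richer signals.

```python
import re

# Heuristic patterns commonly associated with obfuscated PowerShell launches
# (illustrative only; real detections combine many more telemetry sources).
SUSPICIOUS = [
    re.compile(r"-e(nc(odedcommand)?)?\b", re.IGNORECASE),    # -e / -enc / -EncodedCommand
    re.compile(r"\bIEX\b|Invoke-Expression", re.IGNORECASE),  # in-memory execution
    re.compile(r"FromBase64String", re.IGNORECASE),           # inline payload decoding
    re.compile(r"-w(indowstyle)?\s+hidden", re.IGNORECASE),   # hidden window
]

def flag_powershell(cmdline: str) -> list[str]:
    """Return the patterns a command line matches, if any."""
    return [p.pattern for p in SUSPICIOUS if p.search(cmdline)]

if __name__ == "__main__":
    samples = [
        "powershell.exe -NoProfile -EncodedCommand SQBFAFgA...",  # hypothetical encoded launch
        "powershell.exe Get-ChildItem C:\\Users",                 # ordinary usage
    ]
    for cmd in samples:
        hits = flag_powershell(cmd)
        print(("SUSPICIOUS: " if hits else "clean:      ") + cmd)
```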

Legitimate utilities are shared for research, but their easy availability can also attract cybercriminals. Projects like WormGPT, XXXGPT, and FraudGPT, ChatGPT analogs that operate without the original’s restrictions, raise serious concerns.

WormGPT was shut down after community backlash, but fake ads offering access to it persist. These phishing pages falsely claim to offer trial versions and demand payment through various methods; despite closing the project, the developers have warned users against the scams.

WormGPT leads in popularity among projects like XXXGPT, WolfGPT, FraudGPT, and DarkBERT. A demo of XXXGPT accepts custom prompts that generate code for threats such as keyloggers.

Despite the simplicity of that code, the ease of generating it raises alarms. Beyond this, stolen ChatGPT accounts flood the market, obtained from malware log files or by hacking premium accounts.

Sellers advise buyers not to alter any account details so that access remains persistent and undetected. Automated accounts with API limits are sold in bundles, facilitating quick switches to a fresh account after one is banned for malicious activity.

Divya is a Senior Journalist at Cyber Security News covering Cyber Attacks, Threats, Breaches, Vulnerabilities, and other happenings in the cyber world.