The rapid growth of generative AI tools like ChatGPT is actively reshaping the current threat landscape, as hackers exploit the technology for several illicit purposes.
Shortly after ChatGPT's debut, hackers swiftly developed their own versions of the text-generating technology modeled on OpenAI's ChatGPT.
Threat actors can exploit these advanced AI systems to craft sophisticated malware and phishing emails that trick targets into handing over their login credentials.
Hackers are Creating ChatGPT Clones
Since July, security researchers have observed several dark web posts promoting threat actors' self-made large language models (LLMs) that mimic legitimate chatbots such as:-
- Google Bard
However, unlike their legitimate counterparts, all of these hacker-built chatbots generate text responses for illegal purposes.
The authenticity of these chatbots is questionable, given cybercriminals' general lack of trustworthiness; beyond that, the potential that the sellers are simply scamming buyers or exploiting AI hype raises serious concerns.
Meanwhile, security researchers are training their own chatbots on dark web data and using large language models to fight cybercrime and build stronger defense mechanisms.
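To illustrate the kind of automated screening such defenses build on, here is a minimal sketch of a keyword-weighted phishing triage check. The indicator phrases, weights, and threshold are illustrative assumptions for this article, not any researcher's actual model; real LLM-based defenses learn far richer signals than fixed keywords.

```python
# Minimal phishing-triage sketch: score a message against weighted
# indicator phrases and flag it if the score meets a threshold.
# The phrase list and weights below are illustrative assumptions only.

PHISHING_INDICATORS = {
    "urgent": 2,
    "verify your account": 3,
    "password": 2,
    "wire transfer": 3,
    "click here": 2,
}

def phishing_score(message: str) -> int:
    """Sum the weights of every indicator phrase found in the message."""
    text = message.lower()
    return sum(weight for phrase, weight in PHISHING_INDICATORS.items()
               if phrase in text)

def is_suspicious(message: str, threshold: int = 4) -> bool:
    """Flag messages whose indicator score meets the threshold."""
    return phishing_score(message) >= threshold

scam = "URGENT: verify your account password before the wire transfer"
legit = "Lunch at noon tomorrow?"
print(is_suspicious(scam), is_suspicious(legit))  # True False
```

A crude heuristic like this is exactly what novice attackers once had to evade by hand; the concern with tools like WormGPT is that they generate text polished enough to sail past simple keyword filters, pushing defenders toward LLM-based classifiers.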
Malicious AI Chatbots Discovered So Far
Here below, we have mentioned the malicious AI chatbots that cybersecurity researchers have discovered so far:-
- WormGPT
- FraudGPT
WormGPT, spotted by researcher Daniel Kelley, lacks safeguards and ethical limits. The model is marketed for phishing and lowers the barrier for novice cybercriminals by offering unlimited characters and code formatting.
When Kelley tested the system, it generated a convincing and strategically sharp email for a business email compromise scam, with alarmingly effective results.
The creator of FraudGPT highlighted the following key features:-
- Undetectable malware creation
- Leak finding
- Scam text crafting
Besides this, the creator advertised FraudGPT on multiple dark-web forums and Telegram channels, sharing a video of the chatbot generating a scam email and attempting to sell access to the system for $200 per month or $1,700 per year.
The authenticity of these chatbots is hard to confirm, since their advertised capabilities are questionable and scammers routinely scam each other.
While some hints suggest WormGPT's seller is relatively reliable, FraudGPT's credibility is less certain, as several of the seller's posts have been removed.
Apart from this, cybersecurity researchers at Check Point doubt that these systems surpass commercial LLMs like ChatGPT or Bard.
Even so, threat actors' interest in LLMs is booming dramatically, so these developments are not unexpected.
These advancements have also prompted warnings from the FBI and Europol about generative AI's potential to accelerate fraud, impersonation, and social engineering in cybercrime.