Security researchers at vpnMentor recently reported a concerning incident involving ComfyUI, a popular user interface for Stable Diffusion.
This event has sent shockwaves through the AI community, highlighting the potential dangers lurking behind seemingly innocuous tools.
While ComfyUI itself remains secure, a malicious custom node uploaded by a user going by “u/AppleBotzz” on Reddit underscores the critical need for vigilance when integrating third-party components into AI workflows.
Our team reviewed the node's code and confirmed the report's findings.
According to the report, the “ComfyUI_LLMVISION” node, disguised as a helpful extension, contained code designed to steal sensitive user information, including browser passwords, credit card details, and browsing history.
This stolen data was then transmitted to a Discord server controlled by the attacker.
Disturbingly, the malicious code was concealed within custom install files for the OpenAI and Anthropic libraries that masqueraded as legitimate updates, making detection difficult even for experienced users.
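Since the reported exfiltration channel was a Discord server, one practical first check is to search a ComfyUI installation for hardcoded Discord webhook URLs. The sketch below is illustrative only; the install path and the file-extension list are assumptions, not details from the report:

```python
# Illustrative sweep: flag files under the ComfyUI custom_nodes tree that
# contain hardcoded Discord webhook URLs, the exfiltration channel reported
# for this incident. The path and extensions are assumptions for the sketch.
import re
from pathlib import Path

# Matches both discord.com and discordapp.com webhook endpoints.
WEBHOOK_RE = re.compile(r"https://(?:\w+\.)?discord(?:app)?\.com/api/webhooks/\S+")

def scan(root: str) -> None:
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in {".py", ".txt", ".json", ".cfg"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in WEBHOOK_RE.findall(text):
            print(f"{path}: {match}")

scan("ComfyUI/custom_nodes")  # adjust to your actual install location
```

A clean result proves nothing on its own, since a payload can be obfuscated or fetched at runtime, but any hit is a strong signal to quarantine the node.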
Adding to the severity of the situation, the Reddit user who uncovered the malicious activity, u/roblaughter, revealed they fell victim to the attack.
They reported experiencing a wave of unauthorized login attempts on their accounts shortly after installing the compromised node.
This personal account underscores the real and immediate danger such malicious actors pose.
The Reddit user who exposed this malicious node also provided concrete remediation steps for users who suspect they might have been compromised, such as removing the node and rotating any credentials the malware could have harvested.
To mitigate the risks associated with third-party AI tools more generally, users should vet custom nodes before installing them, download extensions only from trusted sources, and monitor their accounts for suspicious activity.
When the malicious custom node is first installed in ComfyUI, the Python package manager pulls in several additional packages, including the tampered builds of the OpenAI and Anthropic client libraries described above.
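A quick first-pass audit, sketched below under the assumption that the node was installed into the current Python environment, is to compare the locally installed openai and anthropic versions against the releases actually published on PyPI:

```python
# First-pass audit: check which openai/anthropic versions are installed
# locally and whether each version string matches a release published on
# PyPI (queried via PyPI's public JSON API). Assumes network access.
import json
from importlib.metadata import PackageNotFoundError, version
from urllib.request import urlopen

for pkg in ("openai", "anthropic"):
    try:
        local = version(pkg)
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
        continue
    with urlopen(f"https://pypi.org/pypi/{pkg}/json") as resp:
        releases = json.load(resp)["releases"]
    note = "" if local in releases else "  <-- not a published PyPI release"
    print(f"{pkg}: {local}{note}")
```

Note that a tampered wheel can still report a legitimate version string, so this only catches the sloppier case; when in doubt, force-reinstall both libraries from PyPI and clear the pip cache.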
Aside from downloading the next stage of the malware, the second stage has malicious capabilities of its own, including collecting sensitive data such as the browser passwords, payment details, and browsing history described above.
The future of AI holds incredible promise, but it is our responsibility to navigate this landscape with enthusiasm and caution.
By staying informed, remaining vigilant, and adopting proactive security measures, users can harness the power of AI while mitigating the risks posed by those seeking to exploit this transformative technology for malicious purposes.
Recent developments, such as a new AI tool called FraudGPT being sold on the Dark Web, the use of AI to generate phishing emails, and instances where Bing’s AI chat responses were hijacked by malvertising, highlight the importance of understanding and addressing the potential risks associated with AI advancements.
By proactively addressing security concerns and promoting responsible AI practices, we can fully realize the benefits of this innovative technology while safeguarding against its misuse.