ComfyUI Users Targeted by Malicious Code Designed to Steal Login Credentials

The research team has recently reported a concerning incident involving the popular Stable Diffusion user interface, ComfyUI.

This event has sent shockwaves through the AI community, highlighting the potential dangers lurking behind seemingly innocuous tools.


While ComfyUI itself remains secure, a malicious custom node uploaded by a user going by “u/AppleBotzz” on Reddit underscores the critical need for vigilance when integrating third-party components into AI workflows.

Our team reviewed the node's code and confirmed the findings.

The Malicious Node: ComfyUI_LLMVISION

According to the vpnMentor report, the “ComfyUI_LLMVISION” node, disguised as a helpful extension, contained code designed to steal sensitive user information, including browser passwords, credit card details, and browsing history.

This stolen data was then transmitted to a Discord server controlled by the attacker.

Disturbingly, the malicious code was cleverly concealed within custom install files for OpenAI and Anthropic libraries, masquerading as legitimate updates, making detection difficult even for experienced users.

Adding to the severity of the situation, the Reddit user who uncovered the malicious activity, u/roblaughter, revealed they fell victim to the attack.

They reported experiencing a wave of unauthorized login attempts on their accounts shortly after installing the compromised node.

This personal account underscores the real and immediate danger such malicious actors pose.


Securing Your Device After Potential Exposure

The Reddit user who exposed this malicious node provided concrete steps for users who suspect they might have been compromised:

  1. Check for Suspicious Files: Search your system for specific files and directories mentioned in the original Reddit post. The malicious node often uses these files to store stolen data.
  2. Uninstall Compromised Packages: Remove suspicious packages, specifically those mimicking OpenAI or Anthropic libraries but with unusual version numbers.
  3. Scan for Registry Alterations: The malicious node may create a specific registry entry. The original Reddit post provides instructions on how to check and clean this.
  4. Run a Malware Scan: Utilize reputable anti-malware software to thoroughly scan your system for any remnants of the malicious code.
  5. Change All Passwords: As a precaution, change passwords for all your online accounts, particularly those related to financial transactions. If you think your banking details or credit card information may have been compromised, contact your bank, inform them of the situation, and cancel your card.
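The second step above, spotting imitation packages with unusual version numbers, can be partially automated. The sketch below lists the installed versions of the two packages this attack imitated so they can be compared manually against the genuine releases on PyPI; the exact malicious version strings are documented in the original Reddit post and are deliberately not hard-coded here.

```python
# Sketch: report the installed versions of the packages the malicious node
# imitated, for manual comparison against known-good releases on PyPI.
from importlib import metadata

SUSPECT_PACKAGES = ["openai", "anthropic"]

def report_versions(packages):
    """Return a dict mapping package name to its installed version (or None)."""
    results = {}
    for name in packages:
        try:
            results[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            results[name] = None  # package is not installed at all
    return results

if __name__ == "__main__":
    for name, version in report_versions(SUSPECT_PACKAGES).items():
        print(f"{name}: {version if version else 'not installed'}")
```

A version that does not exist on the package's official PyPI release history is a strong sign the package came from somewhere else and should be uninstalled.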

Mitigating Risks with Third-Party AI Tools

To mitigate the risks associated with using third-party AI tools, users should:

  • Exercise Extreme Caution: Always verify the authenticity of the source, even within seemingly trustworthy communities.
  • Stick to Reputable Repositories and Developers: Look for well-established sources with a proven track record of security and reliability.
  • Thoroughly Inspect the Code: While this requires a degree of technical knowledge, it is the most effective way to identify potentially malicious activity.
  • Regularly Scan Your System for Malware: Utilize reputable antivirus and anti-malware software to detect and remove threats.
  • Use Strong, Unique Passwords: Enable two-factor authentication whenever possible to add an extra layer of security.
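Inspecting code by hand does not scale across many custom nodes, but a crude indicator scan can triage which files deserve a closer look. The patterns below are illustrative assumptions drawn from this incident (encoded PowerShell invocations, hard-coded Discord webhooks), not a complete or evasion-proof detection rule.

```python
import re
from pathlib import Path

# Illustrative indicators only -- real malware can trivially evade
# simple pattern matching, so treat hits as triage leads, not verdicts.
INDICATORS = {
    "encoded_powershell": re.compile(
        r"powershell[^\n]{0,80}-(?:enc|encodedcommand)", re.IGNORECASE),
    "discord_webhook": re.compile(
        r"discord(?:app)?\.com/api/webhooks", re.IGNORECASE),
    "exec_of_decoded_data": re.compile(
        r"exec\s*\(\s*base64", re.IGNORECASE),
}

def scan_tree(root):
    """Return (file, indicator) pairs for every suspicious match under root."""
    hits = []
    for path in Path(root).rglob("*.py"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than abort the scan
        for name, pattern in INDICATORS.items():
            if pattern.search(text):
                hits.append((str(path), name))
    return hits
```

Pointed at a ComfyUI `custom_nodes` directory, this flags files for manual review; an empty result does not prove a node is safe.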

What the Investigation Shows


When the malicious custom node is first installed in ComfyUI, the Python package manager pulls in its declared dependencies:

  • The install links point not to the genuine OpenAI and Anthropic Python packages but to malicious versions uploaded by the same user.
  • The malicious imitation of the OpenAI package contains a function that runs an encoded PowerShell command.
  • That command uses PowerShell to download the third stage of the malware and execute it.
  • The third stage has been submitted to VirusTotal for analysis.
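Encoded PowerShell commands of this kind are typically passed via PowerShell's `-EncodedCommand` flag, which accepts Base64-encoded UTF-16LE text. The snippet below, using a harmless example payload rather than the actual malware's, shows how an analyst can decode such a string for inspection.

```python
import base64

def decode_powershell_command(encoded):
    """Decode a PowerShell -EncodedCommand payload (Base64 of UTF-16LE text)."""
    return base64.b64decode(encoded).decode("utf-16-le")

# Harmless example payload, encoded the same way an attacker would encode theirs.
example = base64.b64encode("Write-Host 'hello'".encode("utf-16-le")).decode()
print(decode_powershell_command(example))  # -> Write-Host 'hello'
```

Decoding the command recovers the attacker's plaintext script, which is how researchers identified the download URL for the third stage.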

Aside from downloading the next stage of the malware, the second stage has malicious capabilities of its own. It can:

  • Steal cryptocurrency wallets.
  • Capture screenshots of the user's screen and send them to a malicious webhook.
  • Collect extensive device information, such as processor brand, location, total CPU usage, available memory, and more.
  • Gather IP information, lists of files and directories, the contents of the user's clipboard, and more.
  • Steal files that contain certain keywords or have certain extensions.

The future of AI holds incredible promise, but it is our responsibility to navigate this landscape with enthusiasm and caution.

By staying informed, remaining vigilant, and adopting proactive security measures, users can harness the power of AI while mitigating the risks posed by those seeking to exploit this transformative technology for malicious purposes.

Recent developments, such as a new AI tool called FraudGPT being sold on the Dark Web, the use of AI to generate phishing emails, and instances where Bing’s AI chat responses were hijacked by malvertising, highlight the importance of understanding and addressing the potential risks associated with AI advancements.

By proactively addressing security concerns and promoting responsible AI practices, we can fully realize the benefits of this innovative technology while safeguarding against its misuse.


Divya is a Senior Journalist at Cyber Security news covering Cyber Attacks, Threats, Breaches, Vulnerabilities and other happenings in the cyber world.