Cyber Security News

Vigil: Open-source Security Scanner for LLM Models Like ChatGPT

An open-source security scanner, developed by Git Hub user Adam Swanda, was released to explore the security of the LLM model. This model is utilized by chat assistants such as ChatGPT.

This scanner, which is called ‘Vigil’, is specifically designed to analyze the LLM model and assess its security vulnerabilities. By using Vigil, developers can ensure that their chat assistants are safe and secure for use by the public.

As the name suggests, a large language model can understand and create any language. LLMs learn these skills by using huge amounts of data to learn billions of factors during training and by using a lot of computing power while they are studying and running.

Vigil Tool

If you want to ensure the safety and security of your system, Vigil is a useful tool for you. Vigil is a Python module and REST API that can help you identify prompt injections, jailbreaks, and other potential threats by evaluating Large Language Model prompts and responses against various scanners.

The repository also includes datasets and detection signatures, making it easy for you to begin self-hosting. With Vigil, you can rest assured that your system is secure and protected.

Currently, this application is in alpha testing and should be considered experimental.

Document
Protect Your Storage With SafeGuard

Is Your Storage & Backup Systems Fully Protected? – Watch 40-second Tour of SafeGuard

StorageGuard scans, detects, and fixes security misconfigurations and vulnerabilities across hundreds of storage and backup devices.

ADVANTAGES:

  • Examine LLM prompts for frequently used injections and inputs that present risk.
  • Use Vigil as a Python library or REST API
  • Scanners are modular and easily extensible
  • Evaluate detections and pipelines with Vigil-Eval (coming soon)
  • Available scan modules
  • Supports local embeddings and/or OpenAI
  • Signatures and embeddings for common attacks
  • Custom detections via YARA signatures

To protect against known attacks, one effective approach is using a Vigil to prompt injection technique. This method involves detecting known techniques used by attackers, thereby strengthening your defense against the more common or documented attacks.

Prompt Injection When an attacker generates inputs to manipulate a large language model (LLM), the LLM becomes vulnerable and unknowingly carries out the attacker’s aims. 

This can be done directly by “jailbreaking” the prompt on the system or indirectly by manipulating external inputs, which may result in social engineering, data exfiltration, and other problems.

If you want to load the vigil by appropriate datasets for embedding the model with the loader.py utility. 

The set scanners examine the submitted prompts; every one can help with the ultimate identification. Scanners are:

  • Vector database
  • YARA / heuristics
  • Transformer model
  • Prompt-response similarity
  • Canary Tokens

Experience how StorageGuard eliminates the security blind spots in your storage systems by trying a 14-day free trial.

Sujatha

Sujatha is a Cyber security content editor with a passion for creating captivating and informative content. With years of experience under her belt in Cyber Security, she is covering Cyber Security News, technology and other news.

Recent Posts

New Device Code Phishing Attack Exploit Device Code Authentication To Capture Authentication Tokens

A sophisticated phishing campaign, identified by Microsoft Threat Intelligence, has been exploiting a technique known…

1 hour ago

RedMike Hackers Exploited 1000+ Cisco Devices to Gain Admin Access

Researchers observed a sophisticated cyber-espionage campaign led by the Chinese state-sponsored group known as "Salt…

3 hours ago

AMD Ryzen DLL Hijacking Vulnerability Let Attackers Execute Arbitrary Code

A high-severity security vulnerability, identified as CVE-2024-21966, has been discovered in the AMD Ryzen™ Master…

3 hours ago

PostgreSQL Terminal Tool Injection Vulnerability Allows Remote Code Execution

Researchers have uncovered a high-severity SQL injection vulnerability, CVE-2025-1094, affecting PostgreSQL’s interactive terminal tool, psql. …

4 hours ago

WinZip Vulnerability Let Remote Attackers Execute Arbitrary Code

A newly disclosed high-severity vulnerability in WinZip, tracked as CVE-2025-1240, enables remote attackers to execute…

8 hours ago

Hackers Actively Exploiting New PAN-OS Authentication Bypass Vulnerability

Palo Alto Networks has released a patch for a high-severity authentication bypass vulnerability, identified as…

9 hours ago