ChatGPT for Vulnerability Detection – Prompts Used and their Responses

Software vulnerabilities are essentially errors in code that malicious actors can exploit. Advanced language models such as CodeBERT, GraphCodeBERT, and CodeT5 can detect these vulnerabilities, provide detailed analyses, and even recommend patches to...
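As a rough illustration of the kind of prompting the article discusses, the sketch below sends a deliberately vulnerable snippet to a chat model and asks for a security review. It assumes the OpenAI Python client; the model name, prompt wording, and the SQL-injection example are illustrative choices, not details taken from the article.

# Minimal sketch: ask a chat model to review a code snippet for vulnerabilities.
# The model name, prompts, and the vulnerable example are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = '''
def get_user(db, username):
    # User input concatenated directly into the query (SQL injection risk)
    return db.execute("SELECT * FROM users WHERE name = '" + username + "'")
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a security reviewer. Identify vulnerabilities, "
                    "explain the risk, and suggest a patched version."},
        {"role": "user",
         "content": f"Review this code for security vulnerabilities:\n{SNIPPET}"},
    ],
)

print(response.choices[0].message.content)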
The Security Dimensions of Adopting LLMs

The incredible capabilities of LLM (Large Language Models) enable organizations to engage in various useful activities such as generating branding content, localizing content to transform customer experiences, precise demand forecasting, writing code, enhanced supplier...
Google Detailed Dangerous Red Team Attacks to Hack AI Systems

Pursuing innovation demands clear security standards in the public and private sectors for deploying AI technology responsibly and keeping AI models secure. With the rapid rise...
OpenAI Released ChatGPT Enterprise With SOC 2 Compliance & Data Encryption

Several reports have indicated data leakage from ChatGPT ever since its release by Microsoft-backed OpenAI in November 2022. Additionally, threat actors have been abusing the platform to gain unauthorized access or leak sensitive...
Researchers Detailed Red Teaming Malicious Use Cases For AI

Researchers investigated potential malicious uses of AI by threat actors and experimented with various AI models, including large language models, multimodal image models, and text-to-speech models. Importantly, they did not fine-tune or provide additional training...
PyRIT: Automated AI Toolkit For Security Professionals

A new Python automation framework, "PyRIT," has been released for risk identification in generative AI. The framework can help security professionals and machine learning engineers find risks in...
HackerGPT 2.0 – A ChatGPT-Powered AI Tool for Ethical Hackers & Cyber Community

HackerGPT is an advanced AI tool built specifically for the cybersecurity industry, useful for people engaged in ethical hacking and cybersecurity research, such as bug bounty hunters. This sophisticated assistant is at the forefront of...
Researchers Hacked Google A.I: Earned $50,000 Bounty

At Google's LLM bugSWAT event in Las Vegas, researchers uncovered and reported bugs in the company's Bard AI (since rebranded as Gemini) and received a $50,000 reward. Roni Carta, Justin Gardner, and Joseph Thacker worked...
CISA & NCSC Disclose Guidelines for Secure AI System Development

The US Cybersecurity and Infrastructure Security Agency (CISA) and the UK National Cyber Security Centre (NCSC) released the Guidelines for Secure AI System Development to address the integration of artificial intelligence (AI), cybersecurity, and...
Hackers Have Earned More Than $300 Million on the HackerOne Platform

The ethical hacking community has earned $300 million in total all-time rewards on the HackerOne platform. In addition, thirty hackers have made over a million dollars on the network; one hacker's total profits have surpassed four...
