Researchers Detail Red-Teaming of Malicious AI Use Cases

Researchers investigated potential malicious uses of AI by threat actors and experimented with various AI models, including large language models, multimodal image models, and text-to-speech models. 

Importantly, they did not fine-tune or further train the models, simulating the limited resources threat actors are likely to have. The findings suggest that in 2024, the most likely threats will involve deepfakes and influence operations.


Deepfakes could be used to impersonate executives and can be created with open-source tools, while AI-generated audio and video could be employed to enhance social engineering campaigns.

AI-powered Social Engineering Attacks

Recorded Future’s Insikt Group predicts a rise in AI-powered social engineering attacks in 2024, in which open-source deepfake tools enable the impersonation of executives and the creation of realistic audio and video content, boosting social engineering campaigns.

Malicious actors will use AI to create fake media outlets and clone websites at lower cost, while AI could also help malware developers evade detection and assist threat actors in identifying vulnerabilities and locating sensitive targets.

These advancements call for effective security measures for artificial intelligence to counter the emerging risks.



Open-source generative AI models are approaching the effectiveness of commercial solutions, potentially democratizing deepfake creation and increasing the number of malicious actors.

Security vulnerabilities also exist in commercial generative AI products, leaving them susceptible to exploitation.

These factors, coupled with rising investment in generative AI across industries, will give attackers more sophisticated tools regardless of their resources, significantly increasing the number of organizations vulnerable to deepfake attacks.

Organizations are facing an evolving threat landscape where attackers are exploiting digital assets beyond traditional security perimeters.

This necessitates broadening the defended attack surface to include executives’ voices and likenesses, website content and branding, and overall public image, as all of these can be manipulated for social engineering attacks.

The rise of sophisticated AI-powered threats, such as self-modifying malware that bypasses detection systems, demands more adaptive security solutions that can stay ahead of attackers’ tactics.

Key Findings:

Adversarial actors can leverage AI for malicious purposes, including deepfakes that impersonate executives, achievable with short training clips and open-source tools, though real-time manipulation still presents hurdles.

AI facilitates large-scale disinformation campaigns and the cloning of legitimate websites, although human effort is still needed to craft convincing forgeries.

Malware can utilize generative AI to obfuscate code and bypass detection, but maintaining functionality after such alterations remains a challenge. 

Multimodal AI can analyze publicly available images for reconnaissance, although extracting actionable intelligence from this data still requires human expertise.


Gurubaran is a co-founder of Cyber Security News and GBHackers On Security. He has 10+ years of experience as a Security Consultant, Editor, and Analyst in cybersecurity, technology, and communications.