New research reveals that threat actors are exploiting exposed cloud credentials to hijack enterprise AI systems within minutes of a leak, with recent incidents showing large language model (LLM) infrastructure compromised in under 19 minutes.
Dubbed LLMjacking, this attack vector targets non-human identities (NHIs) – API keys, service accounts, and machine credentials – to bypass traditional security controls and monetize stolen generative AI access.
Security firm Entro Labs recently planted functional AWS keys on GitHub, Pastebin, and Reddit as part of a controlled experiment to study attacker behavior.
Their research uncovered a systematic four-phase attack pattern:
Credential Harvesting: Automated bots scan public repositories and forums using Python scripts to detect valid credentials, with 44% of NHIs exposed via code repositories and collaboration platforms.
Rapid Validation: Attackers performed initial API calls like GetCostAndUsage within 9-17 minutes of exposure to assess account value, avoiding predictable calls like GetCallerIdentity to evade detection.
Model Enumeration: Intruders executed GetFoundationModelAvailability requests via AWS Bedrock to catalog accessible LLMs – including Anthropic’s Claude and Amazon Titan – mapping available attack surfaces.
Exploitation: Automated InvokeModel attempts targeted compromised endpoints, with researchers observing 1,200+ unauthorized inference attempts per hour across experimental keys.
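The reconnaissance described above takes only a handful of documented AWS API calls, which is also the sequence defenders should expect to surface in their logs. The Python sketch below is illustrative only: it assumes a boto3 environment with the stolen key pair loaded into the default credential chain, approximates the model-discovery step with the closely related ListFoundationModels operation rather than the GetFoundationModelAvailability call Entro observed, and uses a placeholder model ID and date range.

```python
import json
import boto3

# Phase 2 - rapid validation: a single Cost Explorer call confirms the key is
# live and hints at how valuable the account is. (Dates here are illustrative.)
ce = boto3.client("ce", region_name="us-east-1")
spend = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-01-01", "End": "2025-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
)
print(spend["ResultsByTime"][0]["Total"]["UnblendedCost"])

# Phase 3 - model enumeration: catalog which Bedrock foundation models the
# compromised account can reach.
bedrock = boto3.client("bedrock", region_name="us-east-1")
models = [m["modelId"] for m in bedrock.list_foundation_models()["modelSummaries"]]
print(models)

# Phase 4 - exploitation: unauthorized inference against an accessible model.
# The model ID is a placeholder; attackers iterate over whatever enumeration returned.
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
response = runtime.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    contentType="application/json",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "..."}],
    }),
)
```

The management calls in this sequence (the Cost Explorer query and the model enumeration) are recorded by CloudTrail by default, which makes them useful anchors for detecting this pattern from unfamiliar principals.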
The Storm-2139 cybercrime group recently weaponized this methodology against Microsoft Azure AI customers, exfiltrating API keys and abusing them to generate illicit content distributed via the dark web.
Entro’s simulated breach revealed attackers combining automated scripts with manual reconnaissance: 63% of initial accesses used Python SDKs, while 37% employed Firefox user agents for interactive exploration via the AWS console.
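That user-agent split is visible to defenders as well. As a minimal sketch of the idea, assuming CloudTrail is enabled and filtering on GetCostAndUsage purely as an example event name, recent API activity can be bucketed by user agent to separate scripted access from interactive console use:

```python
import json
from collections import Counter

import boto3

# Pull recent CloudTrail management events and bucket them by user agent.
# Filtering on GetCostAndUsage is only an example; any sensitive call works.
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "GetCostAndUsage"}],
    MaxResults=50,
)["Events"]

agents = Counter()
for event in events:
    detail = json.loads(event["CloudTrailEvent"])
    ua = detail.get("userAgent", "unknown").lower()
    # Boto3/botocore user agents indicate scripted access; browser strings
    # (e.g. containing "firefox") indicate interactive console exploration.
    if "boto3" in ua or "botocore" in ua:
        agents["python-sdk"] += 1
    elif "firefox" in ua or "mozilla" in ua:
        agents["browser/console"] += 1
    else:
        agents[ua] += 1

print(agents)
```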
Left uncontained, LLMjacking poses severe risks: with attackers operationalizing leaks in under 20 minutes, real-time secret scanning and automated credential rotation are no longer optional safeguards but critical survival mechanisms in the LLM era.
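A starting point for the scanning side is straightforward: AWS access key IDs follow a well-known pattern, so even a simple pre-commit or CI check can flag them before they reach a public repository. The sketch below is illustrative only (the scanned file list and the IAM user name are placeholders) and pairs a regex scan with the documented boto3 call that deactivates a compromised key:

```python
import re
import sys
from pathlib import Path

import boto3

# AWS access key IDs are 20 characters starting with a known prefix (e.g. AKIA/ASIA).
AWS_KEY_PATTERN = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def scan(paths):
    """Return (file, key) pairs for anything that looks like an AWS access key ID."""
    hits = []
    for path in paths:
        text = Path(path).read_text(errors="ignore")
        for match in AWS_KEY_PATTERN.findall(text):
            hits.append((path, match))
    return hits

def disable_key(user_name, access_key_id):
    """Immediately deactivate a leaked key; rotation and deletion can follow."""
    iam = boto3.client("iam")
    iam.update_access_key(UserName=user_name, AccessKeyId=access_key_id, Status="Inactive")

if __name__ == "__main__":
    findings = scan(sys.argv[1:])
    for path, key in findings:
        print(f"possible AWS access key in {path}: {key}")
    # In a real pipeline, a finding would trigger disable_key() for the owning
    # IAM user and an automated rotation workflow, not just a printout.
    sys.exit(1 if findings else 0)
```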