The Federal Bureau of Investigation (FBI) issued an urgent warning Thursday about an ongoing malicious campaign in which cybercriminals impersonate senior US officials through text messages and AI-generated voice calls.
The sophisticated attack, which began in April 2025, primarily targets current and former federal and state government officials and their contacts, raising serious concerns about potential information theft and financial fraud.
“If you receive a message claiming to be from a senior US official, do not assume it is authentic,” the FBI warned in its public service announcement.
Authorities are particularly concerned because these attacks could compromise government communication channels and expose sensitive information. The campaign employs two primary tactics, known as “smishing” and “vishing.”
Smishing combines SMS texting with phishing techniques, while vishing uses voice messages, often enhanced with AI-generated deepfakes that can convincingly mimic the voices of senior officials.
This technology has advanced to the point that authentic and simulated voices are often indistinguishable without trained analysis.
US Govt Officials Impersonated
“The malicious actors have sent text messages and AI-generated voice messages that claim to come from a senior US official in an effort to establish rapport before gaining access to personal accounts,” detailed the FBI announcement.
One common approach involves the attacker sending malicious links disguised as invitations to switch to a separate messaging platform.
Security experts note this scam is particularly dangerous because attackers can target other government officials using trusted contact information once they gain access to an official’s account.
The compromised credentials allow cybercriminals to execute further attacks, elicit sensitive information, or solicit funds through impersonation.
The FBI has confirmed that many targeted individuals are “current or former senior US federal or state government officials and their contacts,” indicating a sophisticated operation potentially aimed at compromising government communications.
To help identify suspicious messages, the FBI advises verifying the sender’s identity independently rather than using the contact information provided in the message.
Recipients should examine URLs, email addresses, and phone numbers carefully for slight differences that might indicate fraud.
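To make the “slight differences” check concrete, the sketch below shows one simple way such lookalike domains can be spotted programmatically. It is illustrative only and not part of the FBI guidance; the trusted-domain list, the similarity threshold, and the `flag_lookalike` function are assumptions chosen for demonstration.

```python
# Illustrative sketch: flag link domains that closely resemble, but do not
# match, a known-good domain. The domain list and threshold are hypothetical
# examples, not FBI-provided values.
from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"fbi.gov", "state.gov", "whitehouse.gov"}  # example list

def flag_lookalike(url: str, threshold: float = 0.8) -> str | None:
    """Return a warning string if the URL's domain is a near-match for a
    trusted domain (e.g. 'f6i.gov' vs 'fbi.gov'); otherwise return None."""
    domain = urlparse(url).netloc.lower().split(":")[0]
    if domain in TRUSTED_DOMAINS:
        return None  # exact match: the domain itself is not suspicious
    for trusted in TRUSTED_DOMAINS:
        similarity = SequenceMatcher(None, domain, trusted).ratio()
        if similarity >= threshold:
            return f"'{domain}' resembles '{trusted}' (similarity {similarity:.2f})"
    return None

print(flag_lookalike("https://f6i.gov/verify"))      # flagged as a lookalike
print(flag_lookalike("https://fbi.gov/contact-us"))  # None: exact trusted match
```

Production mail and SMS filtering tools apply far more sophisticated checks than this, such as homoglyph and punycode detection and sender-reputation scoring; the point here is simply that character-level differences too small for a hurried reader to notice are exactly what attackers rely on.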
For voice messages, the public is encouraged to listen closely to tone and word choice for subtle imperfections that might reveal AI generation, though the FBI acknowledges that “AI-generated content has advanced to the point that it is often difficult to identify.”
The bureau also recommends implementing protective measures including never sharing sensitive information with people met only online, not clicking on unverified links, using two-factor authentication, and verifying any unusual requests for money or information through previously established communication channels.
This warning comes amid rising concerns about deepfake technology. The FBI cautioned in December 2024 that criminals were increasingly using artificial intelligence to generate text, images, audio, and video for fraudulent purposes.
The announcement concluded, “When in doubt about the authenticity of someone wishing to communicate with you, contact your relevant security officials or the FBI for help.”