
For the last few months, scammers have been using AI-generated voice messages to dupe unsuspecting people into believing they have received a message from a government official. The apparent goal of the scam is to harvest private information, including login credentials and personal contacts. The campaign has become prevalent enough that the FBI has issued a public warning.
“Since April 2025, malicious actors have impersonated senior US officials to target individuals, many of whom are current or former senior US federal or state government officials and their contacts,” the FBI said in its advisory notice. “The malicious actors have sent text messages and AI-generated voice messages — techniques known as smishing and vishing, respectively — that claim to come from a senior US official in an effort to establish rapport before gaining access to personal accounts.”
The FBI warns that a common sign of the campaign is an attempt to get the target to transition to a separate messaging platform, where the malicious actor can then harvest the victim’s credentials or other personal information. The FBI has offered the following tips on how people can stay vigilant and spot a suspicious deepfake message:
- Verify the identity of the person calling you or sending text or voice messages. Before responding, research the originating number, organization, and/or person purporting to contact you. Then independently identify a phone number for the person and call to verify their authenticity.
- Carefully examine the email address; messaging contact information, including phone numbers; URLs; and spelling used in any correspondence or communications. Scammers often use slight differences to deceive you and gain your trust. For instance, actors can incorporate publicly available photographs in text messages, use minor alterations in names and contact information, or use AI-generated voices to masquerade as a known contact. (A short sketch after this list illustrates how small such alterations can be.)
- Look for subtle imperfections in images and videos, such as distorted hands or feet, unrealistic facial features, indistinct or irregular faces, unrealistic accessories such as glasses or jewelry, inaccurate shadows, watermarks, voice call lag time, voice matching, and unnatural movements.
- Listen closely to the tone and word choice to distinguish between a legitimate phone call or voice message from a known contact and AI-generated voice cloning, as they can sound nearly identical.
- AI-generated content has advanced to the point that it is often difficult to identify. When in doubt about the authenticity of someone wishing to communicate with you, contact your relevant security officials or the FBI for help.
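
To make the “slight differences” point above concrete, here is a minimal illustrative sketch, not a tool the FBI recommends: it compares an incoming sender address against a hypothetical list of known contacts and flags addresses that are almost, but not exactly, the same. All names, addresses, and the similarity threshold are assumptions chosen for the example.

```python
import difflib

# Hypothetical known-good contacts; the names, address, and number below
# are illustrative only and do not come from the FBI advisory.
KNOWN_CONTACTS = {
    "jane.doe@agency.example.gov": "Jane Doe",
    "+1-202-555-0100": "Jane Doe (office)",
}

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two strings."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def check_sender(sender: str, threshold: float = 0.85) -> str:
    """Flag senders that closely resemble, but do not exactly match, a known contact."""
    if sender in KNOWN_CONTACTS:
        return f"Exact match for known contact: {KNOWN_CONTACTS[sender]}"
    best = max(KNOWN_CONTACTS, key=lambda known: similarity(sender, known))
    score = similarity(sender, best)
    if score >= threshold:
        return (f"SUSPICIOUS: '{sender}' is {score:.0%} similar to known contact "
                f"'{best}' but is not an exact match -- verify independently")
    return f"Unknown sender: '{sender}' -- verify before responding"

# Example: a lookalike address with a single character changed ('o' -> '0')
print(check_sender("jane.d0e@agency.example.gov"))
```

Run on the example input, the sketch flags `jane.d0e@agency.example.gov` as suspicious because it differs from the stored address by one character, which is exactly the kind of minor alteration the FBI describes. Even with such a check, the advisory’s core guidance still applies: verify the sender independently through a known-good phone number before responding.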