FBI Alert of Malicious Campaign Impersonating U.S. Officials Points to the Urgent Need for Identity Verification

On May 15, 2025, the FBI issued a Public Service Announcement (PSA) alerting the public to an ongoing threat, with malicious actors “impersonating senior U.S. officials to target individuals, many of whom are current or former senior U.S. federal or state government officials and their contacts.”
The FBI PSA states that cybercriminals are sending texts and AI-generated voice messages appearing to come from a senior government official to establish trust with the victim before gaining access to personal accounts.
Specifically, it says: “One way the actors gain such access is by sending targeted individuals a malicious link under the guise of transitioning to a separate messaging platform. Access to personal or official accounts operated by U.S. officials could be used to target other government officials, or their associates and contacts, by using trusted contact information they obtain. Contact information acquired through social engineering schemes could also be used to impersonate contacts to elicit information or funds.”
The threat actors rely on two common tactics: smishing – the malicious targeting of individuals via Short Message Service (SMS) or Multimedia Message Service (MMS) text messages – and vishing – the malicious targeting of individuals via voice messages, which may incorporate AI-generated voices. AI-generated messages with fabricated audio or video content that appears genuine are known as deepfakes. Deepfakes are increasingly challenging to identify because they are built from authentic content belonging to the person being impersonated.
The dangerous rise of deepfakes according to Ponemon
As described in the recent Ponemon report, Faked to Perfection: The Rise of Deepfake Threats against Corporate Executives, a deepfake is an artificial image or video generated by a form of AI. Attackers typically collect authentic media samples – still images, videos and audio clips – of their target to use as training material for the deep learning model and create deceptive messages that appear to be authentic.
While the Ponemon report focuses on the escalation of deepfake threats targeting business leaders and executives, it demonstrates the growing risk of such threats and their potential impact on organizations and individuals. The latest scam reported by the FBI is a real-world example of how AI-generated malicious content can result in financial and reputational harm to anyone who is targeted, be it a government official, corporate executive, professional athlete, or anyone with an online presence (and potentially high net worth).
Safety and security with BlackCloak’s Identity Verification
The FBI offers a number of steps individuals can take to protect themselves, the first being “Verify the identity of the person calling you or sending text or voice messages. Before responding, research the originating number, organization, and/or person purporting to contact you. Then independently identify a phone number for the person and call to verify their authenticity.”
This is an area where BlackCloak can help. Just last month, we launched our new Identity Verification capability, which will be available to BlackCloak clients through our mobile application in June. This valuable tool relieves the recipient of that legwork, validating the authenticity of a suspicious message with the click of a button.
Members who receive a suspicious message – whether an email, text, voice memo, or phone call – can use the BlackCloak mobile app to verify its origin: the app automatically sends an out-of-band message to the purported sender, prompting that trusted contact to confirm or deny that the message came from them. Once the “sender” verifies or denies the message’s authenticity, the recipient can proceed accordingly, with peace of mind knowing their privacy, data, and company are protected from attack.
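The out-of-band pattern described above can be illustrated in a few lines of code. This is a minimal sketch of the general technique, not BlackCloak’s actual implementation; the class name, channel callback, and challenge format are all hypothetical. The key idea is that the confirmation request travels over a separate, previously trusted channel, so an impersonator who controls only the suspicious channel cannot answer it.

```python
import secrets

class OutOfBandVerifier:
    """Illustrative sketch: confirm a message's origin by challenging the
    purported sender over a separate, previously trusted channel."""

    def __init__(self, trusted_channel_send):
        # trusted_channel_send(contact, text) delivers the challenge over a
        # channel established with the real contact (hypothetical callback).
        self._send = trusted_channel_send
        self._pending = {}  # challenge_id -> purported sender

    def challenge(self, purported_sender, message_summary):
        """Ask the purported sender, out of band, whether they sent the message."""
        challenge_id = secrets.token_hex(8)  # unguessable challenge code
        self._pending[challenge_id] = purported_sender
        self._send(
            purported_sender,
            f"Did you just send: '{message_summary}'? "
            f"Reply with code {challenge_id} to confirm.",
        )
        return challenge_id

    def resolve(self, challenge_id, confirmed):
        """Record the contact's confirm/deny response for a pending challenge."""
        sender = self._pending.pop(challenge_id, None)
        if sender is None:
            raise KeyError("unknown or already-resolved challenge")
        return "verified" if confirmed else "likely impersonation"
```

In practice the trusted channel would be a push notification inside an authenticated app session rather than a plain callback, but the flow – challenge, out-of-band delivery, confirm or deny – is the same.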
Some messages may not seem suspicious on the surface, but there are ways to determine if a communication warrants further examination. Subtle imperfections in images and videos – such as distorted hands or feet, unnatural facial features, blurred or irregular faces, unrealistic accessories like glasses or jewelry, incorrect shadows, visible watermarks, voice delays, mismatched vocal tone, and awkward or unnatural movements – can all signal synthetic media manipulation.
This latest FBI-reported scam is just one example of how cybercriminals are using highly sophisticated methods to fool people into revealing sensitive information or transferring funds to a nefarious entity. Such attack methods will only broaden, and the range of targets they pursue will continue to expand. Tools like BlackCloak’s Identity Verification feature are a significant step toward blocking hackers’ success.
You can learn more about our Identity Verification capability here.