Resemble AI has revealed an alarming surge in AI social engineering, reporting over $200 million in financial losses from deepfake incidents within the first quarter of 2025 alone.
Furthermore, the Deepfake Incident Report states that 68% of the analyzed deepfakes were nearly indistinguishable from authentic media, while voice cloning has become so easy that just three to five seconds of audio is enough to reproduce someone's voice.
The most common deepfakes are video-based (46%), followed by image (32%) and audio (22%).
Malicious actors used AI primarily for five objectives:
The remaining 8% covers other harmful uses, such as harassment, bullying, and corporate sabotage.
The report also emphasizes that targets now extend well beyond high-profile individuals. Although politicians and celebrities remain the leading targets, over 30% of the attacks were aimed at ordinary people.
Deepfakes are most common in the U.S. (38%), but the threat is global, with incidents reported on every continent.
More importantly, over 60% of the incidents were cross-border, making them harder to detect and mitigate and complicating law enforcement. It also means international cooperation is needed to protect citizens from this threat.
Some policies and laws have been passed to respond to this new threat. However, Resemble AI calls for several actions to mitigate the risk of deepfakes, including:
As deepfake technology continues to evolve, a swift response is essential; otherwise, financial losses and the spread of misinformation are expected to climb even higher.
At Best Reviews, we're advocates for protecting your online privacy and security. We highly recommend investing in an identity theft protection service as a way to guard against deepfake-enabled fraud and keep yourself safe as we navigate these unknown and frightening waters.