Deepfake

The ability of a bad actor to impersonate both the visual likeness and voice of another individual has become a serious hurdle in the effort to prevent identity fraud.

The basic function of identity verification and authentication is to definitively determine that a person is who they say they are. But what if their voice, appearance, and mannerisms are fabricated? Synthetic identity and deepfake technology have made that possibility a reality. When listening to audio or watching video, you can no longer assume that the person you are hearing or seeing is, in fact, who they appear to be.

Deepfake technology employs sophisticated machine learning algorithms to build a highly realistic representation of an individual. Deepfake algorithms learn the details of how a person looks and sounds by ingesting preexisting audio and video clips. Initially, these clips were gathered manually, but more advanced technology has opened the door to automatic gathering by “crawling” the internet for representative material. Once the algorithm has built a functional model, that model can be overlaid onto a live feed or recording of another person, matching that person’s movements and speech patterns and allowing for an extremely realistic impersonation.

Deepfake technology first appeared in 1997, when it was used to manipulate a video so that a speaker’s mouth movements matched a replacement audio track. Fast forward to 2014, when the deepfakes being created were significantly more sophisticated but required hours of footage shot from multiple angles and in multiple lighting scenarios, an hour or more of audio (often with the speaker intonating specific phrases), and significant computing power to support the AI. In half that time, we have advanced to the AI-driven deepfakes we know today, and the development timeline keeps shrinking. Now, practically anyone can make a relatively believable deepfake. Six months from now, it will be even easier, and even more realistic.

While the nature of this technology is certainly impressive, the potential dangers and uncertainties surrounding its misuse cannot be ignored. From reputational damage due to public-facing manipulations of content to financial loss from individual-level fraud, the various scenarios are frightening. People, businesses, and even governments can be negatively affected by this technology as it falls into the hands of bad actors. What is certain is that human defenses against fraud have been significantly eroded by the influx of deepfakes. Businesses can no longer count on their traditional methods of detection—their workers—to flag manipulated content. They need to turn to technology for a solution.

Companies that specialize in identity assurance are now stepping in as the first line of defense against the misuse of deepfakes. Because many of these companies also leverage machine learning, they’ve been developing their own algorithms that can determine whether video or audio has been altered by deepfake technology. Daon has developed a family of solutions for this specific purpose, called AI.X. It includes a range of technologies, from presentation attack detection algorithms in our core IDV and authentication products to our stand-alone xSentinel product, which can determine whether a voice is human or synthetically generated. The patented algorithms that drive these tools not only distinguish live feeds from manipulated ones but also learn continuously from the best data possible. As fraudsters find ways to make deepfakes more realistic, the AI.X family of solutions adjusts to recognize newer, more advanced fraud tactics.
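
To make the general idea concrete, here is a minimal, hypothetical Python sketch of how frame-level scoring of a video could work: extract simple features from each frame, score them with a classifier, and aggregate the scores into a single decision. The feature extractor, classifier, and threshold below are illustrative placeholders only; they are not Daon’s AI.X algorithms or APIs.

    # Hypothetical sketch of frame-level deepfake scoring.
    # All features, models, and thresholds are placeholders for illustration.
    import numpy as np

    def extract_frame_features(frame):
        # Placeholder features: real detectors use learned cues such as
        # blending artifacts, texture statistics, and temporal inconsistencies.
        return np.array([frame.mean(), frame.std()])

    def score_frame(features):
        # Placeholder classifier standing in for a trained model.
        # Returns a score near 1.0 for "likely manipulated".
        return float(1.0 / (1.0 + np.exp(-(features[1] - 50.0) / 10.0)))

    def score_video(frames, threshold=0.7):
        # Aggregate per-frame scores; flag the video if the mean exceeds the threshold.
        scores = [score_frame(extract_frame_features(f)) for f in frames]
        return float(np.mean(scores)) >= threshold

    # Usage with stand-in frames (random 8-bit images in place of a real video feed):
    frames = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(30)]
    print("Flagged as likely manipulated:", score_video(frames))

A production detector would replace the placeholder feature extraction and scoring with trained models and would typically combine several independent checks rather than rely on a single aggregate score.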


Daon employs a number of tools to defend against the misuse of deepfake technology, including xAuth, xFace, xProof, xVoice, and xSentinel.
xAuth provides tools for multi-factor authentication, ensuring a business isn’t relying on a single point of authentication.

Learn About xAuth

xFace uses multiple forms of anti-spoofing to prevent deepfake technology from being used to spoof face biometrics.

Learn About xFace

xProof uses multiple forms of anti-spoofing and ID matching to prevent onboarding with a deepfake.

Learn About xProof

xVoice uses multiple forms of anti-spoofing to prevent authentication using recorded or altered voices.

Learn About xVoice

xSentinel analyzes incoming audio on any voice channel to determine if it is human or synthetically generated.

Learn About xSentinel
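
As a rough illustration of the human-versus-synthetic concept described above (not xSentinel’s actual method or API), the following Python sketch scores an audio buffer with a placeholder spectral feature and a placeholder threshold; a real system would rely on trained models over many such cues.

    # Hypothetical sketch of scoring an audio buffer as human vs. synthetic.
    # The feature, threshold, and decision rule are placeholders for illustration.
    import numpy as np

    def spectral_flatness(samples, eps=1e-10):
        # Placeholder feature: ratio of the geometric to the arithmetic mean
        # of the power spectrum.
        power = np.abs(np.fft.rfft(samples)) ** 2 + eps
        return float(np.exp(np.mean(np.log(power))) / np.mean(power))

    def looks_synthetic(samples, threshold=0.5):
        # Placeholder decision: a trained classifier would replace this heuristic.
        return spectral_flatness(samples) > threshold

    # Usage with a stand-in signal (random noise in place of a real voice channel):
    audio = np.random.randn(16000)  # one second at a 16 kHz sample rate
    print("Flagged as possibly synthetic:", looks_synthetic(audio))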

Frequently Asked Questions

Are my individual customers at risk of deepfake attacks?
Currently, probably not, but in the future it’s very likely. Today’s deepfake attacks are not scalable to the point of targeting random individuals, but the technology to create that kind of scenario already exists. It’s important to stay one step ahead so that your customers are protected when the technology reaches that level of sophistication and accessibility.

Doesn’t existing liveness detection stop deepfakes?
For the most part, yes, but no technology is 100% certain. Just as with authentication, it’s important to employ multiple checks to make sure nothing slips through the cracks.

There is no audio or video of me on the Internet, so I’m safe, right?
Are you sure? If so, then yes. Deepfakes cannot be made without source material. With the proliferation of smartphones, however, it’s nearly impossible to say that you’ve never been recorded. Also, the amount of audio/video required to create a deepfake is getting smaller and smaller.

Have more questions about deepfakes?

Connect with one of our technology experts