Liveness detection protects against presentation attacks and is a key technology behind secure identity proofing and authentication via facial and voice biometrics.
Today’s liveness detection is powered by AI-based algorithms trained to distinguish the face or voice of a real human from a presentation attack. A presentation attack is when a fraudster uses masks, photos, videos, or voice recordings, combined with increasingly sophisticated technology, to pass themselves off as a genuine person with a “true” identity in order to commit identity fraud.
There are two types of liveness detection: active and passive. Active liveness detection prompts the user to perform a simple action during the verification process. Typically, while in the native app of the organization they are attempting to onboard with, the user is asked to hold up and/or move their identity document in front of their smartphone camera. This takes mere seconds, and liveness detection technology is now so sophisticated that, in most cases, the user does not even need to record a video. In that short timeframe, AI-based algorithms track the user’s pupil movement as they glance at their identity document, lift it with their hand, and look at the camera. The AI analyzes key points on the user’s face to determine whether they are a genuinely “live” person or a bot/fraudster using a silicone mask or a pre-recorded photo or video to attempt to onboard fraudulently. Active liveness is also used in conjunction with voice biometrics: a caller to an organization’s contact center may be asked to say, and then repeat (typically 2-3 times), a particular phrase, such as “My name is Mary, and this is my voice – authenticate me,” in order to set up the voice print that will be used to authenticate them in the future. During future interactions with the organization, the user simply calls in and repeats the phrase, and AI-based algorithms trained to determine whether a voice belongs to a real person or is merely a recording authenticate the user as genuine.
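To make the active flow concrete, here is a minimal sketch of the challenge-response idea behind it. This is not any vendor's actual implementation; the challenge list, class name, and timeout are all hypothetical. The point it illustrates is that because the prompt is chosen unpredictably at session time and must be answered quickly, a pre-recorded photo or video of the user cannot satisfy it.

```python
import secrets
import time

# Hypothetical challenge prompts -- real systems use camera-verified
# actions (head turns, document movement), not string matching.
CHALLENGES = ["blink twice", "turn head left", "hold up your ID document"]

class ActiveLivenessSession:
    """Issues a random challenge and verifies that the user's response
    matches it and arrives within a short window. A replayed recording
    fails because the challenge cannot be predicted in advance."""

    def __init__(self, timeout_seconds: float = 10.0):
        self.timeout = timeout_seconds
        self.challenge = secrets.choice(CHALLENGES)   # unpredictable prompt
        self.issued_at = time.monotonic()

    def verify(self, performed_action: str) -> bool:
        within_time = (time.monotonic() - self.issued_at) <= self.timeout
        return within_time and performed_action == self.challenge

session = ActiveLivenessSession()
print(session.challenge)                   # the prompt shown to the user
print(session.verify(session.challenge))   # True when performed in time
print(session.verify("replayed old clip")) # False: wrong response
```

In a production system the "performed action" would be confirmed by computer vision or voice analysis rather than a string comparison, but the session structure, random challenge and time limit, is the same.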
Passive liveness detection operates in the background of the face or voice capture process during identity proofing or authentication. It gives the user no indication that a liveness check is being performed, offering less friction and a quicker capture process. Unlike active liveness, which asks the user to complete an action (like moving their head or saying a phrase), passive liveness analyzes the content of a user’s facial or voice biometric input via AI neural networks that assess elements like shadows, colors, audio artifacts, and textures of the user’s skin, or the pitch, tone, and cadence of the user’s voice, respectively. Passive liveness detection is often a quicker process than active liveness detection, but both are extremely useful in protecting users and organizations from fraudsters and fraudulent account activity or onboarding.
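As a loose illustration of the texture cues mentioned above, the toy heuristic below scores local pixel variation in a grayscale face crop: live skin tends to show micro-texture, while a flat printed photo or screen replay tends toward uniform regions. Real passive liveness uses trained neural networks over many signals (shadows, color, moiré and compression artifacts), not a hand-written rule; the function names and the threshold here are invented for the example.

```python
# Toy sketch only: real passive liveness detection is learned, not a
# single hand-tuned rule. This heuristic approximates "texture" as the
# mean absolute difference between each pixel and its right and down
# neighbours, then compares it to a made-up threshold.

def texture_score(gray):
    """gray: 2D list of 0-255 grayscale values (a cropped face region)."""
    total, count = 0, 0
    for y in range(len(gray) - 1):
        for x in range(len(gray[0]) - 1):
            total += abs(gray[y][x] - gray[y][x + 1])
            total += abs(gray[y][x] - gray[y + 1][x])
            count += 2
    return total / count

def looks_live(gray, threshold=5.0):
    return texture_score(gray) >= threshold

flat_replay = [[128] * 8 for _ in range(8)]   # uniform patch: no texture
textured = [[(x * 37 + y * 91) % 256 for x in range(8)] for y in range(8)]
print(looks_live(flat_replay))   # False: score is 0.0
print(looks_live(textured))      # True: high local variation
```

A neural network effectively learns thousands of such discriminative cues from labeled genuine and spoofed captures, which is why it generalizes to attacks a fixed rule would miss.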
Liveness detection is a critical component of secure and user-friendly identity proofing and authentication processes. We use multiple methods of enhanced liveness detection in our xFace, xAuth, xProof, and xVoice products to help secure organizations and their users against fraudsters. When combined with some of our other technologies, like facial biometrics, liveness detection leads to safe and powerful onboarding and authentication.
Frequently Asked Questions
Why is liveness detection important?
Liveness detection is one of several anti-spoofing processes that are used to prevent fraudsters from trying to trick a biometric system into providing fraudulent access, typically by using tactics like video or audio recordings, photocopies or other non-original documents, photographs, and even live people wearing masks.
Which type of liveness detection is better: passive or active?
They both have their limitations and advantages. Because it is more obvious to the user, active liveness detection is easier for fraudsters to attempt to spoof, while passive liveness detection can be affected by variables in the environment where the image or voice is captured. That’s why Daon employs a number of anti-spoofing techniques, simultaneously, to maximize the protection of our system.
If someone physically can’t complete an active liveness step, can they still use identity proofing and authentication?
Yes. This is another reason we don’t rely on just one type of anti-spoofing technology. Our systems leverage AI to analyze multiple factors in order to determine if the image or voice being captured is live and unaltered.