Legal aspects of deploying voice biometrics and other speech technologies in connection with GDPR enforcement

Many businesses feel the need to strike the right balance between a reasonably high level of security in accessing customer data, and positive customer experience – an easy, hassle-free access to data and services, which at the same time is perceived as sufficiently secure by customers themselves. Using voice biometrics and other speech technologies is often regarded as the most secure and economically attractive probabilistic authentication procedure, which can also be easily combined with deterministic authentication methods. Crucially, voice biometrics is also the only biometric authentication method that can be used remotely.

Certain standards are prescribed by law in different jurisdictions, alongside international regulations and important guidelines and rules to be followed, all of which may vary from country to country.

In May 2018, the EU General Data Protection Regulation (GDPR), which covers voice biometric data (e.g. voiceprints linked to the names of EU citizens) and identity verification processes, will come into force. Considering how little time is left until enforcement, it may be helpful to run through at least the main questions that my colleagues and I hear from clients and partners. Spitch is ready to share the answers that we normally give by way of general guidance. Please contact us to request a more detailed white paper1.

For many of our clients, the legal aspects of using voice biometrics are closely intertwined not only with the issues around personal data protection, but with data security in general, and especially with multi-factor authentication. It is important to remember that voice biometric solutions deliver a greater level of security than traditional verification procedures based on security questions (“something you know” only). The human voice is also much harder to imitate perfectly (so that a biometric system fails to detect the difference) than a fingerprint, for example. In live conversations with call centre agents, this is a virtually impossible task for any presently existing software2. Besides, voice biometrics is easy to combine with any other authentication factor in multi-factor solutions, e.g. with a knowledge-based password, randomly generated numeric codes or phrases, etc., to prevent pre-recorded spoofing attacks.
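The combination of a probabilistic voice score with a randomly generated one-time code can be sketched in a few lines. Everything below — the function names, the 0.8 score threshold, the six-digit code format — is an illustrative assumption for the sketch, not a description of any specific product:

```python
import hmac
import secrets

def issue_challenge_code():
    # A freshly generated numeric code the caller must speak aloud;
    # this defeats pre-recorded spoofing, since the phrase cannot be
    # known (and recorded) in advance.
    return f"{secrets.randbelow(1_000_000):06d}"

def multi_factor_verify(voice_score, spoken_code, expected_code,
                        voice_threshold=0.8):
    # Factor 1 (inherence): probabilistic voice-biometric score.
    voice_ok = voice_score >= voice_threshold
    # Factor 2 (knowledge/challenge): the one-time code, compared in
    # constant time so the check leaks no timing information.
    code_ok = hmac.compare_digest(spoken_code, expected_code)
    return voice_ok and code_ok

code = issue_challenge_code()
print(multi_factor_verify(0.93, code, code))  # True: both factors pass
print(multi_factor_verify(0.40, code, code))  # False: voice score too low
```

An attacker replaying a recording would fail the challenge code, while an impostor reading the code aloud would fail the biometric score — each factor covers the other's weakness.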

Creating a voiceprint means using the voice characteristics of a person to build a mathematical model of the voice (the voiceprint). This mathematical model can only be used by a specific biometric identification/verification solution to compare a few seconds of one’s live voice (instantaneously converted into a mathematical model) with the model stored in the system’s database of voiceprints. It therefore makes no sense to attack a system in order to obtain voiceprints for fraudulent verifications, as it is the live voice that the system requires for a successful verification.
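The comparison described above can be sketched as a similarity check between fixed-length feature vectors. The tiny four-element vectors, the cosine-similarity measure, and the 0.8 threshold below are illustrative assumptions only; real systems derive far more parameters from the voice:

```python
import math

def cosine_similarity(a, b):
    # Compare two fixed-length voice feature vectors (voiceprints).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_speaker(live_sample_vector, enrolled_voiceprint, threshold=0.8):
    # Probabilistic decision: accept only if the model built from the
    # live sample is close enough to the enrolled mathematical model.
    return cosine_similarity(live_sample_vector, enrolled_voiceprint) >= threshold

# Illustrative vectors only (hypothetical values).
enrolled   = [0.12, 0.85, 0.33, 0.47]
live_match = [0.10, 0.88, 0.30, 0.45]
impostor   = [0.90, 0.05, 0.70, 0.10]

print(verify_speaker(live_match, enrolled))  # True
print(verify_speaker(impostor, enrolled))    # False
```

Note that the stored voiceprint is useful only as one side of this comparison: possessing it does not let an attacker synthesise the live sample the system demands.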

Biometric data, including voiceprints linked to the names of EU citizens, will be treated from May 2018 onwards as sensitive personal data under the GDPR. Any company that stores or processes voice biometric data of EU citizens must therefore comply with the GDPR, even if it has no business presence within the EU. Furthermore, personal data, including voiceprints, must be erased immediately upon the request of an individual. Specialists at Spitch ensure that compliance processes under the GDPR and national regulations are automated and can easily be reviewed and revised should the regulations change.

The fundamental fact is that companies using voice biometrics and handling such data properly will eventually find themselves among those less affected by the GDPR, with less significant cost implications. Organizations will also have very strong incentives under the GDPR to employ data pseudonymisation techniques (e.g. hashing, encryption, etc.) to mitigate their compliance obligations and manage their risks. Pseudonymous data is still considered a type of personal data and is therefore subject to the requirements of the GDPR, but organizations that pseudonymise their data will benefit from relaxations of certain provisions of the GDPR, in particular with respect to data breach notification requirements.
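Pseudonymisation by hashing can be sketched as follows: the direct identifier is replaced with a salted hash, and the salt — the “additional information” that GDPR Article 4(5) requires to be kept separately — is stored under its own access controls. The function names and record layout here are hypothetical:

```python
import hashlib
import secrets

def pseudonymise(customer_id: str, salt: bytes) -> str:
    # Replace the direct identifier with a salted SHA-256 hash.
    # The salt must be stored separately from the pseudonymised records
    # and be access-controlled, or the pseudonymisation is reversible
    # by anyone holding the data set.
    return hashlib.sha256(salt + customer_id.encode()).hexdigest()

# Kept apart from the records themselves (e.g. in a separate key store).
salt = secrets.token_bytes(16)

record = {
    "customer": pseudonymise("alice@example.com", salt),  # hypothetical ID
    "voiceprint_ref": "vp-0001",
}

# The same identifier always maps to the same pseudonym under one salt,
# so records remain linkable for legitimate processing...
assert record["customer"] == pseudonymise("alice@example.com", salt)
# ...but the pseudonym alone does not reveal the identity.
print(len(record["customer"]))  # 64 hex characters
```

Encryption of the identifier (with the key held separately) achieves the same effect reversibly, which is useful when re-identification is occasionally required.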

It is also critical to be aware that in some banking use cases, e.g. voice-operated mobile banking apps powered by Siri, it is either difficult or impossible to ensure that customers’ personal data, including sensitive personal data, will not be transferred across borders, leading to a potential breach of the GDPR and other data privacy regulations, irrespective of the destination country. The “data controller” (the company) retains full responsibility for the data processing even if some parts of the processing are outsourced to a third party (the “data processor”), e.g. a US firm. It is advisable, therefore, to ensure that the company retains tight control over its customer data and processes it itself, or under close supervision, at least inside the country of origin.

Our specialists at Spitch will be happy to consult on any further questions and issues related to voice biometrics and personal data protection, including how to deploy solutions that not only allow businesses to ensure full compliance with the changing regulatory environment, but also to do so in the most cost-effective way.

1 – It should be noted that there is no alternative to seeking professional legal advice on concrete implementations to ensure full compliance with national and international legislation, including GDPR.

2 – Professional artists can imitate another person’s voice, but not all of its key individual characteristics (over 100 parameters). Biometric systems, therefore, easily detect the differences.