Biometric technologies capture and store the physiological and behavioural characteristics of individuals. These characteristics may include voice and facial identifiers, iris patterns, DNA profiles and fingerprints. When stored in a database, these characteristics can be linked to individuals for later identification and verification. When adopted in the absence of strong legal frameworks and strict safeguards, biometric technologies pose grave threats to privacy and personal security: their application can be broadened to facilitate discrimination, social sorting and mass surveillance, and the varying accuracy of the technology can lead to misidentification, fraud and civic exclusion. It is therefore crucial that the export of biometric technologies is regulated and their use scrutinised.

An individual’s voice is determined by learned speech patterns and by anatomy, such as the size and shape of the vocal cords and throat. When an individual’s voice, a highly distinctive sound, is recorded, its frequency pattern and spectrum can be used to generate a voiceprint: a voice profile linked to their identity. A voiceprint can be used for either speaker identification or speaker verification. Speaker verification uses a 1:1 comparison: the voice of the individual speaking is compared to the pre-recorded voiceprint of the individual they claim to be. Voiceprints collected for speaker verification are stored in databases for later comparison.
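The 1:1 verification step described above can be sketched as follows. This is a minimal illustration, assuming a voiceprint can be reduced to a unit-normalised feature vector and compared by cosine similarity; the function names, feature values and threshold are all hypothetical, not drawn from any real system.

```python
import numpy as np

def make_voiceprint(frequency_samples):
    """Reduce a sequence of frequency measurements to a fixed-length,
    unit-normalised feature vector (a stand-in for a real voiceprint)."""
    v = np.asarray(frequency_samples, dtype=float)
    return v / np.linalg.norm(v)

def verify_speaker(live_sample, enrolled_voiceprint, threshold=0.95):
    """1:1 comparison: accept the claimed identity only if the live
    voiceprint is similar enough to the enrolled one."""
    similarity = float(np.dot(make_voiceprint(live_sample), enrolled_voiceprint))
    return similarity >= threshold

# Enrolment: the voiceprint recorded when the individual registered.
enrolled = make_voiceprint([220.0, 440.0, 880.0, 1760.0])

print(verify_speaker([221.0, 438.0, 882.0, 1755.0], enrolled))  # close match -> True
print(verify_speaker([300.0, 300.0, 300.0, 300.0], enrolled))   # different voice -> False
```

Real systems extract far richer features (e.g. spectral coefficients) and learn the threshold statistically, but the structure, one stored voiceprint compared against one live sample, is the same.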

An audio surveillance, VoIP or phone monitoring technology that picks up just a few seconds of speech can transmit the audio to a speaker identification system. Speaker identification uses a 1:N comparison: a captured voice is compared against a database of voiceprints, often those collected under the guise of speaker verification, until a match is found. The effectiveness of speaker identification depends on the number of voiceprints available for comparison, which often motivates the widespread collection of voiceprints and leads to their long-term storage. Speaker identification technologies employ statistical methods to quickly find the corresponding voiceprint and identity. Voice analysis methods are becoming increasingly sophisticated: they can now isolate individual voices in noisy environments and use speech analysis to predict the language, gender and even stress levels of individuals. Voice analysis technologies, particularly those focused on behavioural analysis, are prone to error. When measuring supposed truthfulness or stress levels, these technologies have been shown to generate false positives and false negatives, with accuracy no better than chance. It is imperative that the limitations of these technologies are fully understood before implementation and that their results are not treated as absolute.
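The 1:N search differs from verification only in that the probe is scored against every stored voiceprint rather than a single claimed one. A minimal sketch, under the same illustrative assumptions as before (unit-normalised feature vectors, cosine similarity, hypothetical names and values):

```python
import numpy as np

def make_voiceprint(frequency_samples):
    """Unit-normalised feature vector standing in for a real voiceprint."""
    v = np.asarray(frequency_samples, dtype=float)
    return v / np.linalg.norm(v)

def identify_speaker(live_sample, voiceprint_db, threshold=0.95):
    """1:N comparison: score the captured voice against every stored
    voiceprint and return the best-scoring identity that clears the
    threshold, or None if no voiceprint is close enough."""
    probe = make_voiceprint(live_sample)
    best_id, best_score = None, threshold
    for identity, stored in voiceprint_db.items():
        score = float(np.dot(probe, stored))
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id

# A toy database; in practice this is why large-scale collection matters.
voiceprint_db = {
    "speaker_a": make_voiceprint([220.0, 440.0, 880.0, 1760.0]),
    "speaker_b": make_voiceprint([500.0, 100.0, 900.0, 200.0]),
}

print(identify_speaker([219.0, 441.0, 878.0, 1762.0], voiceprint_db))  # -> speaker_a
print(identify_speaker([300.0, 300.0, 300.0, 300.0], voiceprint_db))   # -> None
```

Note how the system's reach grows with the database: every additional enrolled voiceprint makes one more person identifiable from a few seconds of captured audio.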

Facial recognition technologies use distinctive facial properties to identify an individual. Any video or picture of a face, acquired from video surveillance or social media analysis, can be sent to a facial recognition system for processing and storage in a database. Once received, the technology finds a match by employing powerful computational methods that extract key facial data from the image or video. Feature-based approaches map distinctive facial features (eyes, nose, forehead, chin, lips, face shape) and compute their geometric relationships (relative distances, ratios, angles). These relationships are encoded as vectors, which can be visualised as arrows that have both magnitude and direction, and stored in a database. Any subsequent set of vectors that matches a stored set to a predetermined accuracy level returns a positive match.
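The feature-based approach can be illustrated with a toy example. This is a sketch only, assuming facial features have already been located as landmark coordinates; the landmark choices, normalisation by inter-eye distance and tolerance value are illustrative assumptions, not any particular vendor's method.

```python
import numpy as np

def face_vector(landmarks):
    """Turn landmark coordinates (left eye, right eye, nose, chin, ...)
    into a vector of pairwise distances, normalised by the inter-eye
    distance so that image scale does not matter."""
    pts = np.asarray(landmarks, dtype=float)
    eye_dist = np.linalg.norm(pts[0] - pts[1])
    dists = [np.linalg.norm(pts[i] - pts[j])
             for i in range(len(pts)) for j in range(i + 1, len(pts))]
    return np.asarray(dists) / eye_dist

def faces_match(vec_a, vec_b, tolerance=0.05):
    """Positive match if every geometric ratio agrees within tolerance,
    i.e. the vectors match to a predetermined accuracy level."""
    return bool(np.max(np.abs(vec_a - vec_b)) <= tolerance)

enrolled = face_vector([(0, 0), (6, 0), (3, 4), (3, 9)])
# Same face photographed at twice the size: ratios are unchanged.
same_face_scaled = face_vector([(0, 0), (12, 0), (6, 8), (6, 18)])
# A face with different geometry (nose and chin positioned differently).
other_face = face_vector([(0, 0), (6, 0), (3, 2), (3, 7)])

print(faces_match(enrolled, same_face_scaled))  # -> True
print(faces_match(enrolled, other_face))        # -> False
```

Normalising by one reference distance is what makes the comparison about relative geometry rather than absolute pixel measurements.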

Other facial recognition technologies use statistical methods such as principal component analysis (PCA) to increase computational speed. Multiple facial images are compiled and used to generate a composite facial portrait. The components of these composite images, the basic features that make up the average human face, are called eigenfaces. When a face is captured in a picture or video, the stored eigenfaces can be combined with appropriate weights to reconstruct the captured face; a match is found if this can be done to a certain level of accuracy. The same method can be used to detect faces in crowded images: any object that can be reconstructed sufficiently well from the eigenfaces is likely to be a face. Facial recognition systems deployed on CCTV networks are capable of detecting faces, tracking them and recognising individual faces. The speed and accuracy of these computations are improving rapidly: pattern recognition, machine learning, skin texture analysis and technologies such as 3D cameras that are not susceptible to variations in light are being developed and deployed. Companies often cite these advances to claim that the technology is an absolute indicator of identity, and the technology is frequently implemented as though it were. Yet facial recognition, like other biometric indicators, remains fallible, and all limitations must be identified prior to use.
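The eigenface idea, deriving principal components from a gallery and judging a new image by how well those components can reconstruct it, can be sketched in a few lines. The "images" here are random stand-ins for a real face gallery, and the component count is arbitrary; this only illustrates the mechanism, not a production pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny stand-in for a face gallery: each row is a flattened 4x4 "image".
gallery = rng.normal(size=(20, 16))

# Eigenfaces: principal components of the mean-centred gallery.
mean_face = gallery.mean(axis=0)
centred = gallery - mean_face
# SVD yields the principal directions without forming a covariance matrix.
_, _, components = np.linalg.svd(centred, full_matrices=False)
eigenfaces = components[:5]  # keep only the top few eigenfaces

def reconstruction_error(image):
    """Combine the eigenfaces with appropriate weights to approximate the
    image; low residual error means the image lies close to 'face space'."""
    x = image - mean_face
    weights = eigenfaces @ x          # the 'appropriate weights'
    approx = eigenfaces.T @ weights   # reconstruction from eigenfaces
    return float(np.linalg.norm(x - approx))

known = gallery[0]                    # an image from the gallery
noise = rng.normal(size=16) * 10.0    # something that is not a face

# A gallery image reconstructs far better than unrelated input,
# which is exactly the test used for detection.
print(reconstruction_error(known) < reconstruction_error(noise))  # -> True
```

Thresholding this reconstruction error is what turns the method into a detector: inputs that the eigenfaces cannot reassemble are rejected as non-faces.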

Biometric databases compile and link multiple biometric identifiers. Although some databases serve legitimate purposes, there are many risks in storing the very information of which an individual’s identity is partly composed. The misappropriation of this information can deny individuals their identity and lead to limits on personal freedom. In many countries strong data protection infrastructure does not exist, and as a result deeply personal information has been repeatedly leaked. Biometric data retention laws often fail to specify a maximum storage period, further increasing the risk of database leaks and introducing new dangers. Perhaps the greatest of these is scope creep: seemingly benign biometric data stored in databases can later pose significant threats to civil liberties. Images stored by facial recognition systems, for instance, can be used to classify individuals by race, an application that raises serious concerns about discrimination, particularly in environments prone to social sorting.