Bio-Metrics is an easy-to-use software solution for calculating and visualising the performance of speaker (and other biometric) recognition systems or algorithms.
Quickly calculate error metrics and visualise performance with DET, Tippett, and Zoo plots.
Calculate likelihood ratios (LRs) in a graphical and interactive way.
Export results and graphics to Microsoft Word, PowerPoint, or other software for easy inclusion in reports or scientific papers.
Runs on the Microsoft Windows platform.
The data browser displays the data loaded for analysis. Set the wildcard to discriminate between matches and non-matches (i.e. comparisons of the same and different individuals) based on the filenames, and see the changes reflected immediately.
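The exact wildcard syntax used by Bio-Metrics is not shown here, but the underlying idea can be sketched in Python. This assumes a hypothetical file-naming convention of the form "speakerID_session.wav", where the portion of the filename before the underscore identifies the individual:

```python
# Minimal sketch, assuming filenames like "spk01_a.wav" (hypothetical convention):
def is_same_source(file_a: str, file_b: str) -> bool:
    # A comparison is a match when the speaker prefix of both filenames agrees.
    return file_a.split("_")[0] == file_b.split("_")[0]

pairs = [
    ("spk01_a.wav", "spk01_b.wav"),  # same speaker      -> match
    ("spk01_a.wav", "spk02_b.wav"),  # different speakers -> non-match
]
labels = [is_same_source(a, b) for a, b in pairs]
```

Changing the rule that extracts the speaker label (the role the wildcard plays in the data browser) immediately re-partitions the loaded scores into matches and non-matches.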
The Scatter plot provides a quick visualisation of the individual data points loaded for analysis; this is helpful for observing general trends and for spotting any erroneous data points.
Equal Error Graph
The Equal Error Graph plots the false acceptance rate (or false match rate) and false rejection rate (or false non-match rate) on the vertical axis against the score threshold on the horizontal axis. The point of intersection of these two curves (when the false acceptance rate is equal to the false rejection rate) is the Equal Error Rate (EER).
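The quantities behind this graph can be computed directly from the two sets of scores. The following sketch uses synthetic Gaussian scores (an assumption for illustration only, not Bio-Metrics' implementation) and estimates the EER as the crossing point of the two curves:

```python
import numpy as np

def far_frr(genuine, impostor, thresholds):
    """False acceptance and false rejection rates at each score threshold.

    genuine:  scores from same-individual comparisons
    impostor: scores from different-individual comparisons
    """
    genuine = np.asarray(genuine)
    impostor = np.asarray(impostor)
    far = np.array([(impostor >= t).mean() for t in thresholds])  # impostors accepted
    frr = np.array([(genuine < t).mean() for t in thresholds])    # genuines rejected
    return far, frr

rng = np.random.default_rng(0)
genuine = rng.normal(2.0, 1.0, 5000)    # toy same-source scores
impostor = rng.normal(-2.0, 1.0, 5000)  # toy different-source scores

thresholds = np.linspace(-6.0, 6.0, 1201)
far, frr = far_frr(genuine, impostor, thresholds)

# The EER is where the two curves intersect (FAR == FRR).
eer_idx = int(np.argmin(np.abs(far - frr)))
eer = (far[eer_idx] + frr[eer_idx]) / 2
```

With well-separated toy distributions like these, the estimated EER is small; on real biometric scores the same computation applies unchanged.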
Detection Error Trade-off (DET) plot
The DET plot shows the false acceptance rate plotted against the false rejection rate for a range of score thresholds, with linear or logarithmic axis scaling. The EER is also estimated and displayed on the plot.
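A DET plot differs from an ROC-style plot mainly in its axis scaling: both error rates are mapped through the inverse of the standard normal CDF (the probit transform), which renders systems with roughly Gaussian score distributions as near-straight lines. A minimal sketch of that coordinate transform, assuming SciPy is available:

```python
import numpy as np
from scipy.stats import norm

def det_coordinates(far, frr):
    # DET plots place FAR and FRR on normal-deviate (probit) axes.
    # Clip to avoid infinities at exactly 0 or 1.
    far = np.clip(np.asarray(far, dtype=float), 1e-6, 1 - 1e-6)
    frr = np.clip(np.asarray(frr, dtype=float), 1e-6, 1 - 1e-6)
    return norm.ppf(far), norm.ppf(frr)

x, y = det_coordinates([0.01, 0.1, 0.5], [0.5, 0.1, 0.01])
```

Plotting `x` against `y` for a sweep of thresholds gives the DET curve; the point where FAR equals FRR on that curve is the EER.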
Likelihood Ratio (LR) plot
The LR plot estimates the probability density functions (PDFs) of scores resulting from the comparison of features from the same individual and from the comparison of features from different individuals. Selecting a score value on the graph calculates the likelihood ratio of the score under the same-source and different-source hypotheses. Note that if the provided scores result from comparisons of features from a single individual, an explicit likelihood ratio based on the within-source and between-source variability can be calculated.
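The density-based LR described above can be sketched with a kernel density estimate over each score set. This is a generic KDE approach using SciPy and synthetic scores, not necessarily the density estimator Bio-Metrics itself uses:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
same = rng.normal(2.0, 1.0, 2000)   # toy same-source scores
diff = rng.normal(-2.0, 1.0, 2000)  # toy different-source scores

# Estimate the two score PDFs with Gaussian kernel density estimation.
pdf_same = gaussian_kde(same)
pdf_diff = gaussian_kde(diff)

def likelihood_ratio(score: float) -> float:
    # LR = p(score | same source) / p(score | different sources)
    return float(pdf_same(score)[0] / pdf_diff(score)[0])

lr_high = likelihood_ratio(2.0)   # deep inside the same-source distribution
lr_low = likelihood_ratio(-2.0)   # deep inside the different-source distribution
```

An LR above 1 supports the same-source hypothesis and an LR below 1 supports the different-source hypothesis, with magnitude indicating strength of support.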
Zoo plot
The Zoo plot shows the performance of individuals, or groups of individuals (e.g. males and females), within the biometric recognition system. The Zoo plot in Bio-Metrics (Alexander et al., 2014) is based on the Zoo plot analysis developed by Yager and Dunstone (2011), which is an extension of George Doddington’s biometric classification (1998).
Receiver Operating Characteristic (ROC) plot
The ROC plot shows the true acceptance rate (TAR) on the vertical axis plotted against the false acceptance rate (FAR) on the horizontal axis, for a range of score thresholds. Either linear or logarithmic axis scaling can be selected. The TAR is the proportion of genuine matches correctly accepted at a given threshold (i.e. the TAR is equal to 100% minus the false rejection rate).
Tippett plot
The Tippett plot is a cumulative distribution plot expressing the proportion of likelihood ratios (LRs) greater than a given value, plotted separately for cases where the H0 hypothesis holds (the biometric samples are from the same source) and where the H1 hypothesis holds (the biometric samples are from different sources). The separation between the two curves indicates the performance of the system or algorithm: the larger the separation, the better the performance.
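Given a set of log-LRs for each hypothesis, the Tippett curves are simple empirical survival functions. A minimal sketch with made-up log10 LR values:

```python
import numpy as np

def tippett_curve(log_lrs, grid):
    # Proportion of LRs greater than each grid value (one curve per hypothesis).
    log_lrs = np.asarray(log_lrs, dtype=float)
    return np.array([(log_lrs > g).mean() for g in grid])

same_llrs = np.array([1.2, 2.5, 0.3, 3.1, -0.4])     # toy log10 LRs, same-source cases
diff_llrs = np.array([-2.0, -1.1, 0.2, -3.3, -0.7])  # toy log10 LRs, different-source cases

grid = np.linspace(-4.0, 4.0, 9)
curve_same = tippett_curve(same_llrs, grid)
curve_diff = tippett_curve(diff_llrs, grid)
```

Plotting both curves against the grid reproduces the Tippett plot; the horizontal gap between them at any proportion reflects the system's discrimination.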
In order to directly interpret and compare biometric recognition scores from different systems (or from the same system under different conditions), it is necessary to calibrate the scores. Score calibration transforms the scores, bringing them into a comparable numerical range, where positive scores generally indicate a match, and negative scores generally indicate a non-match.
Bio-Metrics provides a powerful score calibration capability based on logistic regression which can be applied in two ways:
- A calibration function is learned from one data series and applied to another data series.
- Calibration is learned from and applied to the same data series using a cross-validation approach.
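The first mode above (learn on one series, apply to another) can be sketched with scikit-learn's logistic regression. This is a generic linear calibration sketch, not Bio-Metrics' implementation, and it omits any prior-odds adjustment a forensic workflow might add:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
# Toy training series: raw scores in an arbitrary numerical range.
train_scores = np.concatenate([rng.normal(20.0, 5.0, 500),   # same-source
                               rng.normal(5.0, 5.0, 500)])   # different-source
train_labels = np.concatenate([np.ones(500), np.zeros(500)])

# Logistic regression learns a shift and scale: a linear calibration map.
cal = LogisticRegression().fit(train_scores.reshape(-1, 1), train_labels)

def calibrate(raw_scores):
    # Calibrated score = log-odds of the same-source class:
    # positive supports a match, negative supports a non-match.
    raw = np.asarray(raw_scores, dtype=float).reshape(-1, 1)
    return cal.decision_function(raw)

calibrated = calibrate([25.0, 0.0])  # apply to a second (toy) data series
```

After this transform, scores from different systems or conditions live on a common log-odds scale and can be compared directly.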
Fusion is the process of combining biometric recognition scores from multiple systems or algorithms. The aim of fusion is to generate a new set of calibrated scores that improve upon the discrimination performance (e.g. the EER) of any of the individual systems or algorithms.
Bio-Metrics provides a powerful fusion capability based on logistic regression, which can be applied in two ways:
- A fusion function is learned from one set of data series and applied to another set of data series.
- Fusion is learned from and applied to the same set of data series using a cross-validation approach.
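Logistic-regression fusion extends the calibration idea to several systems at once: one learned weight per system plus an offset yields a single fused, calibrated score per comparison. A sketch on synthetic two-system scores (again using scikit-learn as a stand-in for Bio-Metrics' own implementation):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 500
# Toy scores from two systems for the same comparisons (columns = systems).
same = np.column_stack([rng.normal(2.0, 1.0, n), rng.normal(1.5, 1.0, n)])
diff = np.column_stack([rng.normal(-2.0, 1.0, n), rng.normal(-1.5, 1.0, n)])

X = np.vstack([same, diff])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Fit one weight per system plus an intercept; the decision function is
# the fused score on a calibrated log-odds scale.
fuser = LogisticRegression().fit(X, y)
fused = fuser.decision_function(X)

fused_same, fused_diff = fused[:n], fused[n:]
```

Because the weights are trained on labelled match/non-match data, the fused scores typically separate the two classes at least as well as the best single system.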
Interactive graphing features
Zoom: Get better resolution of any area of the graph of interest by highlighting a section of the chart. Bio-Metrics will then zoom and display that section.
Data cursor: Display horizontal and vertical axis values of data at a certain point by hovering the mouse over the curves in the graph.
Logarithmic scale: Get better resolution at lower values by shifting between a linear and a logarithmic (base 10) display of the values.
Annotations: Toggle annotations, such as likelihood ratio values and equal error rates.
3D: Display graphs in 3D for visualisation and use in presentations and reports.
Modifications: Insert titles, shapes, and text to illustrate your results.
Supported output formats
Bio-Metrics allows the user to export the resulting graphs in either vector or raster image formats. The following output formats are supported and permit easy export into MS Word documents and PowerPoint presentations, as well as other software:
Selected publications using Bio-Metrics
Publications by OWR and some of our users
- David van der Vloed, Finnian Kelly, and Anil Alexander. Exploring the effects of device variability on forensic speaker comparison using VOCALISE and NFI-FRIDA, a forensically realistic database, Odyssey 2020: The Speaker and Language Recognition Workshop, Tokyo, Japan [to appear]. [Download Article]
- Radek Skarnitzl, Maral Asiaee, and Mandana Nourbakhsh. Tuning the performance of automatic speaker recognition in different conditions, International Journal of Speech, Language and the Law, 26(2) 2019, pp. 209-229. https://doi.org/10.1558/ijsll.39778
- Finnian Kelly, Andrea Fröhlich, Volker Dellwo, Oscar Forth, Samuel Kent, and Anil Alexander. Evaluation of VOCALISE under conditions reflecting those of a real forensic voice comparison case (forensic_eval01), Speech Communication, vol. 112, pp. 30-36, September 2019. [Download Article]
- Sula Ross, Katherine Earnshaw, and Erica Gold. A Cautionary Tale For Phonetic Analysis: The Variability of Speech Between and Within Recording Sessions, 19th International Congress of the Phonetic Sciences, pp. 3090-3094, Australasian Speech Science and Technology Association Inc., August 2019.
- Finnian Kelly, Oscar Forth, Samuel Kent, Linda Gerlach, and Anil Alexander. Deep neural network based forensic automatic speaker recognition in VOCALISE using x-vectors, Audio Engineering Society (AES) Forensics Conference 2019, Porto, Portugal. [Download Abstract] [Download Presentation]
- Kevin Chan, Andrew Radcliff, Jeffrey Chudik, Katrina Molina, Alex Hirsch, Brennon Morning, Evan Pulliam, and Stephen Elliot. Subject Movement at Different Force Levels in a Fingerprint Recognition System, International Conference on Security and Management (SAM), pp. 223-229, The Steering Committee of The World Congress in Computer Science, Computer Engineering and Applied Computing (WorldComp) 2016, Athens, Greece.
- Eugenia San Segundo and Hermann J. Künzel. Automatic speaker recognition of Spanish siblings: (monozygotic and dizygotic) twins and non-twin brothers, Loquens, 2 (2) 2015, p. e021. doi: http://dx.doi.org/10.3989/loquens.2015.021
- Anil Alexander, Oscar Forth, John Nash, and Neil Yager. Zooplots for Speaker Recognition with Tall and Fat Animals, International Association for Forensic Phonetics and Acoustics (IAFPA) conference 2014, Zürich, Switzerland. [Download Abstract] [Download Presentation]
- Hermann J. Künzel. Automatic speaker recognition with crosslanguage speech material, International Journal of Speech, Language & the Law, 20(1) 2013.