Finding the WHO, WHAT & WHEN in audio
Oxford Wave Research (OWR) is a leading R&D company based in Oxford, UK, specialising in audio and speech processing, voice biometrics and deep learning-related product development. Our team has many years of experience developing solutions for law enforcement, military and other agencies, both in the UK and around the world.
Dr Amelia Gully joins the Oxford Wave Research team!
Oxford Wave Research are pleased to announce the appointment of Dr Amelia Gully as a Senior Research Scientist. Amelia joins us from the University of York forensic speech science group, where she remains a research associate.
"I am delighted to be joining the team at Oxford Wave Research, where I can put my acoustics and signal processing experience to work addressing real problems for customers, and contribute to exciting technological developments in the field of audio forensics." Dr Amelia Gully
Amelia's research to date has focused on the anatomical bases of speaker identity, and particularly how individual differences in vocal tract shape affect the speech signal. For this work she was awarded a British Academy Postdoctoral Fellowship. She holds a PhD in Electronic Engineering and an MSc in Digital Signal Processing, both from the University of York, as well as a BSc (Hons) in Audio Technology from the University of Salford.
"I am excited to welcome Amelia to OWR - with her expertise in acoustics and signal processing, and enthusiasm for all things audio, she will be a valuable addition to the research team!" Dr Finnian Kelly, Principal Research Scientist
Amelia joins us remotely from York, where she lives with her partner and two rescue dogs. When not engaged in audio and speech research, she can be found playing video games or pottering around on her allotment.
IAFPA Research Grants 🧐
🚨 Deadline extension! 🚨
We invite IAFPA members to apply for small research grants (max. 1,500 GBP) for studies in all areas of forensic phonetics and acoustics. The application deadline is now Wed 30th Nov!
ℹ️ More info: https://t.co/3XXEEnu8gZ
— IAFPA (@IAFPAtweets) November 16, 2022
Oxford Wave Research collaboration with the University of York and the Netherlands Forensic Institute in £1m ESRC Project
Oxford Wave Research are delighted to be collaborators with the University of York and the Netherlands Forensic Institute in a recently awarded ESRC-funded project (£1,012,570), 'Person-specific automatic speaker recognition: understanding the behaviour of individuals for applications of ASR' (ES/W001241/1). The project, led by Dr Vincent Hughes (PI), Professor Paul Foulkes (CI) and Dr Philip Harrison in the Department of Language and Linguistic Science at the University of York, starts in summer 2022 and will run for three years, until 2025. OWR will be providing our expertise and consultancy in automatic speaker recognition and our flagship VOCALISE forensic speaker recognition system.
Automatic speaker recognition (ASR) software processes and analyses speech to inform decisions about whether two voices belong to the same or different individuals. Such technology is becoming an increasingly important part of our lives; used as a security measure when gaining access to personal accounts (e.g. banks), or as a means of tailoring content to a specific person on smart devices. Around the world, such systems are commonly used for investigative and forensic purposes, to analyse recordings of criminal voices where identity is unknown.
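At its core, the comparison step in such systems reduces to measuring the similarity of two fixed-length "voiceprint" vectors (speaker embeddings) extracted from recordings. The sketch below illustrates only that final scoring idea, using toy four-dimensional vectors and plain cosine similarity; it is not OWR's VOCALISE pipeline, and real systems use embeddings with hundreds of dimensions plus calibrated likelihood ratios.

```python
import math

def cosine_score(a, b):
    """Cosine similarity between two speaker embeddings.
    Scores near 1.0 suggest the same speaker; lower scores suggest different speakers."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "voiceprints" for illustration only
enrolled = [0.9, 0.1, 0.3, 0.7]      # known speaker
probe_same = [0.85, 0.15, 0.25, 0.72]  # same speaker, new recording
probe_diff = [0.1, 0.9, 0.8, 0.05]     # different speaker

print(cosine_score(enrolled, probe_same) > cosine_score(enrolled, probe_diff))  # True
```

In practice the raw similarity score is then converted into a calibrated same-speaker/different-speaker decision or likelihood ratio.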
The aim of this project is to systematically analyse the factors that make individuals easy or difficult to recognise within automatic speaker recognition systems. By understanding these factors, we can better predict which speakers are likely to be problematic, tailor systems to those individuals, and ultimately improve overall accuracy and performance. The project will use innovative methods and large-scale data, uniting expertise from linguistics, speech technology, and forensic speech analysis, from the academic, professional, and commercial sectors. This has been made possible via the University of York’s strong collaboration with Oxford Wave Research and two project partners including the Netherlands Forensic Institute (NFI).
The University of York and OWR teams are looking forward to a very fruitful collaboration that will undoubtedly further the state of the art in forensic speaker recognition.
Dr Vincent Hughes, Principal Investigator, University of York says “We are delighted to be working so closely on this project with Oxford Wave Research, who are world leaders in the field of automatic speaker recognition and speech technology. We hope that our research will deliver major benefits to the fields of speaker recognition and forensic speech science”.
Dr Anil Alexander, CEO, Oxford Wave Research says "Our team, led by Dr Finnian Kelly, is thrilled to contribute to this in-depth study of the individual-specific factors affecting speaker recognition, alongside the accomplished research team led by Dr Hughes at the University of York, who are at the forefront of this field, and real-world end-users like the Netherlands Forensic Institute, who have been driving research and innovation in this space for many years".
The OWR team had a wonderful time at the TAPAS Final Workshop in Antwerp. It was great catching up with @TherapyBox and @PhilipsResearch. A special thanks to Mathew from @Idiap_ch for organising what has been such a rewarding project for us.@BenceHalpern @TAPAS_H2020_ITN pic.twitter.com/SFblboAE9a— Oxford Wave Research (@OxfordWave) September 15, 2022
Gamers use SpectrumView to uncover Fortnite and Minecraft's secrets
Content creators of all kinds, such as the musician Aphex Twin, have long hidden patterns and text in their audio that can be observed in its spectrogram. More recently, video game developers have hidden Easter eggs in the spectrograms of their game soundtracks for their more inquisitive players to find. For example, among Minecraft's sound effects, the face of a Creeper, one of the game's enemies, can be seen in the spectrogram of the audio heard in a cave, as SpectrumView user “Musix200” discovered. See if you can spot it too!
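Why do these images show up at all? A spectrogram slices the audio into short frames and computes the frequency content of each one, so anything "drawn" in frequencies over time becomes visible as a picture. The naive sketch below illustrates that idea with a plain short-time DFT; it is not how SpectrumView itself works (real analysers use windowing and the FFT), just the underlying principle.

```python
import cmath
import math

def spectrogram(samples, frame_size=256, hop=128):
    """Naive short-time DFT: one magnitude spectrum per frame of audio.
    Bin k corresponds to frequency k * sample_rate / frame_size."""
    frames = []
    for start in range(0, len(samples) - frame_size + 1, hop):
        frame = samples[start:start + frame_size]
        mags = []
        for k in range(frame_size // 2):  # keep positive frequencies only
            s = sum(frame[n] * cmath.exp(-2j * math.pi * k * n / frame_size)
                    for n in range(frame_size))
            mags.append(abs(s))
        frames.append(mags)
    return frames

# A pure 1 kHz tone sampled at 8 kHz should light up a single frequency bin
sr = 8000
tone = [math.sin(2 * math.pi * 1000 * n / sr) for n in range(1024)]
spec = spectrogram(tone)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(peak_bin * sr / 256)  # 1000.0 Hz
```

Stacking those per-frame spectra side by side, with magnitude rendered as brightness, gives the familiar spectrogram image in which hidden shapes and text appear.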
Taking this idea a step further, alternate reality games (ARGs) are a modern spin on the traditional scavenger hunt in which participants scour websites, social media, and videos looking for clues. These games have taken social media sites like YouTube and Reddit by storm over the last few years. Organisers, often video game developers, will bury information in all sorts of places, like images, website code, and audio. Whole communities have been formed in order to find out secret stories and previews for their favourite video game, or just to have some cooperative fun while solving a digital mystery.
Epic Games created ARG content in the run-up to the Season 5 release of their famous multiplayer online game, Fortnite Battle Royale. They staged a rocket launch within the video game itself, during which some of the audio played was slightly odd. Gamers quickly realised that there was probably more to the audio clip than could simply be heard. Searching for patterns led them to visualise the frequencies in the audio as a spectrogram. One such example, in which player “Rockin Thomas86” uses SpectrumView to analyse the audio clip, is shown in the video below.
https://www.youtube.com/watch?v=XggS15GxHr4
On the spectrogram, you can see pixelated skulls at the start and end of the audio, and, in the middle, a list of letters and numbers. According to the Game Detectives Wiki, the skull shapes were shown on television screens within the game before the rocket launch, while the letters and numbers could be decoded as ASCII values to produce in-game coordinates. Some time after the rocket launch, dimensional rifts opened up at these coordinates, causing locations to appear and disappear on the game map. Players were primed to check the locations, having teased out the message hidden in the rocket launch audio.
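The decoding step itself is simple: each number read off the spectrogram is interpreted as an ASCII character code. Here is a minimal sketch using made-up values (the actual codes from the Fortnite ARG are not reproduced here):

```python
# Hypothetical ASCII values as they might be read off a spectrogram;
# these are illustrative, not the real codes from the rocket launch audio.
codes = [52, 53, 44, 32, 54, 55]

# Interpret each number as an ASCII character code
decoded = "".join(chr(c) for c in codes)
print(decoded)  # "45, 67" - a pair of numbers that could serve as map coordinates
```

In the actual ARG, the decoded values pointed to in-game coordinates on the Fortnite map.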
Spectrum analysers like our iOS app SpectrumView can open up a whole new dimension of information in audio, and we are excited to see what more our users can find hidden away in the audio of all sorts of ARG content.
SpectrumView 2.4.1 Update
SpectrumView 2.4.1 has arrived!
SpectrumView and SpectrumView Plus 2.4.1 have been released today, providing a range of bug fixes and ensuring full compatibility with new devices and iOS 15! This free update can be downloaded from the App Store at the links below, or will already have been installed if you have automatic updates turned on in Settings.
SpectrumView Plus: https://apps.apple.com/gb/app/spectrumview-plus/id571455198
Getting ready to board the trains for the final workshop of the TAPAS #Horizon2020 project marking the end of long fantastic journey from 2017. It's been a very mutually fulfilling collaboration between industry and academia! Thanks to @MSCActions and @EU_H2020. https://t.co/gsR8OG1sZK— Oxford Wave Research (@OxfordWave) September 7, 2022
OWR at IAFPA 2021
We're attending IAFPA 2021!
Teamwork makes the dream work!
Oxford Wave Research staff are very excited to be attending the upcoming virtual IAFPA Conference, organised this year by Philipps-Universität Marburg. We are delighted to have had a number of papers, representing the results of our latest research in voice biometrics and audio processing, accepted for presentation at the conference. To give a sneak peek of what you will be seeing, here is a list of the presentations co-authored by OWR researchers in collaboration with distinguished academics and forensic scientists:
- “A WYRED connection: x-vectors and forensic speech data” by Anil Alexander, Finnian Kelly and Erica Gold
- “How does the perceptual similarity of the relevant population to a questioned speaker affect the likelihood ratio?” by Linda Gerlach, Tom Coy, Finnian Kelly, Kirsty McDougall and Anil Alexander
- “How do Automatic Speaker Recognition systems 'perceive' voice similarity? Further exploration of the relationship between human and machine voice similarity ratings.” by Linda Gerlach, Kirsty McDougall, Finnian Kelly and Anil Alexander
- “Speaker-informed speech enhancement and separation” by Bence Mark Halpern, Finnian Kelly, and Anil Alexander
- “Exploring the impact of face coverings on x-vector speaker recognition using VOCALISE” by Tom Iszatt, Ekrem Malkoc, Finnian Kelly, and Anil Alexander