Centre for Forensic Phonetics and Acoustics (CFPA) at the University of Zurich partners with Oxford Wave Research

Oxford Wave Research are delighted to be named partners of the recently opened Centre for Forensic Phonetics and Acoustics (CFPA) at the University of Zurich. Opened on 6th March 2019, the CFPA brings together research from a range of fields to address all areas of voice recognition in relation to forensic investigation.

Led by Prof. Volker Dellwo, the centre will combine world-class research into forensic speaker recognition, voice disguise and voice line-ups with forensic services such as speaker profiling, speaker comparison, transcription, audio authentication and audio enhancement for both prosecution and defence.

As all great collaborations should, this one started over a few nice glasses in Zurich in 2017.

Link: //www.cl.uzh.ch/de/phonetics/CFPA.html

OWR development team attending TAPAS TE2 Workshop on Speech Processing and Machine Learning

(Image: João Carvalho)

Oscar Forth and Sam Kent, representing Oxford Wave Research’s R&D and software development team, are pleased to be attending the TAPAS TE2 Workshop on Speech Processing and Machine Learning hosted by INESC in Lisbon, Portugal.

They will be providing a short presentation and further developing our engagement with the TAPAS project.

TAPAS (Training Network on Automatic Processing of PAthological Speech) is an EU-funded project looking at, and finding solutions to, the challenges facing people living with debilitating speech pathologies in an era when speech-based technologies play an ever-increasing role in our day-to-day lives.

//www.tapas-etn-eu.org/

How fast can a fidget spinner video about frequency analysis go?

Last Friday, just as we were winding up for the end of the week, we started getting a large number of messages on our website chat app, along with a huge spike in the number of hits on our website (1283.05%), in particular on our audio frequency spectrum analyser, SpectrumView.

Had we been hacked? Had some rivetingly interesting pictures of the intimate details of audio analysis been unwittingly released on our webpage? Thankfully not. Our app had been used by the extremely talented stand-up comedian and maths communicator Matt Parker (@standupmaths) to measure how fast he could get a fidget spinner to go. The video has been getting hundreds of thousands of views each day, and stood at just under 300,000 at the time of writing.

This brilliant video shows how to use SpectrumView to calculate the frequency, and thereby the speed, of the tips of the fidget spinner. We are delighted to see such a weird and wonderful use for our little app.
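For anyone curious about the arithmetic behind the trick, converting a spectral peak into a tip speed is a one-liner. The sketch below is our own illustration rather than anything taken from the video: the lobe count, tip radius and measured peak frequency are all assumed example values.

```python
import math

# Assumed example values (not from the video): a 3-lobed spinner produces
# three sound events per rotation, and the lobe tips sit 4 cm from the centre.
LOBES = 3
TIP_RADIUS_M = 0.04

def tip_speed(peak_freq_hz, lobes=LOBES, radius_m=TIP_RADIUS_M):
    """Convert a peak frequency from a spectrum analyser into RPM and tip speed."""
    rotations_per_sec = peak_freq_hz / lobes      # one lobe pass per sound event
    rpm = rotations_per_sec * 60
    speed_m_s = 2 * math.pi * radius_m * rotations_per_sec  # circumference x rev/s
    return rpm, speed_m_s

# A hypothetical 150 Hz peak: 150 / 3 lobes = 50 rev/s = 3000 RPM,
# with a tip speed of about 12.6 m/s.
rpm, speed = tip_speed(150.0)
```

The only subtlety is the lobe count: the microphone hears one event per lobe pass, so the audible frequency is the rotation rate multiplied by the number of lobes.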

 

Before you ask, we don’t have an Android version. There are no plans to have one just yet, but we may be persuaded. If you ask nicely.

The Linguistic Data Consortium and Oxford Wave Research announce a new collaboration

The Linguistic Data Consortium (LDC), USA and Oxford Wave Research (UK) are proud to announce a new collaboration. Oxford Wave Research (OWR) is an audio and speech R&D company based in Oxford, UK that works on audio processing and speaker diarization and recognition. This collaboration encompasses the use of LDC’s speech corpora and OWR’s audio fingerprinting, speaker diarization and recognition software.

In particular, LDC will use OWR’s audio fingerprinting technology as part of the MADCAT software (Multimedia Audio Duplication & Content Analysis Tool) to find repeated content in broadcast data including audio signatures that mark program boundaries.
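To give a feel for how repeated-content detection can work, here is a toy sketch in pure Python. It is emphatically not OWR's actual fingerprinting algorithm: it reduces each audio frame to one coarse quantised-energy symbol and scans a long recording's fingerprint for recurrences of a known signature's fingerprint. Real systems use overlapping spectral frames and hash-indexed lookup to tolerate misalignment and noise.

```python
# Toy illustration only (not OWR's algorithm): fingerprint a signal as a
# sequence of quantised frame energies, then find where a known clip recurs.

def fingerprint(samples, frame=8, levels=4):
    """Reduce a signal to one coarse symbol (0..levels-1) per frame."""
    prints = []
    for i in range(0, len(samples) - frame + 1, frame):
        energy = sum(s * s for s in samples[i:i + frame]) / frame
        prints.append(min(levels - 1, int(energy * levels)))
    return prints

def find_repeat(haystack, needle):
    """Return frame offsets where the needle's fingerprint recurs in haystack.

    This naive version assumes frame-aligned repeats; a production system
    would hash overlapping spectral frames to handle arbitrary offsets.
    """
    h, n = fingerprint(haystack), fingerprint(needle)
    return [i for i in range(len(h) - len(n) + 1) if h[i:i + len(n)] == n]

# Hypothetical "programme boundary jingle" appearing twice in a broadcast:
jingle = [0.9] * 8 + [0.1] * 8
broadcast = [0.0] * 8 + jingle + [0.0] * 16 + jingle
offsets = find_repeat(broadcast, jingle)   # frame offsets of both occurrences
```

The appeal of this shape of algorithm for the MADCAT use case is that the fingerprint is tiny compared with the audio, so scanning hours of broadcast for repeated signatures stays cheap.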

OWR Research Director Dr Anil Alexander says:

“The OWR team is really looking forward to working with LDC on this exciting real-world application of audio fingerprinting of short utterances.”

 

LDC Executive Director Chris Cieri says:

“The Consortium continually looks for new ways to integrate speech technology into data collection and annotation processes to improve speed, scale and quality while avoiding bias. We are excited by the increased capability that OWR tools offer.”

 

WatchMeRecord: Simultaneous or separate audio recording for your iPhone and Apple Watch

 

Quick simultaneous or separate audio recordings on your iPhone and Apple Watch

 

WatchMeRecord on the Apple Watch

WatchMeRecord allows you to quickly make audio recordings, using either your watch or your phone to start the recording. Recordings can be triggered independently or simultaneously on both devices, allowing you to record two events at the same time.

You can discreetly use an Apple Watch as a remote control to start a recording on your iPhone, or equally use your iPhone to start your Apple Watch recording. You can even start them both recording at the same time for two recordings from two separate perspectives.

For instance, your phone could be recording the vocals of your session while your watch records the instrument(s).


 

Features

– Recording and playback from both Apple Watch and iPhone
– Record from both your iPhone and Apple Watch at the same time with a single button press
– Play back recordings stored only on your iPhone through the watch
– Powerful, high-quality transcription of recordings (in English)
– Easily share recordings via messages, email and more
– Share transcriptions via messages, email and more (requires in-app purchase)
– Play back, download and manage your recordings from a web interface on your computer or another mobile device (requires in-app purchase)
– Start recordings using your iPhone microphone from your Apple Watch, even when the app is closed and your iPhone is locked
– Rename recordings from both your iPhone and Apple Watch (the Watch uses scribble or dictation)
– Full user tutorial on app start-up


Walkthrough