Research update: developing new bat monitoring tools with Bat Detective data
Hello from the Bat Detective team! It’s been a busy summer at Bat Detective HQ after an amazing spring with British Science Week and the World Tour. So we’ve been a bit quiet on the blog in the last couple of months while we’ve been working on updating our automated tools and road-testing them on new data. We’re now coming close to having our results ready for scientific publication, and we’ve also had the chance to put our new software tools into practice, analysing some brand-new bat survey data. So over the next two blog posts, we’ll be updating you on our progress, explaining where the Bat Detective project is at right now, and showing how we’ve been using all the data you’ve helped us to label.
In this first post we’ll discuss how we’ve used Bat Detective data to improve our automated bat call detection tools, and highlight some of the challenges we’ve encountered along the way. In the next post, we’ll show some examples of where we’ve been testing out our software tools on bat survey data from the UK and Madeira – keep an eye on the blog for that one very soon.
The team have also been up to a few other things in the last few months. Bat Detective’s Rory Gibb (me) gave a project update talk at the Zooniverse’s first ever ecology workshop, where there was some fascinating discussion about how citizen scientists can become increasingly involved in some of the major challenges facing ecology and conservation in future. We’ve also just had a big article on Bat Detective and its sister project iBats published in the latest citizen science-themed issue of Environmental SCIENTIST – the article will be available to read online in the near future, so we’ll share it here when that happens.
Training machines to recognise bat calls: why and how?
In our last research update post a year ago, we explained how advances in machine learning technology have enabled us to train algorithms to automatically recognise bat calls in ultrasonic survey recordings. This is important because newer bat detectors can be deployed in the field for weeks or months, collecting so much audio data that it’s almost impossible to analyse them manually. By making it possible for bat researchers to quickly and reliably find where bat echolocation calls are in these recordings, automated tools are creating exciting new opportunities to study bat ecology, behaviour and conservation at much larger scales than ever before.
Machine learning involves training computer algorithms to automatically recognise bat echolocation calls in recordings, by showing the computer thousands of examples of what they look and sound like. In our last research update we showed how training the algorithms on increasingly large amounts of data from Bat Detective improves their performance. For that reason, and also to include a greater diversity of bat sounds from around the globe, we’ve asked for your help in labelling our World Tour data over the last year. And thanks also to the efforts put in by volunteers during British Science Week, we’ve now got thousands of new bat call annotations to incorporate into our detector tools – so one of our current challenges is exploring the best ways to use all of these new data.
We’ve now got the detector algorithms up and running, and we’re currently testing them to assess how well they perform. The figure below shows an example of the detector in action on a snippet of audio data from the iBats global bat monitoring programme. The recording is displayed as a spectrogram underneath, with sounds showing up as bright markings. The graph above shows where the computer predicts the bat calls are – each vertical red line marks a predicted call, and its height tells us how certain the computer is about that prediction (higher means more confident). The green bars show where a human expert has confirmed that bat calls are present – so in this example, the computer has successfully recognised all the bat calls.
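To give a feel for this detect-and-score idea, here’s a toy Python sketch. It is emphatically not the project’s actual machine-learning detector (which is trained on thousands of volunteer-labelled examples) – instead it builds a spectrogram and scores each time frame by its energy in an assumed ultrasonic band, flagging frames whose normalised confidence passes a threshold. The sample rate, band indices and the synthetic “call” are all made up for illustration.

```python
import numpy as np

def spectrogram(audio, n_fft=256, hop=128):
    """Magnitude spectrogram: one row per time frame, one column per frequency bin."""
    window = np.hanning(n_fft)
    frames = [np.abs(np.fft.rfft(audio[s:s + n_fft] * window))
              for s in range(0, len(audio) - n_fft, hop)]
    return np.array(frames)

def detect_calls(spec, band=(40, 100), threshold=0.5):
    """Confidence per frame: here simply the normalised energy in an
    (assumed) ultrasonic frequency band, in place of a trained model."""
    energy = spec[:, band[0]:band[1]].sum(axis=1)
    scores = energy / (energy.max() + 1e-9)   # confidence in [0, 1]
    return scores, np.where(scores > threshold)[0]

# Synthetic 0.1 s clip at a hypothetical 250 kHz sample rate,
# with one ~59 kHz tone burst standing in for a bat call.
sr = 250_000
t = np.arange(int(0.1 * sr)) / sr
audio = 0.01 * np.random.default_rng(1).standard_normal(t.size)
call = (t > 0.04) & (t < 0.05)
audio[call] += np.sin(2 * np.pi * 59_000 * t[call])

scores, detections = detect_calls(spectrogram(audio))
```

In the real detector the per-frame confidence comes from a trained model rather than raw band energy, but the output has the same shape: a score over time (the red lines in the figure), thresholded into detections.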
Are you sure that’s a bat? The problem of false positives
However, there are still some errors where the computer thinks there is a bat call when there actually isn’t one (a ‘false positive’). This is a problem for monitoring bat populations, because too many false positives could lead researchers to overestimate the true number of bats in an area, which could in turn skew conservation decisions. You can see a clear example of these errors in the next figure below, where the computer falsely predicts that the mechanical noises at the bottom of the spectrogram are lots of bat calls.
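To see why false positives matter for counting, here’s a small illustrative calculation (the call positions and counts are entirely made up). Comparing the detector’s output against expert labels gives two standard measures: recall (what fraction of real calls were found) and precision (what fraction of detections were real).

```python
def detection_stats(predicted, actual):
    """predicted, actual: sets of call positions (e.g. frame indices)."""
    tp = len(predicted & actual)          # true positives: correct detections
    fp = len(predicted - actual)          # false positives: spurious detections
    fn = len(actual - predicted)          # false negatives: missed calls
    precision = tp / (tp + fp) if predicted else 0.0
    recall = tp / (tp + fn) if actual else 0.0
    return precision, recall

actual = {10, 42, 77, 103}                # expert-confirmed calls (hypothetical)
predicted = {10, 42, 55, 77, 90, 103}     # detector output, with two spurious hits
p, r = detection_stats(predicted, actual)
```

Here recall is perfect (every real call is found), but precision is only 4/6: a naive count of six “calls” overestimates the true four by 50% – exactly the kind of bias that matters when detections feed into population monitoring.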
So to improve this, we’ve been including non-bat sounds from Bat Detective – those insect calls and mechanical noises you’ve also helped us to find. By training the algorithms to recognise what bat calls don’t look like, we can significantly improve their accuracy. The image below shows the difference: it’s the same audio clip, but there are now far fewer false positives (red lines).
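Here’s a toy sketch of why adding those labelled non-bat sounds helps. It uses a simple logistic-regression classifier on made-up two-dimensional features (imagine something like peak frequency and bandwidth), not the project’s real algorithms or data: a classifier trained only on bat calls versus quiet background mislabels confusable mechanical noise, while one that has also seen the noise as labelled negatives learns to reject it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D features for three kinds of sound clip.
bat_calls  = rng.normal([2.0, 2.0], 0.5, size=(50, 2))    # positives
background = rng.normal([-2.0, -2.0], 0.5, size=(50, 2))  # easy negatives: quiet audio
mech_noise = rng.normal([2.0, -1.0], 0.5, size=(50, 2))   # hard negatives: mechanical noise

def train_logreg(X, y, lr=0.1, steps=500):
    """Plain gradient descent on logistic loss; returns weights + bias."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1 / (1 + np.exp(-Xb @ w)) > 0.5

# Trained without the mechanical noise, the boundary only separates
# bat calls from quiet background – the noise falls on the "bat" side.
X1 = np.vstack([bat_calls, background])
w1 = train_logreg(X1, np.r_[np.ones(50), np.zeros(50)])

# Trained with the noise as labelled negatives, the classifier also
# learns what bat calls *don't* look like.
X2 = np.vstack([bat_calls, background, mech_noise])
w2 = train_logreg(X2, np.r_[np.ones(50), np.zeros(100)])

fp_before = predict(w1, mech_noise).mean()  # fraction of noise flagged as calls
fp_after = predict(w2, mech_noise).mean()
```

The same principle carries over to the real detector: the insect calls and mechanical noises you labelled act as those hard negative examples.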
This is a great example of the importance of testing out these tools on new data from a variety of times, places and detector types. This helps us get a better idea of where they’re under-performing, and how they can be improved before we release them as open-source tools for other researchers to use. So with that in mind, keep an eye on the Bat Detective blog next week for our next research update: we’ll be showing some examples of where we’ve road-tested them on new bat survey data recorded during this summer. We’ll also be uploading a new set of data from Russia – one of our last few World Tour stops – so stay tuned for that.
And a huge thanks again for all your efforts with labelling the data on Bat Detective, both during this year and throughout the whole project – we wouldn’t have been able to get to this stage without your input, and it’s really exciting to see the work of our community of volunteers starting to produce results.