Group

Acoustics / Feed

Acoustic monitoring is one of our biggest and most active groups, with members collecting, analysing, and interpreting acoustic data across species, ecosystems, and applications, from animal vocalizations to sounds from our natural and built environment.

discussion

Advice needed for accessible acoustic monitoring 

I am working on updating the Forest Integrity Assessment (FIA) tool https://www.hcvnetwork.org/library/hcv-screening-summary that helps managers (and other non-experts) monitor...

5 0

Thanks for your responses Christos and Carly

The FIA is used by forestry managers, NGOs, communities, indigenous groups and other non-experts to measure a proxy for biodiversity in the forests that they manage and to provide information for management. Forest composition and structure, as well as microhabitats in the forest, are most correlated with overall biodiversity (fungi, plants, animals). Adding an acoustic measure has served as an educational component thus far, but improving on this would potentially increase users' awareness of key vocal species. Anything that HCVN recommends for development must be accessible, simple and low-cost. Acoustic sensors and data processing with BirdNET or similar would potentially be inaccessible for some user communities, so I was enquiring within the WILDLABS community about suitable approaches that could provide a proxy measure of biodiversity for better forest management.

Happy to jump on a call to explain further (based in UK)

Cheers 

Hi Mona. Yes, this is certainly what we are considering, but how to measure these indices in the hands of non-experts in the field is the advice that I need from the WL network.  Cheers

See full post
discussion

Need advice for running BirdNET on big data

I have some 25,000 hours of acoustic recordings to process via BirdNET analyzer, most of it in 15 second chunks. I ran an initial ~4,000 hours, which took a few weeks running in...

2 0

I haven't tried BirdNET Analyzer, but with regard to running any big data/ML processing, my advice would be to look at something like Google Colab instead of your own laptop. 
 

Hope this helps.

Would that be able to process locally stored acoustic data? 

 

One of the great things about BirdNET Analyzer is that it runs locally - it doesn't require uploading terabytes of data to the cloud, which would be expensive, take forever, and likely introduce transfer errors in areas with poor internet connection (like the tropics where I do my research). 
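
For sharding a very large local archive like that, one practical pattern is to launch one analyzer run per folder of recordings so each job stays small and restartable. The sketch below assumes the classic BirdNET-Analyzer command-line interface (a local clone with analyze.py and the --i/--o/--threads/--min_conf flags described in its README); double-check those against the version you actually have installed.

import subprocess
from pathlib import Path

AUDIO_ROOT = Path("recordings")       # hypothetical folder of per-site sub-directories
OUTPUT_ROOT = Path("birdnet_output")
THREADS_PER_RUN = 8

# One analyzer run per site folder: if a run dies, only that folder needs re-running.
for site_dir in sorted(p for p in AUDIO_ROOT.iterdir() if p.is_dir()):
    out_dir = OUTPUT_ROOT / site_dir.name
    out_dir.mkdir(parents=True, exist_ok=True)
    cmd = [
        "python", "BirdNET-Analyzer/analyze.py",   # path to a local clone; flags assumed from the README
        "--i", str(site_dir),
        "--o", str(out_dir),
        "--threads", str(THREADS_PER_RUN),
        "--min_conf", "0.25",
    ]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

Runs like these are independent of each other, so they can also be spread across several machines (or several sessions with the data attached locally) without any coordination beyond splitting the folders.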

See full post
discussion

Prospective NSF INTERN 

Hello all, My name is Frank Short and I am a PhD Candidate at Boston University in Biological Anthropology. I am currently doing fieldwork in Indonesia using machine-learning...

1 2

My name is Frank Short and I am a PhD Candidate at Boston University in Biological Anthropology. I am currently doing fieldwork in Indonesia using machine-learning powered passive acoustic monitoring focusing on wild Bornean orangutans (and other primates). I am reaching out because as a student with a National Science Foundation Graduate Research Fellowship, I am eligible to take advantage of the NSF INTERN program which supports students to engage in non-academic internships through covering a stipend and other expenses, with the only caveat being that the internship must be in-person and not remote. I was wondering if any organizations in conservation technology would be interested in a full-time intern that would be coming in with their own funding? 

In addition to experience with machine learning and acoustics through training a convolutional neural network for my research, I also have worked with GIS, remote sensing, and animal movement data through other projects. Further, I have experience in community outreach both in and outside of academic settings, as I previously worked for the Essex County Department of Parks and Recreation in New Jersey for 3 years where I created interpretive signs, exhibits, newsletters, brochures, and social media posts. Now while doing my fieldwork in Indonesia, I have led hands-on trainings in passive acoustic monitoring placement and analysis as well as given talks and presentations at local high schools and universities. 

I would love to be able to use this opportunity (while the funding still exists, which is uncertain moving forward due to the current political climate in the US) to exercise and develop my skills at a non-academic institution in the conservation technology sphere! If anyone has any suggestions or is part of an organization that would be interested in having me as an intern, please contact me here or via my email: fshort@bu.edu. Thank you!

Hi Frank, your work sounds incredibly valuable and well-aligned with current needs in conservation tech. With your strong background in machine learning, acoustics, GIS, and outreach, you’d be an asset to many organizations. I’d recommend looking into groups like Rainforest Connection, Wildlife Acoustics, or the Conservation Tech Directory (by WILDLABS)—they often work on acoustic monitoring and might be open to in-person internships, especially with funding already in place. Best of luck finding the right match—your initiative is impressive!

See full post
event

Submit Living Data Abstracts By 25 May for Listen to the Future: Mobilizing Bioacoustic Data to Meet Conservation Goals

At Living Data 2025, our session will explore the future of bioacoustics through the lens of a global horizon scan—highlighting key priorities for the next two decades. We're now inviting abstracts that showcase tools,...

1 0
Abstract submission has been extended until May 25, 23:59 UTC-5
See full post
discussion

'Boring Fund' Workshop: AI for Biodiversity Monitoring in the Andes

Thanks to WILDLABS 'Boring Fund' support, we are hosting a workshop on AI for biodiversity monitoring in Medellin, Colombia, April 21st to 24th. This is a followup discussion to...

4 14

Hey @benweinstein , this is really great. I bet there are better ways to find bofedales (puna fens) currently than what existed back in 2010. I'll share this with the Audubon Americas team.  

Hi everyone, following up here with a summary of our workshop!

The AI for Biodiversity Monitoring workshop brought together twenty-five participants to explore uses of machine learning for ecological monitoring. Sponsored by the WILDLABS ‘Boring Fund’, we were able to support travel and lodging for a four-day workshop at the University of Antioquia in Medellín, Colombia. The goal was to bring together ecologists interested in AI tools and data scientists interested in working on AI applications from Colombia and Ecuador. Participants were selected based on potential impact on their community, their readiness to contribute to the topic, and a broad category of representation, which balanced geographic origin, business versus academic experience, and career progression.

Before the workshop began I developed a website on github that laid out the aims of the workshop and provided a public focal point for uploading information. I made a number of technical videos, covering subjects like VS Code + Copilot, both to inform participants and to create an atmosphere of early and easy communication. The WhatsApp group, the YouTube channel (link) of video introductions, and a steady drumbeat of short tutorial videos were key in establishing expectations for the workshop.

The workshop material was structured around data collection methods: Day 1) Introduction and Project Organization, Day 2) Camera Traps, Day 3) Bioacoustics, and Day 4) Airborne Data. Each day I asked participants to install packages using conda, download code from github, and be active in supporting each other in solving small technical problems. The large range of technical experience was key in developing peer support. I toyed with the idea of creating a JupyterHub or joint cloud working space, but I am glad that I resisted; it is important for participants to see how to solve package conflicts and the myriad other installation challenges on 25 different laptops.

We banked some early wins to help ease intimidation and create a good flow to technical training. I started with github and version control because it is broadly applicable, incredibly useful, and satisfying to learn. Using examples from my own work, I focused on github as a way both to contribute to machine learning for biology, as well as receive help. Building from these command line tools, we explored vscode + copilot for automated code completion, and had a lively discussion on how to balance utility of these new features with transparency and comprehension.  

Days two, three and four flew by, with a general theme of existing foundation models, such as BirdNET for bioacoustics, MegaDetector for camera traps, and DeepForest for airborne observation. A short presentation each morning was followed by a worked python example making predictions on new data, annotation using label-studio, and model development with pytorch-lightning. There is a temptation to develop jupyter notebooks that outline perfect code step by step, but I prefer to let participants work through errors and follow a live-coding strategy. All materials are in Spanish and updated on the website. I was proud to see the level of joint support among participants, and tried to highlight these contributions to promote autonomy and peer teaching. 
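
For a flavour of the pytorch-lightning portion, a minimal training skeleton in the spirit of those sessions might look like the following (toy tensors and made-up layer sizes for illustration, not the workshop's actual notebooks):

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class SmallClassifier(pl.LightningModule):
    def __init__(self, n_features=128, n_classes=10, lr=1e-3):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )
        self.loss_fn = nn.CrossEntropyLoss()
        self.lr = lr

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss_fn(self.model(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)

# Toy stand-in data: 500 samples of 128-d features across 10 classes.
x = torch.randn(500, 128)
y = torch.randint(0, 10, (500,))
loader = DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)

trainer = pl.Trainer(max_epochs=5, accelerator="auto")
trainer.fit(SmallClassifier(), loader)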

Sprinkled amongst the technical sessions, I had each participant create a two slide talk, and I would randomly select from the group to break up sessions and help stir conversation. I took it as a good sign that I was often quietly pressured by participants to select their talk in our next random draw. While we had general technical goals and each day had one or two main lectures, I tried to be nimble, allowing space for suggestions. In response to feedback, we rerouted an afternoon to discuss biodiversity monitoring goals and data sources. Ironically, the biologists in the room later suggested that we needed to get back to code, and the data scientists said it was great. Weaving between technical and domain expertise requires an openness to change.

Boiling down my takeaways from this effort, I think there are three broad lessons for future workshops.

  • The group dynamic is everything. Provide multiple avenues for participants to communicate with each other. We benefited from a smaller group of dedicated participants compared to inviting a larger number.
  • Keep the objectives, number of packages, and size of sample datasets to a minimum.
  • Foster peer learning and community development. Give time for everyone to speak. Step in aggressively as the arbiter of the schedule in order to allow all participants a space to contribute.

I am grateful to everyone who contributed to this effort both before and during the event to make it a success. Particular thanks goes to Dr. Juan Parra for hosting us at the University of Antioquia, UF staff for booking travel, Dr. Ethan White for his support and mentorship, and Emily Jack-Scott for her feedback on developing course materials. Credit for the ideas behind this workshop goes to Dr. Boris Tinoco, Dr. Sara Beery for her efforts at CV4Ecology and Dr. Juan Sebastian Ulloa. My co-instructors Dr. Jose Ruiz and Santiago Guzman were fantastic, and I’d like to thank ARM through the WILDLABS Boring fund for its generous support.    

See full post
discussion

Remote Bat detector

Hi all, What devices are there on the market capable of recording bats that can also be remotely connected? I am used to working with AudioMoths or SM4. But I wonder if there are any...

7 0

You will likely need an edge-ML setup where a model runs in real time and only detections are sent. BirdWeather PUC and Haikubox do this, running BirdNET on the edge and sending detections over Bluetooth/WiFi/SMS - but you'd have to have these networks already set up in the places you want to deploy a device, which limits deployments in many areas where there is no connectivity.

So we are building some Bugg devices on licence from Imperial College, and I am making a mod to swap the mic out for a version of our Demeter microphone. This should mean we can use it with something like BattyBirdNET, as it's powered by a CM4. Happy to have a chat if you are interested. Otherwise it will likely need a custom solution, which can be quite expensive!

There are a lot of parameters in principle here: the size of the battery; how much time in the field is acceptable before a visit (once a week? once a month?); how many devices you need; how small the device and battery have to be; etc. I use VPNs over 4G USB sticks.

I ask because in principle I've built devices that can retrieve files remotely and record ultrasonic audio, though the microphones I tested with (Pettersson) are around 300 euros in price. I've been able to record ultrasonic frequencies, and I also believe it will be possible to do TDOA sound localization with the output files if the same sound can be heard on 4x recorders.

But we make commercial devices, and such a thing would be a custom build. It would be nice to know what the demand for such a device is, to know at what point it becomes interesting.

See full post
discussion

Reducing wind noise in AudioMoth recordings

Hi everyone. I'm wondering if anyone has tips for reducing wind noise in AudioMoth recordings. Our study sites are open paddocks and can be subject to high wind. Many...

6 0

Just following up on this: we are suffering from excessive wind noise in our recordings. We have bought some dead cats (furry windjammers), but our AudioMoths are in the latest Dev case (https://www.openacousticdevices.info/audiomoth).

In your collective experience, would anyone recommend trying to position the dead cat over the microphone on the Audiomoth itself, or covering the entry port into the device, from either the inside or the outside?

 

Cheers,

Tom

Hi Tom! I think the furry windjammer must be outside the casing to have the desired effect. It can be a bit tricky having this nice furry material that birds and other critters might be attracted to. It may be possible to make an outer "wire cage" to protect the wind jammer. We once had to do this to protect a DIY AudioMoth case against foxes wanting to bite the case (no wind jammer then). You may however create new wind issues with wind noise created by this cage... No-one said it had to be simple! 

See full post
discussion

Experience with AudioMoth Dev for Acoustic Monitoring & Localization?

Hi everyone, I am Dana, a PhD student. I’m planning to use AudioMoth Dev recorders for a passive acoustic monitoring project that involves localizing sound...

14 0

Hi Walter,

Thanks for your reply! It looks like the experiments found very minor time offsets, which is encouraging. Could you clarify what you mean by a "similar field setup"?

In my project, I plan to monitor free-ranging animals — meaning moving subjects — over an area of several square kilometers, so the conditions won't be exactly similar to the experimental setup described.

Given that, would you recommend using any additional tools or strategies to improve synchronization or localization accuracy?

Hi Ryan,

Thanks for your reply! I'm glad to hear that the AudioMoth Dev units are considered powerful.

Have you ever tried applying multilateration to recordings made with them? I would love to know how well they perform in that context.
 

On a more technical note, do you know if lithium batteries (such as 3.7V LiPo) can provide a reliable power supply for Dev units in high temperature environments (around 30–50°C)?

Thanks, 
Dana

Hi Dana,

"similar field setup" means that the vocalizing animal should be surrounded by the recorders and you should have at least 4 audiomoths recording the same sound, then the localization maths is easy (in the end it is a single line of code). With 3 recorders that receive the sound localization is still possible but a little bit more complicated. With 2 recorders you get only some directions (with lift-right ambiguity).

Given the range of movements, and assuming that you do not have a huge quantity of recorders to 'fence' the animals, I would approach the tracking slightly differently. I would place the AudioMoths in pairs using a single GPS receiver powered by one recorder, but connect the PPS wire also to the other recorder. Both recorders, separated by up to 1 m, face the area of interest. For the analysis, I would then use each pair of recorders to estimate the angle to the animal. If you have the same sound at two locations, you will have 2 directions, which will give you the desired location. The timings at the different GPS locations may result in timing errors, but each direction is based on the same clock, so the GPS timing errors are not relevant anymore. If you add a second microphone to the AudioMoths you can improve the direction further. If you need more specific info or a chat about details (that is not of general interest) you can PM me.
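
As a rough numerical sketch of that pair-based idea (toy numbers only: a known 1 m spacing within each pair and a nominal speed of sound are assumed, and with real recordings the time differences would come from cross-correlating the two channels):

import numpy as np

C = 343.0  # nominal speed of sound in air (m/s); temperature dependent

def bearing_from_tdoa(dt, spacing, broadside_deg):
    # sin(theta) = c * dt / d, with theta measured from the pair's broadside direction.
    # The sign convention depends on mic ordering, and each pair still has the
    # front-back ambiguity mentioned above.
    sin_theta = np.clip(C * dt / spacing, -1.0, 1.0)
    return broadside_deg + np.degrees(np.arcsin(sin_theta))

def intersect_bearings(p1, b1, p2, b2):
    # Intersect two bearing lines (bearings in degrees, x = east, y = north).
    d1 = np.array([np.sin(np.radians(b1)), np.cos(np.radians(b1))])
    d2 = np.array([np.sin(np.radians(b2)), np.cos(np.radians(b2))])
    A = np.column_stack([d1, -d2])
    t, _ = np.linalg.solve(A, np.array(p2, float) - np.array(p1, float))
    return np.array(p1, float) + t * d1

# Two pairs 200 m apart, both facing north (broadside 0 degrees).
b1 = bearing_from_tdoa(dt=0.00182, spacing=1.0, broadside_deg=0.0)   # pair at (0, 0)
b2 = bearing_from_tdoa(dt=-0.00137, spacing=1.0, broadside_deg=0.0)  # pair at (200, 0)
print(intersect_bearings((0.0, 0.0), b1, (200.0, 0.0), b2))          # roughly (120, 150) m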

See full post
Link

Overview of terrestrial audio recorders with associated metadata info

This paper includes, in its supplementary materials, a helpful table comparing several acoustic recorders used in terrestrial environments, along with associated audio and metadata information.

1
discussion

scikit-maad community

 We, @jsulloa and @Sylvain_H, are the main contributors of scikit-maad, an open source Python package dedicated to the quantitative analysis of environmental audio recordings...

20 8

Hello!

I could be wrong, but from looking at the source code of the ACI calculation in scikit-maad, it appears that it auto-sets j = length of the signal. If you want to break it down into e.g. 5 windows, you can break the spectrogram into 5 chunks along time and then sum the ACI in each window. This gives a result more similar to what you got using the other methods. What did you set as nbwindows for seewave?

 

import numpy as np
from maad import sound, features

s, fs = sound.load('test_data_ACI/LINE_2003-10-30_20_00_34.wav')
Sxx, tn, fn, ext = sound.spectrogram(s, fs, mode='amplitude')

full_time = Sxx.shape[1]  # number of time frames in the spectrogram
j_count = 5  # number of chunks we want to break it into
window_size = int(np.floor(full_time / j_count))  # time frames per chunk

ACI_tot = 0
for i in range(j_count):
    # ACI of the i-th time chunk (third return value is the summed ACI)
    _, _, ACI = features.acoustic_complexity_index(Sxx[:, i * window_size:(i + 1) * window_size])
    ACI_tot = ACI_tot + int(ACI)

This gives ACI_tot = 1516

Hi all, 

I have recently been utilising the ROI module of scikit-maad to locate non-biophonic sounds across low-sample rate Arctic hydrophone datasets and have a query about how ROI centroids are calculated...

Looking at the source code for the function "centroid_features" in .\maad\features\shape.py, I can see that the function calls "center_of_mass" from .\scipy\ndimage\_measurements.py. This to me suggests that the centroid should be placed where energy is focussed, i.e. the middle of the acoustic signature captured by the masking stage of ROI computations.

I'm a bit confused as to why the centroids I have appear to be more or less placed in the centre of the computed ROIs, regardless of the energy distribution within the ROI. The sounds I'm capturing have energy focussed towards lower frequencies of the ROI bands, so I would have expected the centroid to shift downwards as well.

Has anyone modified how ROI centroids are defined in their work? I'd be quite interested to set up centroids to signify where the peak energy level lies in the ROI, but I'm not quite sure how to do this cleanly.

Any advice would be greatly appreciated, thanks!

Kind regards,

Jonathan
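
One quick way to get a peak-energy point instead of the centre-of-mass centroid is to take the argmax of the spectrogram inside each ROI bounding box. A rough numpy sketch (not part of the scikit-maad API; the min_t/max_t/min_f/max_f column names are assumed to follow the usual scikit-maad ROI dataframe convention):

import numpy as np

def peak_energy_points(Sxx, tn, fn, df_rois):
    peaks = []
    for _, roi in df_rois.iterrows():
        # indices of the ROI bounding box within the spectrogram
        t_idx = np.where((tn >= roi["min_t"]) & (tn <= roi["max_t"]))[0]
        f_idx = np.where((fn >= roi["min_f"]) & (fn <= roi["max_f"]))[0]
        sub = Sxx[np.ix_(f_idx, t_idx)]
        # location of the single highest-energy cell within the ROI
        f_rel, t_rel = np.unravel_index(np.argmax(sub), sub.shape)
        peaks.append((tn[t_idx[t_rel]], fn[f_idx[f_rel]]))
    return peaks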

 

New stable release : v1.5.1

We are pleased to announce the latest release with several important enhancements, fixes and documentation improvements to ensure compatibility with the latest versions of SciPy and scikit-image, as well as with Xeno-Canto.

In this new version, 2 new alpha indices are implemented, aROI and nROI, the latter being a good proxy of the average species richness per 1-min soundscape.



 

See full post
discussion

The boring fund: Standardizing Passive Acoustic Monitoring (PAM) data - Safe & sound

Thanks to the Boring Fund, we are developing a common standard for Passive Acoustic Monitoring (PAM) data. Why it’s important: PAM is rapidly growing, but a core bottleneck is the...

7 13

This is such an important project! I can't wait to hear about the results. 

Hey Sanne, awesome - we definitely need a consistent metadata standard for PAM.

If you haven't already, I would suggest sharing this on the Conservation Bioacoustics Slack channel and the AI for Conservation Slack channel. You would reach a lot of active users of PAM, including some folks who have worked on similar metadata efforts. 

If you're not a member of either one of those, DM me your preferred email address and I'll send you an invite!

Hello everyone,

Thank you all for your contribution!

You can read some updates about this project in this post.

Julia

See full post
discussion

Field-Ready Bioacoustics System in Field Testing 

Hi all — I’m Travis, an automation engineer and conservation tech builder currently testing a system I’ve developed called Orpheus: a fully integrated, field-...

4 2

Hi Carly,

Thanks so much for your thoughtful message—and for introducing me to Freaklabs! BoomBox looks awesome, and it’s exciting to see how closely our goals align. There’s definitely potential for collaboration, and I’d be happy to chat more. Their system is super efficient and I think both of our systems have a place in this space. 

Affordability and reliability were key considerations when I started building Orpheus. I wanted to create something rugged enough to survive in the field year-round while still being accessible for conservationists with limited budgets. The full-featured unit is €1500, and the basic model is €800. That pricing reflects both the hardware and the considerable time I’ve spent writing and refining the system—it’s all about balancing performance, durability, and keeping it sustainable for the long term.

Even the base unit is more than just a playback device. It logs every playback event, duration, and species, with enough onboard storage for two years of data, and it automatically converts the logs to line protocol for easy integration into platforms like InfluxDB.
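
For anyone unfamiliar with InfluxDB line protocol, the conversion is essentially structured string formatting. A rough sketch of what such a log-to-line-protocol step could look like (field and tag names are made up for illustration, not Orpheus's actual schema):

from datetime import datetime, timezone

def to_line_protocol(event):
    # measurement,tag=value,... field=value,... timestamp-in-nanoseconds
    tags = f"species={event['species']},device={event['device']}"
    fields = f"duration_s={event['duration_s']},file=\"{event['file']}\""
    ts_ns = int(event["timestamp"].timestamp() * 1e9)
    return f"playback,{tags} {fields} {ts_ns}"

print(to_line_protocol({
    "species": "Strix_aluco",
    "device": "orpheus-01",
    "duration_s": 12.5,
    "file": "tawny_owl_call.wav",
    "timestamp": datetime(2025, 5, 1, 4, 30, tzinfo=timezone.utc),
}))
# playback,species=Strix_aluco,device=orpheus-01 duration_s=12.5,file="tawny_owl_call.wav" 1746073800000000000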

On top of that, Orpheus actively logs and graphs temperature, humidity, atmospheric pressure, and battery voltage. During deep sleep, it interpolates the environmental data to preserve meaningful trends without wasting energy. You can view any of these on its 5" touch screen or in the cross-platform app that will support both Android and iOS once I'm done programming it.

As for audio specs:

  • Recording is supported up to 96kHz,
  • Playback is full 24-bit, both MP3 and WAV formats
  • The system currently supports recording audio clips, reviewing them, and even adding those clips directly to playlists on the device.

That said, for bat research, I know ultrasonic capability is essential. While the current hardware doesn’t capture over 100kHz, I’ve already done the research and identified alternative audio interfaces that would support that range. If that’s a need researchers are interested in, I’d be open to building out a dedicated version to meet those requirements.

Power-wise, it runs indefinitely on solar, even under partly cloudy conditions. It uses a LiFePO₄ battery and, depending on usage, can operate for up to two weeks on battery alone. It also supports external power from 12V or 24V systems, and solar input from 12V to 70V, so it's pretty adaptable to various field setups. It can also operate from -5 to 70°C (still testing that, but the hardware should be capable according to specs). You're correct, though, that in places like the rainforest that could be challenging and an alternative would need to be considered. 

The software is written modularly to allow for expansion based on user needs. For instance, I’ve already integrated support for a rain sensor that can pause playback if the user chooses that, and could easily include PIR, microwave, or other sensors for more specialized triggers.

Regarding durability, I’m currently testing mesh cable sheathing to deter rodents and other wildlife from chewing the wires—this was a concern raised by one of the teams using Orpheus, and I’m designing around it.

Also, Orpheus includes a seasonal scheduling engine—you can define your own seasons (like Migration, Breeding, etc.) and assign unique playback playlists to each. The device uses astronomical data (sunrise/sunset) based on your provided lat/lon and time zone, and automatically adjusts timing offsets like “1 hour before sunrise.” The goal is truly fire-and-forget deployment.
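
A minimal sketch of that kind of sunrise-offset calculation, using the astral package as one possible approach (illustrative only, not Orpheus's actual implementation):

import datetime as dt
from zoneinfo import ZoneInfo
from astral import LocationInfo
from astral.sun import sun

# Hypothetical deployment site; lat/lon/time zone would come from the device config.
site = LocationInfo(name="Site", region="UK", timezone="Europe/London",
                    latitude=51.5, longitude=-0.1)
tz = ZoneInfo(site.timezone)

events = sun(site.observer, date=dt.date.today(), tzinfo=tz)

# "1 hour before sunrise" style offset
playback_start = events["sunrise"] - dt.timedelta(hours=1)
print("Sunrise:", events["sunrise"], "-> start playback at:", playback_start)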

I'm open to adding any features or sensors that might be useful within reason.

I’m curious though: what specs would make a recording device for bats an indispensable tool? What features don’t already exist on the market that should?


 

Warm regards,
Travis

I love the look of the system! We almost called our new sensor Orpheus, but decided against it as there is already a microphone named that! I'd love to see a bit more about the technical implementation! Is this running off of a CM5 or something different? 

Hi Ryan, hmm, I had no idea there was a microphone named that. I thought about how it’s used to lure birds for netting, and I like Greek mythology, so I thought it was a perfect fit, but hmm, I may have to change the name. I considered using a CM, but I wanted the system to be as efficient as possible. I am using an RPi Zero 2 W with eMMC. To ensure the UI stays responsive I used some backend tricks like thread pooling. It works well and resources stay in check. The challenging part is ensuring thread handling is done gracefully and carefully to prevent race conditions. What sort of sensor have you been developing?

See full post
article

Nature Tech for Biodiversity Sector Map launched!

Carly Batist and 1 more
Conservation International is proud to announce the launch of the Nature Tech for Biodiversity Sector Map, developed in partnership with the Nature Tech Collective! 

1 0
Thanks for sharing @carlybatist and @aliburchard! About the first point, lack of data integration and interpretation will be a bottleneck, if not a death blow, to the whole...
See full post
discussion

WILDLABS AWARDS 2024 - BumbleBuzz: automatic recognition of bumblebee species and behaviour from their buzzing sounds 

The 'BumbleBuzz' team (@JeremyFroidevaux, @DarrylCox, @RichardComont, @TBFBumblebee, @KJPark, @yvesbas, @ilyassmoummad, @nicofarr) is very pleased to have been awarded the...

3 5

Super great to see that there will be more work on insect ecoacoustics! So prevalent in practically every soundscape, but so often over-looked. Can't wait to follow this project as it develops!

Really looking forward to following this project. I'm very curious how you'll be able to tease out different species, particularly among species that feature a variety of worker sizes. 

See full post
discussion

Nature Tech Unconference - Anyone attending?

Hi all, anyone planning to attend the Nature Tech Unconference on 28th March at the London School of Economics Campus in London, UK? (the event is free to attend but...

8 1

The Futures Wild team will be there :)

See full post
Link

InsectSet459: an open dataset of insect sounds for bioacoustic machine learning

InsectSet459 - the first large-scale open dataset of insect sounds, featuring 26,399 audio clips from 459 species of Orthoptera and Cicadidae.

0
discussion

Transfer learning with BirdNet for avian and non-avian detections

Who here has trained BirdNet to enable sound detection of other avian and non-avian species? I'd love to hear from you and would be grateful if you could share details about your...

13 1

Hi,

We have been working on creating custom classifiers for frogs in India and have had moderate success in this effort. Our workflow is simple - we created around 100 3-second snippets for each species class in Raven Pro (tip: make sure you include only species-specific calls obtained from the same PAM data and ensure a decent mix of good- and medium-quality sound files) and ran the custom classifier in the BirdNET GUI with the default settings. This gives decent output - although the 'cocktail party problem' is a big issue in species-rich regions like the tropics. Species that tend to call in chorus along with many other frog species are hard to train for. We are currently looking for solutions to address this. Would like to know how others are dealing with this issue as well.

Thanks

Hi, y'all!  Yes, we've found that building custom classifiers on top of bird classifier embeddings (including BirdNET) often works very well!

We originally reported good performance on (frequency shifted) bats, anurans, and marine mammals. We've repeatedly seen good performance on real-world passive acoustic data, as well.

Alade Allen-Ankins recently published a paper applying BirdNET embeddings to Australian anurans.

Kath et al. also recently used transfer learning from BirdNET embeddings to obtain a large improvement on AnuraSet's baseline score (F1-macro improved from 0.378 in the original paper to 0.588 using transfer from bird embeddings).

Our group should also have a new paper up shortly with practical suggestions for improving transfer learning even further. At the very least, transfer learning should provide a much faster way to collect and curate training data than direct human annotation. I highly recommend it as a first step on new problems: You will likely get a good-enough classifier for your needs very quickly, and if not, you'll produce a very useful dataset for further work much more quickly than direct annotation would allow.
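
As a generic illustration of that embed-then-classify recipe, the sketch below fits a simple linear classifier on clip embeddings (embed_clip() is a placeholder standing in for BirdNET or another bird model's embedding output, not a real API call):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def embed_clip(path):
    # Placeholder: return a fixed-length embedding for one audio clip.
    # In practice this would be the embedding vector from BirdNET or a similar model.
    rng = np.random.default_rng(abs(hash(path)) % (2**32))
    return rng.normal(size=320)  # pretend 320-d embedding

# labelled_clips: (file path, species label) pairs you have annotated; toy stand-ins here.
labelled_clips = [(f"clips/clip_{i:03d}.wav", i % 4) for i in range(200)]

X = np.stack([embed_clip(p) for p, _ in labelled_clips])
y = np.array([label for _, label in labelled_clips])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))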

Hi Danielle, 

A friend of mine had a fellowship at the K. Lisa Yang Center and she successfully built a classifier for the Javan slow loris. So far I've only heard that she made one for sonic calls; not sure if it is possible for ultrasonics.

See full post