
Acoustics / Feed

Acoustic monitoring is one of our biggest and most active groups. Members collect, analyse, and interpret acoustic data across species, ecosystems, and applications, from animal vocalizations to the sounds of our natural and built environment.

discussion

Safe and Sound: a standard for bioacoustic data

Background: Thanks to the Boring Fund, we are developing a common standard for Passive Acoustic Monitoring (PAM) data. Why it's important: PAM is rapidly growing,...


Fantastic! Can't wait to hear updates. 

See full post
discussion

🐸 WILDLABS Awards 2025: Open-Source Solutions for Amphibian Monitoring: Adapting Autonomous Recording Devices (ARDs) and AI-Based Detection in Patagonia

We’re excited to launch our WILDLABS-funded project to adapt open-source recording hardware and AI tools to help monitor amphibians, with an initial focus on one of South America'...


Project Update — Sensors, Sounds, and DIY Solutions (sensores, sonidos y soluciones caseras)



We continue making progress on our bioacoustics project focused on the conservation of Patagonian amphibians, thanks to the support of WILDLABS. Here are some of the areas we’ve been working on in recent months:

1. Hardware

One of our main goals was to explore how to improve AudioMoth recorders to capture not only sound but also key environmental variables for amphibian monitoring. We tested an implementation of the I2C protocol using bit banging via the GPIO pins, allowing us to connect external sensors. The modified firmware is already available in our repository:

👉 https://gitlab.com/emiliobascary/audiomoth

We are still working on managing power consumption and integrating specific sensors, but the initial tests have been promising.
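For anyone curious what bit-banging I2C over GPIO involves, here is a rough, language-agnostic sketch of the idea. It is written in Python with a stub GPIO class so it runs anywhere; the actual firmware change is in C in the repository above, and the pin helpers below are hypothetical stand-ins, not AudioMoth APIs.

```python
import time

I2C_DELAY = 5e-6  # ~100 kHz clock; real firmware tunes this to the sensor


class StubGPIO:
    """Stand-in for real open-drain pins so the sketch runs anywhere."""

    def __init__(self):
        self.lines = {}

    def release(self, pin):   # pin floats; the pull-up takes the line high
        self.lines[pin] = 1

    def drive_low(self, pin):
        self.lines[pin] = 0

    def read(self, pin):
        return self.lines.get(pin, 1)


class BitBangI2C:
    """Minimal I2C master: start condition plus one byte write with ACK check."""

    def __init__(self, gpio, sda="SDA", scl="SCL"):
        self.gpio, self.sda, self.scl = gpio, sda, scl
        gpio.release(sda)   # both lines idle high
        gpio.release(scl)

    def _tick(self):
        time.sleep(I2C_DELAY)

    def start(self):        # SDA falls while SCL is high
        self.gpio.drive_low(self.sda); self._tick()
        self.gpio.drive_low(self.scl); self._tick()

    def write_byte(self, byte):
        for bit in range(7, -1, -1):   # MSB first
            if (byte >> bit) & 1:
                self.gpio.release(self.sda)
            else:
                self.gpio.drive_low(self.sda)
            self.gpio.release(self.scl); self._tick()   # clock the bit out
            self.gpio.drive_low(self.scl); self._tick()
        self.gpio.release(self.sda)                     # let the sensor ACK
        self.gpio.release(self.scl); self._tick()
        acked = self.gpio.read(self.sda) == 0           # ACK = SDA pulled low
        self.gpio.drive_low(self.scl); self._tick()
        return acked


bus = BitBangI2C(StubGPIO())
bus.start()
# 0x76 is a typical environmental-sensor address (e.g. a BME280); with the
# stub there is no real sensor on the bus, so this prints False.
print("sensor ACKed:", bus.write_byte(0x76 << 1))
```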




2. Software (AI)

We explored different strategies for automatically detecting vocalizations in complex acoustic landscapes.

BirdNET is by far the most widely used, but we noted that it’s implemented in TensorFlow — a library that is becoming somewhat outdated.

This gave us the opportunity to reimplement it in PyTorch (currently the most widely used and actively maintained deep learning library) and begin pretraining a new model using AnuraSet and our own data. Given the rapid evolution of neural network architectures, we also took the chance to experiment with Transformers — specifically, Whisper and DeltaNet.
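As a rough illustration of that direction (not the model we will release), a minimal PyTorch training step over log-mel spectrograms could look something like this; the layer sizes, class count, and multi-label loss are placeholders:

```python
import torch
import torch.nn as nn


class SpectrogramClassifier(nn.Module):
    """Toy CNN over log-mel spectrograms; a stand-in for the real model."""

    def __init__(self, n_classes: int = 42):  # class count is a placeholder
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                 # x: (batch, 1, mel_bins, time_frames)
        z = self.backbone(x).flatten(1)
        return self.head(z)               # per-class logits


model = SpectrogramClassifier()
dummy = torch.randn(8, 1, 128, 256)       # batch of fake spectrograms
logits = model(dummy)
# Multi-label framing (several species can call at once); all-zero dummy targets.
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros_like(logits))
loss.backward()
```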

Our code and progress will be shared soon on GitHub.


3. Miscellaneous

Alongside hardware and software, we’ve been refining our workflow.

We found interesting points of alignment with the “Safe and Sound: a standard for bioacoustic data” initiative (still in progress), which offers clear guidelines for folder structure and data handling in bioacoustics. This is helping us design protocols that ensure organization, traceability, and future reuse of our data.
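Since the standard is still being drafted, the exact layout isn't fixed yet, but the kind of lightweight check we have in mind looks roughly like the sketch below (the folder names and metadata filename are illustrative placeholders, not Safe and Sound requirements):

```python
from pathlib import Path

# Illustrative layout only (the Safe and Sound standard is still in progress):
#   project/<site>/<deployment_id>/metadata.json
#   project/<site>/<deployment_id>/audio/*.wav

def check_deployments(root: Path) -> list[str]:
    """Report deployments that are missing metadata or audio files."""
    problems = []
    for deployment in sorted(root.glob("*/*")):
        if not deployment.is_dir():
            continue
        if not (deployment / "metadata.json").is_file():
            problems.append(f"{deployment}: missing metadata.json")
        if not list((deployment / "audio").glob("*.wav")):
            problems.append(f"{deployment}: no WAV files in audio/")
    return problems


for issue in check_deployments(Path("project")):
    print(issue)
```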

We also discussed annotation criteria with @jsulloa to ensure consistent and replicable labeling that supports the training of automatic models.

We're excited to continue sharing experiences with the Latin America Community; we know we share many of the same challenges, but also great potential to apply these technologies to conservation in our region.


Love this team <3
We would love to connect with teams also working on the whole AI pipeline: pretraining, fine-tuning, and deployment! Training of the models is in progress, and we know lots can be learned from your experiences!

Also, we are approaching the UI design and development from a software-on-demand philosophy. Why depend on third-party software and have to learn, adapt to, and comply with its UX and ecosystem? Thanks to agentic AI in our IDEs we can quickly and reliably iterate towards tools designed to satisfy our own specific needs and practices, putting the researcher first.

Your ideas, thoughts or critiques are very much welcome!

Kudos for such an innovative approach; integrating additional sensors with acoustic recorders is a brilliant step forward! I'm especially interested in how you tackle energy autonomy, which I personally see as the main limitation of AudioMoths.

Looking forward to seeing how your system evolves!

See full post
discussion

Issues with Audiomoth recorders

I am part of a research group using AudioMoth recorders to capture bird calls. We have had some issues with the AudioMoths, and I was wondering if anyone else might be experiencing...


We don't wrap all the way around, just halfway around, sticking to either side of the battery case right next to, but not touching, the circuit board. If that's not clear, I can take a pic :)

See full post
discussion

A technical and conceptual curiosity... Could generative AI help us simulate or decode animal communication?

Hi everyone, I recently watched a talk by Aza Raskin where he discusses the idea of using generative models to explore communication with whales. While the conversation wasn't...


Hi Jorge, 

 

I think you'll find this research interesting: https://blog.google/technology/ai/dolphingemma/

Google's researchers did exactly that. They trained an LLM on dolphin vocalizations to produce continuation output, much like the autoregressive models you mentioned, VALL-E and WaveNet.
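To make the "continuation" idea concrete: the training objective is just next-token prediction over discretized audio. A toy sketch (generic, not Google's actual model or data pipeline, and assuming the clips have already been turned into tokens) might look like:

```python
import torch
import torch.nn as nn

VOCAB = 1024  # hypothetical audio-token codebook size


class TinyContinuationModel(nn.Module):
    """Minimal autoregressive model over audio tokens."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, 128)
        self.rnn = nn.GRU(128, 256, batch_first=True)
        self.out = nn.Linear(256, VOCAB)

    def forward(self, tokens):             # tokens: (batch, time)
        h, _ = self.rnn(self.embed(tokens))
        return self.out(h)                 # logits for the next token at each step


model = TinyContinuationModel()
tokens = torch.randint(0, VOCAB, (4, 200))   # stand-in for tokenized clicks/whistles
logits = model(tokens[:, :-1])               # predict token t+1 from tokens <= t
loss = nn.CrossEntropyLoss()(logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
loss.backward()
```

At inference time the same model can be sampled token by token to "continue" a real vocalization, which is the experiment described above.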

I think they plan to test it in the field this summer and see if it will produce any interesting interaction.

Looking forward to seeing what they'll find :)

Also, here are two more cool organizations working on understanding animal communication with AI:

https://www.projectceti.org/

https://www.earthspecies.org/

This is a really fascinating concept. I’ve been thinking about similar overlaps between AI and animal communication, especially for conservation applications. Definitely interested in seeing where this kind of work goes.

This is such a compelling direction, especially the idea of linking unsupervised vocalisation clustering to generative models for controlled playback. I haven’t seen much done with SpecGAN or AudioLDM in this space yet, but the potential is huge. Definitely curious how the field might adopt this for species beyond whales. Following this thread closely!

See full post
discussion

Issue with SongBeam recorder

Hello everyone, I am currently working on a project to measure the impact of industrial noise on the biodiversity of a natural reserve in Veracruz, Mexico. I have been...


Hi Josept! Thank you for sharing your experience! This type of feedback is important for the community to know about when choosing what tech to use for their work. Would you be interested in sharing a review of SongBeam and the AudioMoth on The Inventory, our wiki-style database of conservation tech tools, R&D projects, and organizations? You can learn more here about how to leave reviews!

See full post
discussion

Analyzing Bird Song

When I go out and work on building trails in my woodlot here in New Brunswick Canada, I usually have my phone hanging from a branch with Merlin running. Amazing the number of...


Hi John,

You can try uploading it to BirdNET or using the BirdNET GUI.

I believe BirdNET's native classifier is already good enough for the majority of North American birds.
 

Another option you can try is HawkEars, which is a classifier made particularly for Canadian birds. Unlike BirdNET, it doesn't have a graphical interface, though.
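If you'd rather script it than use the GUI, something along these lines should work with the birdnetlib wrapper (assuming that package fits your setup; the file name, coordinates, and date below are placeholders):

```python
from datetime import datetime

from birdnetlib import Recording
from birdnetlib.analyzer import Analyzer

# Location and date help BirdNET narrow the candidate species list.
analyzer = Analyzer()
recording = Recording(
    analyzer,
    "woodlot_morning.wav",        # your exported phone/AudioMoth recording
    lat=45.96, lon=-66.64,        # roughly New Brunswick; use your own site
    date=datetime(2025, 6, 15),
    min_conf=0.25,
)
recording.analyze()

for det in recording.detections:
    print(det["common_name"], det["start_time"], det["confidence"])
```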

See full post
event

The Variety Hour: July 2025

You’re invited to the WILDLABS Variety Hour, a monthly event that connects you to conservation tech's most exciting projects, research, and ideas. We can't wait to bring you a whole new season of speakers and...

See full post
discussion

Help Shape India’s First Centralized Bioacoustics Database – 5 Min Survey

Do you collect or use sound recordings of habitats or species like birds, frogs, mammals, or insects in India? We need your expertise!Please take 5 minutes to fill out this...


PROJECT UPDATE: 

We are curating an open-access, annotated acoustic dataset for India! 🇮🇳🌿

This project has two main goals:
🎧 To develop a freely available dataset of annotated acoustic data from across India (for non-commercial use)
📝 To publish a peer-reviewed paper with all community contributors that ensures transparency, credibility, and future usability

Join us: 
1️⃣ Read the FAQs for more details: 🔗 https://drive.google.com/file/u/1/d/1dYRZ579upiU6s-qMrlT1T3iNxbHfNFMi/view?usp=sharing
2️⃣ Fill out the Data Agreement Form to indicate your participation in the project: 🔗 https://docs.google.com/forms/d/e/1FAIpQLSezMdysMokgfP-6UKwnetRpCafTJpj55uvKnDe1G7KvKBbIaQ/viewform

By contributing your field recordings or annotation expertise, you’ll be supporting the future creation of AI-powered tools for wildlife detection and conservation research.

This initiative is community-powered and open science-driven. Whether you're a researcher, sound recordist, conservationist, or bioacoustics enthusiast, your input will shape the future of biodiversity research in India.

Contact indiaecocacousticsnetwork@gmail.com for any questions or collaborations!

See full post
Link

Still open for funding

Still unfunded, we are looking for a lab office, materials, and used instruments, and to create our educational basement.

discussion

Audiomoth and Natterjack Monitoring (UK)

Is anyone aware of anything that has been published, or anyone who is using Audiomoth (or similar) to monitor Natterjack toad calling at night? Or using it to monitor any night-...


Just out of curiosity (and apologies for resurrecting such an ancient thread) - did you ever get this off the ground? I just saw an article that mentioned how noisy natterjack toads are so I thought I'd google whether anyone's using bioacoustics to monitor them, and this popped up!

Issy (hi again!) - I'm currently trying to finish off a paper based on three years of natterjack surveys at WWT Caerlaverock, and am involved in a new project this year, working with ARC on deployments at Saltfleetby, Talacre (just got back from there a few minutes ago), and Woolmer Forest. So yep, definitely off the ground with this species. Ta, Carlos

Just to provide an update to my post from 2021. Colleagues and I at the BTO have now built a species classifier for the BTO Acoustic Pipeline for the sound identification of all UK native and non-native frog and toad species - including Natterjack Toad.

As with all our audible / bird classifiers, this will be free to use where the results / data can be shared.

We are currently working with researchers at ARC, and a few other individuals to test and refine the identification for particular species before we make this publicly available. 

I would be interested to hear from anyone with sound recordings of Natterjack Toad, who would be happy to try out a beta version of the classifier.

See full post
discussion

Acoustic recording tags for marine mammals (soundscape research)

Does anyone have advice on finding/using commercially available biologgers (or buoy systems) with acoustic sensors for recording sound? We are interested in studying the behavioral...


Hi There! 

For biologgers, I think there are a few on the market that have been attached to marine mammals. I've provided a few links to ones that I know have been used for understanding the behavioural responses to sound - Dtags made by SMRU and Acousonde tags. 

You could also have a look through The Inventory here on WildLabs for other devices on the market.  

Hope that helps!

Courtney

Hey Maggie, 

@TomKnowles and his team, 2025 awardees, are working on acoustic tags to be deployed on basking sharks!

See full post
discussion

I WANT TO TELL YOUR STORY

I create ocean exploration and marine life content on YouTube, whether it be recording nautilus on BRUVs, swimming with endangered bowmouth guitarfish, documenting reef...

See full post
discussion

New Group Proposal: Systems Builders & PACIM Designers

Co-Creating Collective Impact Across the Conservation Technology Ecosystem. Dear WILDLABS Community, I am proposing the creation of a new WILDLABS group focused on...


Hi Chad,

Thanks for the text. As I read it, PACIMs play a role in something else/bigger, but it doesn't explain what PACIMs are or what they look like. Having re-read your original post, I think maybe I do understand, but then I feel the concept is too big (an entire system can be part of a PACIM?) to get going within a WildLabs group. And you want to develop 10 PACIMs within a year through this group? Don't get me wrong, I am all for some systems change, but perhaps you're aiming too high.

Hello again sir - the way I see it, PACIMs really just mean 'projects'. Each part of the acronym can be seen as a project (if you have an assignment to do, you really have a project).

 

As for your query about 10 projects in 'this' group - I should ask for clarification: do you mean acoustics in particular, or any group? (I see now this is the acoustics thread, after I selected all the groups for this post.) If you are asking about acoustics, you're right - I am unsure about 10, as I am not too familiar with acoustics yet. If you are asking about 10 projects as a whole, like 10 projects in the Funding and Finance group, I believe 10 to be a very reasonable number. The projects we have co-created are for the most part replicable, rapidly deployable, quickly scalable, fundable through blended finance, and more.

 

Thank you again for the feedback.

Thank you for your reply, Chad

I meant 10 as a whole, indeed. Perhaps you see your post in one group, but since it is tagged for all groups, I assumed you meant 10 in total.

In your first post you explain PACIM stands for "Projects, Assignments, Campaigns, Initiatives, Movements, and Systems", so I understood it as more than just projects. Obviously, many things can be packed into a project or called a project, but then, what does it mean that 'Projects' is part of the list?

Well, if you think 10 projects is doable, then don't let me stop you.

See full post
discussion

Feedback needed for new Tech Concepts

Hi, My name is Gina. I'm currently working on my Bachelor's Thesis and would like some feedback on some concepts I have worked out. The concepts involve new technology for...


Gina-  Sounds like an interesting thesis topic! I work with bioacoustics in offshore waters and I'd be happy to have a chat and provide feedback-- feel free to message me via Wildlabs. 

See full post
discussion

Improving the performance of the pippyg static bat detector with a remote microphone

UPDATE: Thanks to some EXTREMELY useful and positive feedback, I have made revisions to the pippyg design and the results are remarkable. I can now...


All inputs gratefully received and acknowledged! There will be experimentation with ferrite and resistor combinations when I get my components in. 

The circuit on shipping PCBs has a 220 ohm resistor on the mic and op amp to act as a low-pass filter and suppress high-frequency supply noise. The combined current drawn by the op amp and mic is low enough that the 3.3 V input only drops to around 3.2 V beyond the filter, definitely fine for both the op amp and the microphone. The combination of 44 uF and 220 ohms puts the 3 dB point at about 17 Hz, so the filtering effect even at 17 kHz - 10 octaves above that - is 60 dB. The supply to the analog circuit should therefore be very stable and quiet at the frequencies we care about, 20 kHz and beyond, which is why the clicking noises with high-frequency components were so annoying - they're clearly acoustic, not electrical, in origin. After all the comments on here it's pretty clear that, even though SD cards do seem to generate ultrasound, the ceramic capacitors I've been using are the bigger culprit, so I need to address that and report back with an update to this post.
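For anyone who wants to check those numbers, the corner frequency and roll-off quoted above drop straight out of the standard first-order RC formulas:

```python
import math

R = 220        # ohms, series resistor feeding the mic / op-amp supply
C = 44e-6      # farads, decoupling capacitance after the resistor

f_c = 1 / (2 * math.pi * R * C)                  # ~16 Hz corner frequency
octaves = math.log2(17_000 / f_c)                # ~10 octaves up to 17 kHz
attenuation_db = 20 * math.log10(17_000 / f_c)   # first-order: ~6 dB/octave

print(f"corner ≈ {f_c:.1f} Hz, {octaves:.1f} octaves below 17 kHz, "
      f"≈ {attenuation_db:.0f} dB of supply-noise rejection there")
```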

This has all been, and continues to be, EXTREMELY interesting and educational, and will ultimately lead to big improvements in what is already a great, affordable recorder.

Here is the final, dramatic word. I built a pipistrelle (noisier than a pippyg so a good test) and used non-ceramic capacitors and played with ferrites and 1R resistors. 

Attached are new vs old sonograms, and it's a remarkable improvement. The visual difference is stark, and there is a measured 5 dB improvement in the "silence" between Noctule calls. Thanks to everyone who commented on this; the problem was entirely about ceramic capacitors. I have no doubt SD cards do generate ultrasound, but in this case it was being dwarfed by the ultrasound generated by the capacitors.

Circuit changes shown below - 3x 22uF ceramics removed, 2x 22uF ceramics replaced by 47uF poly/tant, 5x 100nF ceramics replaced by 100nF NP0/C0Gs, 1 ferrite swapped for a 1R - this one may be pointless!

See full post