Acoustic sensors enable efficient and non-invasive monitoring of a wide range of species, including many that are difficult to monitor in other ways. Their application was initially limited largely by cost and hardware constraints, but the development of low-cost, open-source recorders like the AudioMoth in recent years has increased access immensely and opened up new avenues of research. For example, some teams are using them to identify illicit human activities through the detection of associated sounds, such as gunshots, vehicles, or chainsaws (e.g. OpenEars).
With this relatively novel dimension of wildlife monitoring rapidly advancing in both marine and terrestrial systems, it is crucial that we identify and share information about the utility and constraints of these sensors to inform efforts. A recent study identified advancements in hardware and machine learning applications, as well as early development of acoustic biodiversity indicators, as factors facilitating progress in the field. In terms of limitations, the authors highlight insufficient reference sound libraries, a lack of open-source audio processing tools, and a need for standardization of survey and analysis protocols. They also stress the importance of collaboration in moving forward, which is precisely what this group will aim to facilitate.
If you're new to acoustic monitoring and want to get up to speed on the basics, check out these beginner's resources and conversations from across the WILDLABS platform:
Three Resources for Beginners:
- Listening to Nature: The Emerging Field of Bioacoustics, Adam Welz
- Ecoacoustics and Biodiversity Monitoring, RSEC Journal
- Monitoring Ecosystems through Sound: The Present and Future of Passive Acoustics, Ella Browning and Rory Gibb
Three Forum Threads for Beginners:
- AudioMoth user guide | Tessa Rhinehart
- Audiomoth and Natterjack Monitoring (UK) | Stuart Newson
- Help with analysing bat recordings from Audiomoth | Carlos Abrahams
Three Tutorials for Beginners:
- "How do I perform automated recordings of bird assemblages?" | Carlos Abrahams, Tech Tutors
- "How do I scale up acoustic surveys with Audiomoths and automated processing?" | Tessa Rhinehart, Tech Tutors
- Acoustic Monitoring | David Watson, Ruby Lee, Andy Hill, and Dimitri Ponirakis, Virtual Meetups
Want to know more about acoustic monitoring and learn from experts in the WILDLABS community? Jump into the discussion in our Acoustic Monitoring group!
Header image: Carly Batist
- @Tayebwa
I am a wildlife conservationist from Uganda with interests in birds and wetlands.
Centre national de la recherche scientifique (CNRS)
Behavioural ecologist @CNRS in France - working on large mammals in Europe and Africa
- @williamkingwill
Hi, my name is William. I am a Senior Data Scientist / Remote Sensing Engineer with experience in GIS, machine learning, systems engineering, and data science pipelines. I am motivated and passionate about using my skills for wildlife and biodiversity conservation.
I'm a software developer. I have projects in practical object detection and alerting well suited to poacher detection, and a Raspberry Pi-based sound-localizing ARU project.
Biodiversity expert
WILDLABS & Fauna & Flora
I'm the Platform and Community Support Project Officer at WILDLABS! Speak to me if you have any inquiries about using the WILDLABS Platform or AI for Conservation: Office Hours.
- @TaliaSpeaker
WILDLABS & World Wide Fund for Nature/ World Wildlife Fund (WWF)
I'm the WILDLABS Research Specialist at WWF-US
WILDLABS & Wildlife Conservation Society (WCS)
I'm the Bioacoustics Research Analyst at WILDLABS. I'm a marine biologist with a particular interest in the acoustic behavior of cetaceans. I'm also a backend web developer, hoping to use technology to improve wildlife conservation efforts.
- @SamuelNtimale
Samuel Nti, a conservationist at APLORI, uses bioacoustics to protect endangered birds. His current work focuses on the African Grey Parrot in Nigeria, where he employs Passive Acoustic Monitoring (PAM) to inform conservation strategies.
Worked as a mechanical engineer for a defence company, then as a software engineer, then for a research lab specialising in underwater robotics.
Sustainability Manager for CERES Tag Ltd, an animal health company: animal monitoring, conservation, and anti-poaching/rural crime, covering wildlife, livestock, equine, and companion animals. #CeresTrace #CeresWild #CeresRanch
HawkEars is a deep learning model designed specifically to recognize the calls of 328 Canadian bird species and 13 amphibian species.
13 May 2025
The Biological Recording Company's ecoTECH YouTube playlist focuses on webinars about bioacoustic monitoring.
29 April 2025
This paper includes, in its supplementary materials, a helpful table comparing several acoustic recorders used in terrestrial environments, along with associated audio and metadata information.
15 April 2025
Conservation International is proud to announce the launch of the Nature Tech for Biodiversity Sector Map, developed in partnership with the Nature Tech Collective!
1 April 2025
PhD position available at the University of Konstanz in the Active Sensing Collective Group!
28 March 2025
InsectSet459 - the first large-scale open dataset of insect sounds, featuring 26,399 audio clips from 459 species of Orthoptera and Cicadidae.
24 March 2025
Naturalis is looking for a postdoc in AI for Ultrasonic Bioacoustic Monitoring
24 March 2025
Postdoctoral opening in the Bioacoustics and Behavioral Ecology Lab at Syracuse University
14 March 2025
Species identification from audio, focused on birds, amphibians, mammals and insects from the Middle Magdalena Valley of Colombia.
12 March 2025
Building species-specific habitat models for forest bird species using state-of-the-art methods and remote sensing layers
28 February 2025
Osa Conservation is launching our inaugural cohort of the ‘Susan Wojcicki Research Fellowship’ for 2025, worth up to $15,000 per awardee (award value dependent on project length and number of awards given each year)....
10 February 2025
This workshop introduced participants to the power of bioinformatics and next-generation sequencing (NGS) in conservation biology.
10 February 2025
Description | Replies | Groups | Updated
---|---|---|---
Karibu sana | | Acoustics, East Africa Community, Conservation Tech Training and Education | 9 months 2 weeks ago
Hi César, Is there a geographic area you're limited to? That would help to narrow down options. I would recommend reaching out perhaps to the Kitzes Lab, Sound Forest... | | Acoustics, Citizen Science, Community Base, Early Career, Marine Conservation | 9 months 2 weeks ago
Retweet on OpenSoundscape - great package and documentation that allows you to build your own CNNs! Note that this is in Python though. There are tons of bioacoustics... | | Acoustics, Data management and processing tools | 9 months 4 weeks ago
Hi @BrettMargoSupplies this product seems like a fantastic addition to The Inventory, WILDLABS' wiki-style discovery platform for conservation technology. Adding a product is... | | Acoustics | 9 months 4 weeks ago
Hi everyone, I am conducting a research project as part of my MSc in Environment and Development at the London School of Economics. I... | | Acoustics, AI for Conservation, Autonomous Camera Traps for Insects, Ethics of Conservation Tech | 9 months 4 weeks ago
Yeah that would be great - I have done a little looking into it today and I have some ideas. I'd love to collab. I will DM you | +16 | Acoustics | 9 months 4 weeks ago
Many thanks. That is a useful place to start. I don't think there is any shortage of blackbirds in the UK. And they nest in our garden every year. I don't remember the last time I... | | Acoustics | 10 months 1 week ago
Hi Luke, As Matthew has suggested, the best way is always to run a test run whenever you are doing something new. Put your recorders out for one or two days and see how they... | | Acoustics | 10 months 1 week ago
My organization has some Wildlife Acoustics SongMeter SM4 units that we're looking to sell. We need more mobile and cost-effective ARUs for... | | Acoustics | 10 months 1 week ago
Hi Chris - I missed this entire dialog, just to say I have successfully recorded all the "quiet" bats using pippyg / pippistrelle, but it inevitably turns into an SNR issue, and... | | Acoustics | 10 months 2 weeks ago
Fantastic!! | | Acoustics, AI for Conservation, Animal Movement, Build Your Own Data Logger Community, Community Base, Early Career, Ethics of Conservation Tech, Marine Conservation, Open Source Solutions | 11 months 2 weeks ago
If you search Digikey for a 'strain relief' you should be able to find a rubber grommet that will hold that mic without any additional machining. A blob of silicone adhesive... | | Acoustics | 11 months 4 weeks ago
Advice needed for accessible acoustic monitoring
15 May 2025 10:50am
20 May 2025 6:51pm
Hi Nev, from a high level, have you considered acoustic indices and the biodiversity index?
21 May 2025 2:50pm
Hi Mona. Yes, this is certainly what we are considering, but how to measure these indices in the hands of non-experts in the field is the advice that I need from the WL network. Cheers
Need advice for running BirdNET on big data
11 May 2025 12:14pm
17 May 2025 10:44am
I haven't tried BirdNET Analyzer, but with regards to running any big data/ML processing, my advice would be to look at something like Google Colab instead of your own laptop.
Hope this helps.
19 May 2025 12:22am
Would that be able to process locally stored acoustic data?
One of the great things about birdnet analyzer is that it is local - it doesn't require uploading terabytes of data into the cloud, which would be expensive, take forever, and likely have some transfer errors in areas with poor internet connection (like the tropics where I do my research).
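For anyone weighing the Colab-versus-local question above, a rough sketch of parallelizing a local analysis run over a folder of recordings. The `analyze_file` body here is a placeholder, not BirdNET Analyzer's actual API; swap in whatever per-file call your analyzer provides.

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def analyze_file(wav_path):
    """Placeholder: substitute your real analyzer invocation here
    (e.g. a subprocess call); returns (filename, detection_count)."""
    return (Path(wav_path).name, 0)

def batch_analyze(folder, workers=4):
    """Fan the per-file analysis out over a worker pool so the machine
    stays busy while other files load from disk."""
    wavs = sorted(Path(folder).glob("*.wav"))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(analyze_file, wavs))
```

Threads work well when each task mostly waits on an external analyzer process or disk I/O; for CPU-bound Python work, a process pool would be the usual swap.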
Applied Bioacoustics in Conservation and Practice
16 May 2025 4:21pm
Help Shape India’s First Centralized Bioacoustics Database – 5 Min Survey
13 May 2025 3:01pm
HawkEars: a high-performance bird sound classifier for Canada
13 May 2025 11:00am
Prospective NSF INTERN
11 February 2025 10:00am
8 May 2025 8:51am
My name is Frank Short and I am a PhD Candidate at Boston University in Biological Anthropology. I am currently doing fieldwork in Indonesia using machine-learning powered passive acoustic monitoring focusing on wild Bornean orangutans (and other primates). I am reaching out because as a student with a National Science Foundation Graduate Research Fellowship, I am eligible to take advantage of the NSF INTERN program which supports students to engage in non-academic internships through covering a stipend and other expenses, with the only caveat being that the internship must be in-person and not remote. I was wondering if any organizations in conservation technology would be interested in a full-time intern that would be coming in with their own funding?
In addition to experience with machine learning and acoustics through training a convolutional neural network for my research, I also have worked with GIS, remote sensing, and animal movement data through other projects. Further, I have experience in community outreach both in and outside of academic settings, as I previously worked for the Essex County Department of Parks and Recreation in New Jersey for 3 years where I created interpretive signs, exhibits, newsletters, brochures, and social media posts. Now while doing my fieldwork in Indonesia, I have led hands-on trainings in passive acoustic monitoring placement and analysis as well as given talks and presentations at local high schools and universities.
I would love to be able to use this opportunity (while the funding still exists, which is uncertain moving forward due to the current political climate in the US) to exercise and develop my skills at a non-academic institution in the conservation technology sphere! If anyone has any suggestions or is part of an organization that would be interested in having me as an intern, please contact me here or via my email: fshort@bu.edu. Thank you!
Hi Frank, your work sounds incredibly valuable and well-aligned with current needs in conservation tech. With your strong background in machine learning, acoustics, GIS, and outreach, you’d be an asset to many organizations. I’d recommend looking into groups like Rainforest Connection, Wildlife Acoustics, or the Conservation Tech Directory (by WILDLABS)—they often work on acoustic monitoring and might be open to in-person internships, especially with funding already in place. Best of luck finding the right match—your initiative is impressive!
Technology in Wildlife Welfare Workshop (in-person, UK)
6 May 2025 7:46pm
Submit Living Data Abstracts By 25 May for Listen to the Future: Mobilizing Bioacoustic Data to Meet Conservation Goals
5 May 2025 8:02pm
19 May 2025 6:36pm
Song of The Cricket
4 May 2025 11:01am
'Boring Fund' Workshop: AI for Biodiversity Monitoring in the Andes
5 February 2025 5:55pm
8 February 2025 4:29pm
Hey @benweinstein , this is really great. I bet there are better ways to find bofedales (puna fens) currently than what existed back in 2010. I'll share this with the Audubon Americas team.
2 May 2025 2:59pm
Hi everyone, following up here with a summary of our workshop!
The AI for Biodiversity Monitoring workshop brought together twenty-five participants to explore uses of machine learning for ecological monitoring. Sponsored by the WILDLABS ‘Boring Fund’, we were able to support travel and lodging for a four-day workshop at the University of Antioquia in Medelín, Colombia. The goal was to bring together ecologists interested in AI tools and data scientists interested in working on AI applications from Colombia and Ecuador. Participants were selected based on potential impact on their community, their readiness to contribute to the topic, and a broad category of representation, which balanced geographic origin, business versus academic experience, and career progression.
Before the workshop began I developed a website on GitHub that laid out the aims of the workshop and provided a public focal point for uploading information. I made a number of technical videos, covering subjects like VS Code + Copilot, both to inform participants and to create an atmosphere of early and easy communication. The WhatsApp group, the YouTube channel (link) of video introductions, and a steady drumbeat of short tutorial videos were key in establishing expectations for the workshop.
The workshop material was structured around data collection methods: Day 1, introduction and project organization; Day 2, camera traps; Day 3, bioacoustics; Day 4, airborne data. Each day I asked participants to install packages using conda, download code from GitHub, and be active in supporting each other in solving small technical problems. The large range of technical experience was key in developing peer support. I toyed with the idea of creating a JupyterHub or joint cloud working space, but I am glad that I resisted; it is important for participants to see how to solve package conflicts and the many other myriad installation challenges on 25 different laptops.
We banked some early wins to help ease intimidation and create a good flow to technical training. I started with GitHub and version control because it is broadly applicable, incredibly useful, and satisfying to learn. Using examples from my own work, I focused on GitHub as a way both to contribute to machine learning for biology and to receive help. Building from these command line tools, we explored VS Code + Copilot for automated code completion, and had a lively discussion on how to balance the utility of these new features with transparency and comprehension.
Days two, three and four flew by, with a general theme of existing foundational models, such as BirdNET for bioacoustics, MegaDetector for camera traps, and DeepForest for airborne observation. A short presentation each morning was followed by a worked Python example making predictions on new data, annotation using Label Studio, and model development with PyTorch Lightning. There is a temptation to develop Jupyter notebooks that outline perfect code step by step, but I prefer to let participants work through errors, using a live coding strategy. All materials are in Spanish and updated on the website. I was proud to see the level of joint support among participants, and tried to highlight these contributions to promote autonomy and peer teaching.
Sprinkled amongst the technical sessions, I had each participant create a two-slide talk, and I would randomly select from the group to break up sessions and help stir conversation. I took it as a good sign that I was often quietly pressured by participants to select their talk in our next random draw. While we had general technical goals and each day had one or two main lectures, I tried to be nimble, allowing space for suggestions. In response to feedback, we rerouted an afternoon to discuss biodiversity monitoring goals and data sources. Ironically, the biologists in the room later suggested that we needed to get back to code, and the data scientists said it was great. Weaving between technical and domain expertise requires an openness to change.
Boiling down my takeaways from this effort, I think there are three broad lessons for future workshops.
- The group dynamic is everything. Provide multiple avenues for participants to communicate with each other. We benefited from a smaller group of dedicated participants compared to inviting a larger number.
- Keep the objectives, number of packages, and size of sample datasets to a minimum.
- Foster peer learning and community development. Give time for everyone to speak. Step in aggressively as the arbiter of the schedule in order to allow all participants a space to contribute.
I am grateful to everyone who contributed to this effort both before and during the event to make it a success. Particular thanks goes to Dr. Juan Parra for hosting us at the University of Antioquia, UF staff for booking travel, Dr. Ethan White for his support and mentorship, and Emily Jack-Scott for her feedback on developing course materials. Credit for the ideas behind this workshop goes to Dr. Boris Tinoco, Dr. Sara Beery for her efforts at CV4Ecology and Dr. Juan Sebastian Ulloa. My co-instructors Dr. Jose Ruiz and Santiago Guzman were fantastic, and I’d like to thank ARM through the WILDLABS Boring fund for its generous support.
A standard for bioacoustic data - Safe and Sound
2 May 2025 9:51am
Remote Bat detector
8 April 2025 3:25pm
12 April 2025 10:16am
You will likely need an edge-ML setup where a model runs in real time and sends only detections. BirdWeather PUC and Haikubox do this, running BirdNET on the edge and sending over Bluetooth/WiFi/SMS, but you'd have to have these networks already set up in the places you want to deploy a device, which limits deployments in many areas where there is no connectivity.
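As a rough sketch of the edge pattern described above, detect locally and transmit only confident hits. The classifier and threshold here are placeholders, not the BirdWeather or Haikubox implementation.

```python
CONFIDENCE_THRESHOLD = 0.7  # assumed cutoff; tune per deployment

def classify(audio_chunk):
    """Placeholder for an on-device classifier (e.g. BirdNET running
    at the edge); returns (label, confidence) pairs."""
    return [("gunshot", 0.91), ("wind", 0.20)]

def detections_to_send(audio_chunk):
    """Keep only confident detections so the radio link carries a few
    bytes per event instead of raw audio."""
    return [(label, score) for label, score in classify(audio_chunk)
            if score >= CONFIDENCE_THRESHOLD]
```

The bandwidth win is the whole point: a detection record is tiny compared with continuous audio, which is what makes SMS-class links viable at all.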
13 April 2025 3:13pm
So we are building some Bugg devices on licence from Imperial College, and I am making a mod to change the mic out for a version of our Demeter microphone. This should mean we can use it with something like BattyBirdNET, as it's powered by a CM4. Happy to have a chat if you are interested. Otherwise it will likely need a custom solution, which can be quite expensive!
30 April 2025 2:43pm
There are a lot of parameters in principle here: the size of the battery; how much time in the field is acceptable before a visit (once a week? once a month?); how many devices you need; how small the device and battery have to be, etc. I use VPNs over 4G USB sticks.
I ask because in principle I've built devices that can retrieve files remotely and record ultrasonic audio, though the microphones I tested with (Pettersson) are around 300 euros in price. I've been able to record ultrasonic frequencies, and I also believe it will be possible to do TDOA sound localization with the output files if the same sound can be heard on 4 recorders.
But we make commercial devices, and such a thing would be a custom build. It would be nice to know what the demand for such a device is, to know at which point it becomes interesting.
ecoTECH YouTube webinar playlist by Biological Recording Company
29 April 2025 1:52pm
Reducing wind noise in AudioMoth recordings
23 June 2020 2:26am
28 April 2025 1:04pm
Just following up on this: we are suffering from excessive wind noise in our recordings. We have bought some dead cats, but our AudioMoths are in the latest Dev case (https://www.openacousticdevices.info/audiomoth).
In your collective experience, would anyone recommend trying to position the dead cat over the microphone on the Audiomoth itself, or covering the entry port into the device, from either the inside or the outside?
Cheers,
Tom
28 April 2025 10:23pm
As reported in this thread:
I have used Røde Dead Kitten wind jammers wrapped around the original AudioMoth cases.
28 April 2025 10:28pm
Hi Tom! I think the furry windjammer must be outside the casing to have the desired effect. It can be a bit tricky having this nice furry material that birds and other critters might be attracted to. It may be possible to make an outer "wire cage" to protect the wind jammer. We once had to do this to protect a DIY AudioMoth case against foxes wanting to bite the case (no wind jammer then). You may however create new wind issues with wind noise created by this cage... No-one said it had to be simple!
Experience with AudioMoth Dev for Acoustic Monitoring & Localization?
3 April 2025 2:40pm
28 April 2025 1:41pm
Hi Walter,
Thanks for your reply! It looks like the experiments found very minor time offsets, which is encouraging. Could you clarify what you mean by a "similar field setup"?
In my project, I plan to monitor free-ranging animals — meaning moving subjects — over an area of several square kilometers, so the conditions won't be exactly similar to the experimental setup described.
Given that, would you recommend using any additional tools or strategies to improve synchronization or localization accuracy?
28 April 2025 1:55pm
Hi Ryan,
Thanks for your reply! I'm glad to hear that the AudioMoth Dev units are considered powerful.
Have you ever tried applying multilateration to recordings made with them? I would love to know how well they perform in that context.
On a more technical note, do you know if lithium batteries (such as 3.7V LiPo) can provide a reliable power supply for Dev units in high temperature environments (around 30–50°C)?
Thanks,
Dana
28 April 2025 8:03pm
Hi Dana,
"similar field setup" means that the vocalizing animal should be surrounded by the recorders and you should have at least 4 audiomoths recording the same sound, then the localization maths is easy (in the end it is a single line of code). With 3 recorders that receive the sound localization is still possible but a little bit more complicated. With 2 recorders you get only some directions (with lift-right ambiguity).
Given the range of movements, and assuming that you do not have a huge quantity of recorders to 'fence' the animals, I would approach the tracking slightly differently. I would place the AudioMoths in pairs using a single GPS receiver powered by one recorder, but connect the PPS wire also to the other recorder. Both recorders, separated by up to 1 m, face the area of interest. For the analysis, I would then use each pair of recorders to estimate the angle to the animal. If you have the same sound at two locations, you will have 2 directions, which will give you the desired location. The timings at the different GPS locations may result in timing errors, but each direction is based on the same clock and the GPS timing errors are not relevant anymore. If you add a second microphone to the AudioMoths you can improve the direction further. If you need more specific info or want to chat about details (that are not of general interest) you can PM me.
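To illustrate the four-recorder case discussed above, here is one brute-force way to localize a source in 2-D from synchronized arrival times: grid-search the point whose predicted time differences best match the observed ones. This is a sketch for intuition, not the poster's one-line solution; positions and the speed of sound are assumed inputs.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def locate_grid(mics, toas, extent=60.0, step=0.5):
    """Brute-force 2-D source localization.

    mics : (N, 2) recorder positions in metres
    toas : (N,) arrival times in seconds on a common clock
    """
    xs = np.arange(-extent, extent, step)
    gx, gy = np.meshgrid(xs, xs)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)      # candidate sources
    dists = np.linalg.norm(pts[:, None, :] - mics[None, :, :], axis=2)
    pred = dists / SPEED_OF_SOUND                          # predicted TOAs
    # Compare TDOAs relative to mic 0 so the unknown emission time cancels.
    resid = (pred - pred[:, :1]) - (toas - toas[0])
    best = int(np.argmin((resid ** 2).sum(axis=1)))
    return pts[best]
```

Working with differences relative to one reference recorder is what removes the unknown call-emission time; accuracy is then bounded by the grid step and the clock synchronization between recorders.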
Safe and Sound: a standard for bioacoustic data
7 April 2025 11:05pm
23 April 2025 4:19pm
Fantastic! Can't wait to hear updates.
Introducción al uso de Kaleidoscope Pro para Murciélagos (Principiante)
18 April 2025 3:50pm
Overview of terrestrial audio recorders with associated metadata info
15 April 2025 2:53pm
scikit-maad community
8 August 2024 10:16am
19 February 2025 10:26am
Hello!
I could be wrong, but from looking at the source code of the ACI calculation on the scikit-maad it appears that it autosets j = length of signal. If you want to break it down to e.g. 5 windows you can break the spectrogram into 5 chunks along time and then sum the ACI in each window. This gives a more similar result to the results you got using the other methods. What did you set as nbwindows for seewave?
# Requires numpy and scikit-maad (pip install scikit-maad)
import numpy as np
from maad import sound, features

s, fs = sound.load('test_data_ACI/LINE_2003-10-30_20_00_34.wav')
Sxx, tn, fn, ext = sound.spectrogram(s, fs, mode='amplitude')
full_time = Sxx.shape[1]  # number of time frames in the spectrogram
j_count = 5               # number of chunks we want to break it into
window_size = np.floor(full_time / j_count)  # time frames per chunk
ACI_tot = 0
for i in range(j_count):
    chunk = Sxx[:, int(i * window_size):int(i * window_size + window_size)]
    _, _, ACI = features.acoustic_complexity_index(chunk)
    ACI_tot = ACI_tot + int(ACI)
This gives ACI_tot = 1516
6 March 2025 11:07am
Hi all,
I have recently been utilising the ROI module of scikit-maad to locate non-biophonic sounds across low-sample rate Arctic hydrophone datasets and have a query about how ROI centroids are calculated...
Looking at the source code for the function "centroid_features" in .\maad\features\shape.py, I can see that the function calls "center_of_mass" from .\scipy\ndimage\_measurements.py. This to me suggests that the centroid should be placed where energy is focussed, i.e. the middle of the acoustic signature captured by the masking stage of ROI computations.
I'm a bit confused as to why the centroids I have appear to be more or less placed in the centre of the computed ROIs, regardless of the energy distribution within the ROI. The sounds I'm capturing have energy focussed towards lower frequencies of the ROI bands, so I would have expected the centroid to shift downwards as well.
Has anyone modified how ROI centroids are defined in their work? I'd be quite interested to set up centroids to signify where the peak energy level lies in the ROI, but I'm not quite sure how to do this cleanly.
Any advice would be greatly appreciated, thanks!
Kind regards,
Jonathan
11 April 2025 10:08pm
We are pleased to announce the latest release, with several important enhancements, fixes, and documentation improvements to ensure compatibility with the latest versions of SciPy and scikit-image, as well as with Xeno-Canto.
In this new version, two new alpha indices are implemented, aROI and nROI, the latter being a good proxy for the average species richness per 1-minute soundscape.
Releases · scikit-maad/scikit-maad · GitHub
Open-source and modular toolbox for quantitative soundscape analysis in Python - Releases · scikit-maad/scikit-maad
The boring fund: Standardizing Passive Acoustic Monitoring (PAM) data - Safe & sound
27 January 2025 3:47pm
8 February 2025 11:00pm
This is such an important project! I can't wait to hear about the results.
12 February 2025 4:15pm
Hey Sanne, awesome - we definitely need a consistent metadata standard for PAM.
If you haven't already, I would suggest sharing this on the Conservation Bioacoustics Slack channel and the AI for Conservation Slack channel. You would reach a lot of active users of PAM, including some folks who have worked on similar metadata efforts.
If you're not a member of either one of those, DM me your preferred email address and I'll send you an invite!
7 April 2025 11:07pm
Hello everyone,
Thank you all for your contribution!
You can read some updates about this project in this post.
Julia
DCASE2025 and BioDCASE
Field-Ready Bioacoustics System in Field Testing
2 April 2025 10:38am
3 April 2025 3:38pm
Hi Carly,
Thanks so much for your thoughtful message—and for introducing me to Freaklabs! BoomBox looks awesome, and it’s exciting to see how closely our goals align. There’s definitely potential for collaboration, and I’d be happy to chat more. Their system is super efficient and I think both of our systems have a place in this space.
Affordability and reliability were key considerations when I started building Orpheus. I wanted to create something rugged enough to survive in the field year-round while still being accessible for conservationists with limited budgets. The full-featured unit is €1500, and the basic model is €800. That pricing reflects both the hardware and the considerable time I’ve spent writing and refining the system—it’s all about balancing performance, durability, and keeping it sustainable for the long term.
Even the base unit is more than just a playback device. It logs every playback event, duration, and species, with enough onboard storage for two years of data, and it automatically converts the logs to line protocol for easy integration into platforms like InfluxDB.
On top of that, Orpheus actively logs and graphs temperature, humidity, atmospheric pressure, and battery voltage. During deep sleep, it interpolates the environmental data to preserve meaningful trends without wasting energy. You can view any of these on its 5" touch screen or in the cross-platform app that will support both Android and iOS once I'm done programming it.
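For readers unfamiliar with the line-protocol conversion mentioned above, here is a minimal sketch of formatting one log record as an InfluxDB line-protocol string. The measurement, tag, and field names are hypothetical, not Orpheus's actual schema.

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Render one InfluxDB line-protocol record:
    measurement,tag=value field=value timestamp."""
    def fmt(v):
        if isinstance(v, bool):          # check bool before int (bool is an int subclass)
            return "true" if v else "false"
        if isinstance(v, int):
            return f"{v}i"               # integer fields carry an 'i' suffix
        if isinstance(v, float):
            return repr(v)
        return f'"{v}"'                  # string fields are double-quoted
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())
    field_str = ",".join(f"{k}={fmt(v)}" for k, v in fields.items())
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

# Hypothetical playback-log record:
# to_line_protocol("playback", {"species": "turdus_merula"},
#                  {"duration_s": 12.5, "clips": 3}, 1714650000000000000)
```

Emitting plain text like this is what lets a logger stay database-agnostic: the file can be replayed into InfluxDB later without the device ever needing a network connection.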
As for audio specs:
- Recording is supported up to 96kHz,
- Playback is full 24-bit, both MP3 and WAV formats
- The system currently supports recording audio clips, reviewing them, and even adding those clips directly to playlists on the device.
That said, for bat research, I know ultrasonic capability is essential. While the current hardware doesn’t capture over 100kHz, I’ve already done the research and identified alternative audio interfaces that would support that range. If that’s a need researchers are interested in, I’d be open to building out a dedicated version to meet those requirements.
Power-wise, it runs indefinitely on solar, even under partly cloudy conditions. It uses a LiFePO₄ battery, and depending on usage, it can operate for up to two weeks on battery alone. It also supports external power from 12V or 24V systems, and solar input from 12V to 70V, so it's pretty adaptable to various field setups. It can also operate from -5 to 70°C (still testing that, but the hardware should be capable according to specs). You're correct though that in places like the rainforest that could be challenging, and an alternative would need to be considered.
The software is written modularly to allow for expansion based on user needs. For instance, I’ve already integrated support for a rain sensor that can pause playback if the user chooses that, and could easily include PIR, microwave, or other sensors for more specialized triggers.
Regarding durability, I’m currently testing mesh cable sheathing to deter rodents and other wildlife from chewing the wires—this was a concern raised by one of the teams using Orpheus, and I’m designing around it.
Also, Orpheus includes a seasonal scheduling engine—you can define your own seasons (like Migration, Breeding, etc.) and assign unique playback playlists to each. The device uses astronomical data (sunrise/sunset) based on your provided lat/lon and time zone, and automatically adjusts timing offsets like “1 hour before sunrise.” The goal is truly fire-and-forget deployment.
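To give a feel for how a scheduler of this sort fits together, here's a rough Python sketch of the season/offset logic (illustrative names only, not the actual Orpheus code; real sunrise times would come from an astronomical library rather than being passed in):

```python
from datetime import date, datetime, time, timedelta

# Each user-defined season maps a date range to a playlist and a
# start-time offset relative to sunrise (negative = before sunrise).
SEASONS = [
    # (name, start (month, day), end (month, day), playlist, offset)
    ("Migration", (3, 1), (5, 31), "migration.m3u", timedelta(hours=-1)),
    ("Breeding",  (6, 1), (8, 31), "breeding.m3u",  timedelta(minutes=30)),
]

def active_season(d: date):
    """Return the season entry covering date d, or None."""
    for name, (sm, sd), (em, ed), playlist, offset in SEASONS:
        if date(d.year, sm, sd) <= d <= date(d.year, em, ed):
            return name, playlist, offset
    return None

def playback_start(d: date, sunrise: time):
    """Compute the playback start time for date d, given that day's sunrise."""
    season = active_season(d)
    if season is None:
        return None  # outside all defined seasons: stay silent
    name, playlist, offset = season
    return name, playlist, datetime.combine(d, sunrise) + offset

# A 10 April date with a 06:30 sunrise lands in "Migration",
# so playback starts an hour earlier, at 05:30.
print(playback_start(date(2025, 4, 10), time(6, 30)))
```

The nice property of keeping seasons as plain data is that users can redefine them without touching the scheduling code.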
I'm open to adding any features or sensors that might be useful within reason.
I’m curious though, what specs would make a recording device for bats an indispensable tool? What features don’t already exist on the market that should?
Warm regards,
Travis
5 April 2025 4:03pm
I love the look of the system! We almost called our new sensor Orpheus, but decided against it as there is already a microphone named that! I'd love to see a bit more about the technical implementation! Is this running off of a CM5 or something different?
6 April 2025 1:45pm
Hi Ryan, hmm, I had no idea there was a microphone named that. I thought about how it's used to lure birds for netting, and I like Greek mythology, so I thought it was a perfect fit. Hmm, I may have to change the name. I considered using a CM5, but I wanted the system to be as efficient as possible, so I'm using a Raspberry Pi Zero 2 W with eMMC. To ensure the UI stays responsive I use some backend techniques like thread pooling. It works well and resource usage stays in check. The challenging part is handling threads gracefully and carefully to prevent race conditions. What sort of sensor have you been developing?
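The thread-pooling pattern I mean is roughly this (an illustrative sketch with stand-in sensor reads, not the actual Orpheus code): slow work goes to a worker pool so the UI thread never blocks, and a lock guards the shared state the workers write to.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Shared latest-readings store, written by pool workers.
latest = {}
lock = threading.Lock()

def read_sensor(name: str) -> float:
    # Stand-in for a real I2C/SPI sensor read.
    return {"temp": 21.5, "humidity": 48.0}[name]

def update(name: str) -> None:
    value = read_sensor(name)
    with lock:  # serialize writes to avoid races on the shared dict
        latest[name] = value

# The UI thread only submits jobs; it never blocks on the sensors.
with ThreadPoolExecutor(max_workers=4) as pool:
    for sensor in ("temp", "humidity"):
        pool.submit(update, sensor)
# leaving the context manager waits for all submitted tasks

print(latest)
```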
WILDLABS AWARDS 2024 - BumbleBuzz: automatic recognition of bumblebee species and behaviour from their buzzing sounds
12 April 2024 8:37am
12 April 2024 8:41pm
Super great to see that there will be more work on insect ecoacoustics! So prevalent in practically every soundscape, but so often overlooked. Can't wait to follow this project as it develops!
17 April 2024 10:23am
Thanks Carly! I will keep anyone interested in this project posted on this platform. Cheers
26 March 2025 8:08pm
Really looking forward to following this project. I'm very curious how you'll be able to tease out different species, particularly among species that feature a variety of worker sizes.
Nature Tech Unconference - Anyone attending?
8 March 2025 12:11pm
15 March 2025 8:28am
Definitely!
21 March 2025 12:07pm
The Futures Wild team will be there :)
26 March 2025 7:54pm
Yep, see you on Friday
Transfer learning with BirdNet for avian and non-avian detections
23 February 2025 12:18pm
10 March 2025 11:47am
Hi,
We have been working on creating custom classifiers for frogs in India and have had moderate success. Our workflow is simple: we created around 100 three-second snippets for each species class in Raven Pro (tip: make sure you include only species-specific calls obtained from the same PAM data, and ensure a decent mix of good- and medium-quality sound files) and ran the custom classifier in the BirdNET GUI with the default settings. This gives decent output, although the 'cocktail party problem' is a big issue in species-rich regions like the tropics: species that tend to call in chorus alongside many other frog species are hard to train for. We are currently looking for solutions to address this, and would like to know how others are dealing with this issue as well.
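For anyone preparing snippets programmatically rather than in Raven Pro, the fixed-length slicing step can be sketched like this (illustrative NumPy only; in practice you'd load real recordings and keep only species-specific calls):

```python
import numpy as np

def make_snippets(audio: np.ndarray, sr: int, snippet_s: float = 3.0) -> np.ndarray:
    """Split a mono recording into non-overlapping fixed-length snippets."""
    n = int(sr * snippet_s)
    usable = (len(audio) // n) * n  # drop the trailing partial snippet
    return audio[:usable].reshape(-1, n)

# 10 s of audio at 16 kHz -> three 3 s snippets (last second dropped)
clips = make_snippets(np.zeros(16000 * 10), 16000)
print(clips.shape)  # (3, 48000)
```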
Thanks
10 March 2025 5:40pm
Hi, y'all! Yes, we've found that building custom classifiers on top of bird classifier embeddings (including BirdNET's) often works very well!
We originally reported good performance on (frequency shifted) bats, anurans, and marine mammals. We've repeatedly seen good performance on real-world passive acoustic data, as well.
Alade Allen-Ankins recently published a paper applying BirdNET embeddings to Australian anurans.
Kath et al. also recently used transfer learning from BirdNET embeddings to obtain a large improvement on AnuraSet's baseline score (F1-macro improved from 0.378 in the original paper to 0.588 using transfer from bird embeddings).
Our group should also have a new paper up shortly with practical suggestions for improving transfer learning even further. At the very least, transfer learning should provide a much faster way to collect and curate training data than direct human annotation. I highly recommend it as a first step on new problems: You will likely get a good-enough classifier for your needs very quickly, and if not, you'll produce a very useful dataset for further work much more quickly than direct annotation would allow.
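As a concrete illustration of the recipe (with synthetic stand-in embeddings, since extracting real BirdNET features requires the model itself): treat the pretrained model's fixed embedding vectors as features and fit a small linear classifier on top.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for pretrained embeddings: two well-separated synthetic
# "species" clusters in a 1024-dimensional feature space.
X = np.vstack([rng.normal(0, 1, (50, 1024)),
               rng.normal(2, 1, (50, 1024))])
y = np.array([0] * 50 + [1] * 50)

# Transfer learning step: only this small linear head is trained;
# the (frozen) embedding model never needs gradient updates.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))  # separable clusters -> near-perfect accuracy
```

The appeal is practical: a linear head on ~100 labeled snippets trains in seconds, so you can iterate on label quality far faster than by annotating raw audio end to end.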
19 March 2025 7:18am
Hi Danielle,
A friend of mine held a fellowship at the K. Lisa Yang Center and successfully built a classifier for the Javan slow loris. As far as I've heard, she only made one for sonic calls; I'm not sure if it is possible for ultrasonics.
20 May 2025 11:16am
Thanks for your responses Christos and Carly
The FIA is used by forestry managers, NGOs, communities, indigenous groups, and other non-experts to measure a proxy for biodiversity in the forests that they manage and to inform management. Forest composition and structure, as microhabitats, are most correlated with overall biodiversity (fungi, plants, animals). Adding an acoustic measure has served as an educational component thus far, but improving on it could increase users' awareness of key vocal species. Anything that HCVN recommends for development must be accessible, simple, and low-cost. Acoustic sensors and data processing with BirdNET or similar would potentially be inaccessible for some user communities, so I was enquiring within the WILDLABS community about suitable approaches that could provide a proxy measure of biodiversity for better forest management.
Happy to jump on a call to explain further (based in UK)
Cheers