Group

AI for Conservation / Feed

Artificial intelligence is increasingly being used in the field to analyse information collected by wildlife conservationists, from camera traps and satellite images to audio recordings. AI can learn to identify which photos out of thousands contain rare species, or pinpoint an animal call in hours of field recordings - hugely reducing the manual labour required to collect vital conservation data. The AI For Conservation group is intended to unite and inspire all WILDLABS community members - whether already involved in AI for conservation or not - to understand how to use and/or directly contribute to open-source research and development efforts.

discussion

WILDLABS AWARDS 2025 - Open-Access AI for Marine Mammal Acoustic Detection

We’re excited to announce that Deep Voice has received a 2025 Wildlabs Award to develop a public, web-based platform for marine mammal sound detection and classification. This...

1 4

Hello WildLabs community,

Here's an update on our progress with the AI-based marine mammal sound detection platform.

What We've Accomplished

We've completed our first milestone: a Windows executable for our Burrunan dolphin detection model. This executable allows our AI code to run independently without requiring Python or other dependencies to be installed on the user's machine.

This executable serves as the foundation for our web service - the same packaged code that runs locally will power the online platform, ensuring consistency between our local and web-based offerings.
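For anyone curious how a bundled executable finds its model weights at runtime: with PyInstaller (one common packaging tool, not necessarily what Deep Voice uses), data files added via `--add-data` are unpacked into a temporary directory exposed as `sys._MEIPASS`. A minimal sketch, with the model filename purely illustrative:

```python
import os
import sys

def resource_path(name: str) -> str:
    """Resolve a bundled data file both in development and inside a
    PyInstaller --onefile bundle, where --add-data files are unpacked
    into a temporary directory exposed as sys._MEIPASS."""
    base = getattr(sys, "_MEIPASS", os.path.abspath("."))
    return os.path.join(base, name)

# In the detector's entry point, something like:
# model = load_model(resource_path("burrunan.onnx"))  # hypothetical model file
```

The same resolution logic works unchanged when the code later runs server-side, which is one way to keep the local and web offerings consistent.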

Current Work

We're currently focused on two main areas:

Web Service Development

We're developing an online platform to make our models accessible via web browsers. The main challenge is finding the right hosting solution that balances cost-effectiveness with handling variable workloads.

Service Optimization

We're developing a more efficient version of our detection algorithms to reduce computational requirements and processing time. This optimisation benefits both the local executables and the planned web service.

Next Steps
Platform Expansion
  • Moving the Windows executable from MVP to a production-ready version
  • Adding macOS support
Model Integration Approaches

We're evaluating two paths for handling multiple species models:

  • Option 1: Create individual executables for each new model we develop. This means every time we complete a new species model, we'll package it as a separate executable.
  • Option 2: Build a single executable that can dynamically load different models. This approach is more complex to develop but offers better scalability and could facilitate integration with existing tools like Raven.
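Option 2 above can be sketched as a small model registry that discovers weight files on disk and loads them lazily. Everything here (the `species_models/` folder and the ONNX format) is an assumption for illustration, not Deep Voice's actual layout:

```python
from pathlib import Path

# Hypothetical layout: one ONNX weight file per species, shipped next to
# the executable (folder name and file format are illustrative assumptions).
MODEL_DIR = Path("species_models")

def available_models() -> list:
    """List species models by scanning for weight files, e.g. burrunan.onnx."""
    if not MODEL_DIR.is_dir():
        return []
    return sorted(p.stem for p in MODEL_DIR.glob("*.onnx"))

def load_model(species: str):
    """Resolve one model lazily; raises if the species is unknown."""
    path = MODEL_DIR / f"{species}.onnx"
    if not path.exists():
        raise ValueError(f"no model for species {species!r}")
    # An inference session would be created here, e.g. with ONNX Runtime:
    # return onnxruntime.InferenceSession(str(path))
    return path
```

Shipping a new species then means dropping one weight file into the folder rather than rebuilding the executable.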
Web Service MVP
  • Developing a basic browser-based interface for model selection and file upload
Model Library

A system that allows users to choose the species detection model for their acoustic data.

 

Here are some Burrunan dolphins!

See full post
discussion

New Group Proposal: Systems Builders & PACIM Designers

Co-Creating Collective Impact Across the Conservation Technology Ecosystem. Dear WILDLABS Community, I am proposing the creation of a new WILDLABS group focused on...

7 2

Hello again, sir. The way I see it, PACIMs really mean 'projects': each part of the acronym can be seen as a project (if you have an assignment to do, you really have a project).

 

As for your query about 10 projects in 'this' group, I should ask for clarification: do you mean acoustics in particular, or any group? (I see now this is the acoustics thread; I had selected all the groups for this post.) If you are asking about acoustics, you're right - I am unsure about 10, as I am not too familiar with acoustics yet. If you mean 10 projects as a whole - say, 10 projects in the Funding and Finance group - I believe 10 to be a very reasonable number. The projects we have co-created are, for the most part, replicable, rapidly deployable, quickly scalable, fundable through blended finance, and more.

 

Thank you again for the feedback.

Thank you for your reply, Chad

I meant 10 as a whole, indeed. Perhaps you see your post in one group, but since it is tagged for all groups, I assumed you meant 10 in total.

In your first post you explain PACIM stands for "Projects, Assignments, Campaigns, Initiatives, Movements, and Systems", so I understood it as more than just projects. Obviously, many things can be packed into a project or called a project, but then, what does it mean that 'Projects' is part of the list?

Well, if you think 10 projects is doable, then don't let me stop you.

Hi Chad, it's great to hear from you. This is really a great idea and an impactful journey for our community.

Having experience in community-led conservation initiatives, and working and leading in community-based conservation at @nature_embassy (found on Instagram and other social pages), I will be glad to be part of the group. Thank you!

 

See full post
funding

Edge AI Earth Guardians

Hackster is calling on engineers, developers, and technologists worldwide to propose innovative Edge AI solutions that address critical environmental challenges, with a focus on wildlife protection, deforestation...

0
See full post
discussion

Proof of concept - Polar bear detection

This morning at 06:08 local time, the polar bear detector that Lars Holst Hansen and Kim Hendrikse from Wildlife Security Innovations have been working on for more than a year...

4 9

Great work! 
Is your bear detection model available for third-party usage?
I'm working on a prototype project for tracking bears in a zoo enclosure. I've trained the YOLO11n model on a bears dataset and would like to compare the results.

We sell a product that includes the model, but we don't have a freely downloadable model for that purpose. You could try Pytorch-Wildlife or SpeciesNet for a version to compare against.

I'm happy to process any of your images and feed you the results if you like.

See full post
discussion

AI Tech for Conservation

In the last 3 months we hosted Allander Hobbs, an undergraduate student from the University of Edinburgh, UK. During his interactive session with the kids on the study of...

0
See full post
discussion

Mini AI Wildlife Monitor

Hi All! I've been working on various versions of small AI edge compute devices that run object detection and identification models for ecological monitoring! I've recently been...

20 7

Hi Luke,

I wasn't thinking of turning the RPi on/off, just the energy hungry AI stuff. But if the board needs 1W on its own... ugh.

Just wondering about your last comment: why does the MPPT have to interface with the RasPi? You can have a cheap standalone MPPT (or even PWM) charger with a large PV panel(s) and battery/ies, but all the RasPi cares about is its 5V supply from some external source. The monitoring for an SLA/AGM battery bank could be just a voltage drop across a resistor/bridge onto an ADC pin. For LiFePO4 batteries, you'd need to query their BMS or run a shunt I guess, but some do have Bluetooth APIs... So clearly, since you're working on it, there's a good reason (the "bunch of features"?) I'm missing :-)
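The voltage-divider idea above boils down to one formula: the ADC sees the battery voltage scaled by r_bottom / (r_top + r_bottom), so the battery voltage is recovered by converting the raw count to volts and inverting the divider ratio. A sketch with illustrative values (the 10-bit ADC and 100k/22k divider are assumptions, not anyone's actual design):

```python
def battery_voltage(adc_reading: int, adc_max: int = 1023,
                    v_ref: float = 3.3,
                    r_top: float = 100e3, r_bottom: float = 22e3) -> float:
    """Recover a battery-bank voltage from a divided-down ADC sample.

    The divider scales the battery voltage to
        v_adc = v_batt * r_bottom / (r_top + r_bottom),
    so we convert the raw ADC count to volts, then invert that ratio.
    """
    v_adc = adc_reading / adc_max * v_ref
    return v_adc * (r_top + r_bottom) / r_bottom
```

With these example values a full-scale reading maps to about 18.3 V, comfortably above a 12 V SLA bank's maximum charge voltage.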

 

We want the Pi to interface with the charge controller so we can monitor power input and battery usage/status. We've got a few other features for remote scheduling, power control, etc. There are absolutely other ways you can do this, but they are usually big, bulky, and have to be hacked together. We're developing something much more compact than what you can build with off-the-shelf components.

I've got some pictures in my latest blog:

 

Thanks for the informative videos... I wondered if anyone had recommendations for the best IR PoE camera for monitoring wildlife at nighttime? My setup streams to a Frigate server where I'm running the detection.

See full post
discussion

AI Edge Compute Based Wildlife Detection

Hi all! I've just come across this site and these forums and it's exactly what I've been looking for! I'm based in Melbourne, Australia, and since finishing my PhD in ML I've been...

21 1

Hi Brian, 

I have a number of videos on my channel about aspects of this project/technology. For training a model for the AI camera see this video. 

https://youtu.be/I69lAtA2pP0

Hi Luke,

Thank you so much for the high-quality content you create; it's been incredibly valuable. I'm excited to share that a new community group dedicated to Edge Computing has just been launched, and as one of the co-leads, I'd be delighted to invite you to join and participate.



Link to the group: 

https://wildlabs.net/groups/edge-computing

With your permission, I’d also love to feature your channel and videos in our Learning Resources section, so more members can benefit from your expertise.

Looking forward to your thoughts, :)



Youssef 

See full post
discussion

Free online tool to analyze wildlife images

Hey everyone! We made a free online tool to find animals in single images. Link is here: https://www.animaldetect.com/wildlife-detector. It's very simple: drop an image -> get...

2 5

Hello Eugene, I just tried your service:


I was wondering how possible it would be to have the option to upload a second image and run a comparison, letting the user know whether body patterns are the 'same' or 'different', helping to identify individuals.

Thanks and kind regards from Colombia,
Alejo

Hey Alejandro, thanks for trying it! :))

The feature you are asking about is known in technical language as animal re-identification. Unfortunately, we currently don't have this technology in Animal Detect. It is quite different from finding animals in an image and guessing their species, so there is no easy way to add it.

In a similar WILDLABS post there was a discussion about re-identification of snow leopards - would ocelots be similar in this context?

Take a look at the discussion here: 

https://wildlabs.net/discussion/individual-identification-snow-leopard
See full post
Link

IDEPROCONA Presentation Video

Hello everyone, I’m glad to join this group and learn from your inspiring experiences. At INITIAM ASBL, we are implementing IDEPROCONA – Digital Innovation in Ecosystems for Protection and Conservation of Nature in Eastern DRC (Virunga-Itombwe landscape). With love;...

0
discussion

Excited about AI in Conservation

Hi everyone! I am so excited to be a part of this group for two reasons. First, as part of my job, I manage computers and other devices. On my own time, I am a Vertical Market AI...

1 1

Very inspiring! I completely agree that AI should help free conservationists from repetitive tasks so they can focus on strategy and fieldwork. In our IDEPROCONA project in Eastern DRC, we are also exploring how AI could support drones and mobile apps to detect illegal logging and wildlife threats faster. Excited to learn from your experience!

See full post
discussion

I WANT TO TELL YOUR STORY

I create ocean exploration and marine life content on YouTube, whether it be recording nautilus on BRUVs, swimming with endangered bowmouth guitarfish, documenting reef...

5
See full post
discussion

AI For waste Management.

AI should definitely be a tool for enhancing best practices in waste management and for amplifying advocacy in waste management.

1 0

I think it is time to explore many AI applications around our careers and projects - with waste management too! This is also very interesting.

See full post
discussion

Tracking Individual Whales in 360-degree Videos

Hi WILDLABS Community! My colleagues and I are currently using Insta360 cameras to gather underwater video footage of beluga whales in the Churchill River estuary (Canada). While...

5 1

Hey Courtney! I've just sent you an email about coming on Variety Hour to talk about your work :) Looking forward to hearing from you!

Have you tried using Insta360's DeepTrack feature on your Studio desktop app?  We have used it for similar use cases and it worked well. I would love to hear if it works for your science objectives. We are also experimenting and would love to know your thoughts. :) https://youtu.be/z8WKtdPCL_E?t=123

Hi @CourtneyShuert 

 

We support NOAA with AI for individual ID for belugas (but from aerial and from lateral surface too). If some of our techniques can be cross-applied please reach out: jason at wildme dot org

 

Cheers,

Jason

See full post
discussion

🐸 WILDLABS Awards 2025: Open-Source Solutions for Amphibian Monitoring: Adapting Autonomous Recording Devices (ARDs) and AI-Based Detection in Patagonia

We’re excited to launch our WILDLABS-funded project to adapt open-source recording hardware and AI tools to help monitor amphibians, with an initial focus on one of South America'...

3 7

Project Update — Sensors, Sounds, and DIY Solutions (sensores, sonidos y soluciones caseras)



We continue making progress on our bioacoustics project focused on the conservation of Patagonian amphibians, thanks to the support of WILDLABS. Here are some of the areas we’ve been working on in recent months:

1. Hardware

One of our main goals was to explore how to improve AudioMoth recorders to capture not only sound but also key environmental variables for amphibian monitoring. We tested an implementation of the I2C protocol using bit banging via the GPIO pins, allowing us to connect external sensors. The modified firmware is already available in our repository:

👉 https://gitlab.com/emiliobascary/audiomoth

We are still working on managing power consumption and integrating specific sensors, but the initial tests have been promising.

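For readers unfamiliar with the technique: bit banging means the firmware toggles the SDA and SCL GPIO pins in software rather than using a hardware I2C peripheral. The actual firmware is C for the AudioMoth's microcontroller; the Python sketch below, with hypothetical pin-driver callbacks and all timing delays omitted, only illustrates the shape of a byte write:

```python
def i2c_write_byte(byte, set_sda, set_scl, read_sda):
    """Bit-bang one byte out over two GPIO lines, MSB first, and return True
    if the slave pulled SDA low during the ACK clock.

    set_sda/set_scl drive the pins and read_sda samples SDA; the microsecond
    delays a real driver needs between transitions are omitted here.
    """
    for i in range(7, -1, -1):
        set_scl(0)
        set_sda((byte >> i) & 1)  # data may only change while SCL is low
        set_scl(1)                # slave samples SDA on the rising edge
    # ninth clock: release SDA and let the slave drive the ACK bit
    set_scl(0)
    set_sda(1)
    set_scl(1)
    ack = read_sda() == 0
    set_scl(0)
    return ack
```

A full transaction adds start/stop conditions and the 7-bit address byte around this, but the per-byte loop is the core of the trick.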



2. Software (AI)

We explored different strategies for automatically detecting vocalizations in complex acoustic landscapes.

BirdNET is by far the most widely used, but we noted that it’s implemented in TensorFlow — a library that is becoming somewhat outdated.

This gave us the opportunity to reimplement it in PyTorch (currently the most widely used and actively maintained deep learning library) and begin pretraining a new model using AnuraSet and our own data. Given the rapid evolution of neural network architectures, we also took the chance to experiment with Transformers — specifically, Whisper and DeltaNet.

Our code and progress will be shared soon on GitHub.

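As a toy illustration of the detection task (not the team's PyTorch model): real pipelines score mel-spectrogram windows with a neural network, but the windowing and span-merging scaffolding around the scorer looks roughly like this energy-threshold stand-in:

```python
import math

def detect_calls(samples, sr=22050, win_s=0.5, threshold=0.1):
    """Flag fixed-length windows whose RMS energy exceeds a threshold and
    return merged (start_s, end_s) spans in seconds.

    A crude stand-in for a learned detector: swapping the RMS score for a
    model's per-window probability leaves the rest of the loop unchanged.
    """
    win = int(sr * win_s)
    spans = []
    for start in range(0, max(len(samples) - win + 1, 1), win):
        chunk = samples[start:start + win]
        if not chunk:
            continue
        rms = math.sqrt(sum(x * x for x in chunk) / len(chunk))
        if rms > threshold:
            t0 = start / sr
            if spans and abs(spans[-1][1] - t0) < 1e-9:
                spans[-1] = (spans[-1][0], t0 + win_s)  # extend contiguous span
            else:
                spans.append((t0, t0 + win_s))
    return spans
```

The hyperparameters (window length, threshold) are illustrative; a learned scorer makes them far less brittle in noisy soundscapes.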

3. Miscellaneous

Alongside hardware and software, we’ve been refining our workflow.

We found interesting points of alignment with the “Safe and Sound: a standard for bioacoustic data” initiative (still in progress), which offers clear guidelines for folder structure and data handling in bioacoustics. This is helping us design protocols that ensure organization, traceability, and future reuse of our data.

We also discussed annotation criteria with @jsulloa to ensure consistent and replicable labeling that supports the training of automatic models.

We're excited to continue sharing experiences with the Latin America community; we know we share many of the same challenges, but also great potential to apply these technologies to conservation in our region.


Love this team <3
We would love to connect with teams also working on the whole AI pipeline- pretraining, finetuning and deployment! Training of the models is in progress, and we know lots can be learned from your experiences!

Also, we are approaching the UI design and development from a software-on-demand philosophy. Why depend on third-party software, having to learn, adapt, and comply with its UX and ecosystem? Thanks to agentic AI in our IDEs, we can quickly and reliably iterate towards tools designed to satisfy our own specific needs and practices, putting the researcher first.

Your ideas, thoughts or critiques are very much welcome!

Kudos for such an innovative approach - integrating additional sensors with acoustic recorders is a brilliant step forward! I'm especially interested in how you tackle energy autonomy, which I personally see as the main limitation of AudioMoths.

Looking forward to seeing how your system evolves!

See full post
funding

Klarna’s AI for Climate Resilience Program

The AI for Climate Resilience Program is a new initiative by Klarna that aims to support pioneering projects that leverage artificial intelligence for climate adaptation in underserved, climate-vulnerable regions. 

1
See full post
discussion

What metadata is used from trail camera images?

So, this week I have started looking into adding new and more fine-grained details and methods to the results page of Animal Detect, including the CameraTrapDP format (coming soon...

2 1

Hi Hugo, it's great that you are thinking about adding metadata features to Animal Detect! I'll share what I think would be useful from my perspective, but I think there is a lot of variation in how folks handle their image organization. 

Time and date are probably the most important features. I rename my image files using the camtrapR package, which uses Exiftool to read file metadata and append date and time to the filename. I find this method to be very robust because of the ability to change datetimes if needed -- for example, if the camera was programmed incorrectly, you can apply a timeshift to ensure they are correct in the filenames. Are you considering adding Exif capability directly to Animal Detect? Otherwise, I think that having a tool to parse filenames would be very helpful, where users could specify which parts of the filename correspond to camera site, date, time, etc., so that this information is included in downstream processing tasks.
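The filename-parsing tool described above could be sketched like this; the separator and field order are hypothetical, and a real tool would let users supply their own pattern:

```python
import re
from datetime import datetime

# Hypothetical pattern for names like "SiteA__2024-06-01__14-30-00.jpg"
# (camtrapR-style: site and capture datetime appended to the filename).
PATTERN = re.compile(
    r"(?P<site>[^_]+)__(?P<date>\d{4}-\d{2}-\d{2})__(?P<time>\d{2}-\d{2}-\d{2})"
)

def parse_filename(name: str) -> dict:
    """Pull the camera site and capture datetime out of a renamed
    trail-camera file, for use in downstream processing."""
    m = PATTERN.search(name)
    if not m:
        raise ValueError(f"unrecognised filename: {name!r}")
    dt = datetime.strptime(f"{m['date']} {m['time']}", "%Y-%m-%d %H-%M-%S")
    return {"site": m["site"], "datetime": dt}
```

Letting users map named groups to fields (site, date, time, burst index) would cover most renaming schemes without hard-coding any one convention.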

I have found it frustrating that information such as camera name and temperature are not included in file metadata by many camera manufacturers. I have used OCR to extract the information in these cases, but it requires a bit of manual review, and I wouldn't say this is a regular part of my workflow.

Camera brand and model can be useful for analysis, and image dimensions and burst sequence can be helpful for computer vision tasks. 

Hope this helps!

Cara

Thank you for your reply! 

It surely helps! We have used Exif for a while to read metadata from images, when the information is available. It could be nice to see if we could also "write" some of the data into the image metadata, instead of just reading it. Really good idea with the filename changes and structure; I will add it to a list of possible improvements and see if/how we could implement it.
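Writing tags back is something command-line ExifTool supports with its -TAG=VALUE syntax; a sketch that only builds the command (running it would need ExifTool installed, e.g. via subprocess.run), with the tag names and values purely illustrative:

```python
def exiftool_write_args(path, tags):
    """Build (but do not run) an ExifTool command line that writes tags
    into an image file, using ExifTool's -TAG=VALUE syntax.

    -overwrite_original updates the file in place instead of leaving
    a "_original" backup copy next to it.
    """
    args = ["exiftool", "-overwrite_original"]
    args += [f"-{tag}={value}" for tag, value in tags.items()]
    args.append(path)
    return args

# e.g. subprocess.run(exiftool_write_args("IMG_0001.jpg",
#          {"DateTimeOriginal": "2024:06:01 14:30:00"}), check=True)
```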

Again, thanks for the feedback 😊 

See full post
discussion

How do you tackle the anomalous data during the COVID period when doing analysis?

COVID, as devastating as it was for humans, significantly reduced anthropogenic pressures on all ecological systems, since humans were confined to their homes. My question is, as the...

3 0

To clarify, are you talking about a model that carries out automated detection of vocalizations, or a model that detects specific patterns of behavior/movement? I would suspect the former is unlikely to be impacted during training, as the fundamental vocalizations/input are not going to change drastically (although see Derryberry et al., who show variation in spectral characteristics of sparrows at short distances pre- and post-COVID lockdowns).

I'm specifically referring to movement of animals affected by anthropogenic factors. My question has nothing to do with vocalisations. 

Humans were essentially removed from large sections of the world during covid and that surely had some effects on wildlife movements, or at least I am assuming it did. But that would not be the regular "trend". If I try to predict the movement of a species over an area frequented by humans, that surely comes into the picture - and so does their absence. 

My question is very specific to dealing with data that has absence (or limited interference) of humans during the covid period in all habitats.

You could just throw out that data, but I think you'd be doing yourself a disservice and missing out on some interesting insights. Are you training the AI with just pre-COVID animal movement data or are you including context on anthropogenic factors as well? Not sure if you are looking at an area that has available visitor/human population data, but if you include that with animal movement data across all years it should net out in the end.
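One way to implement "include that with animal movement data" is to add an explicit reduced-human-activity indicator, so the model can treat the COVID anomaly as a covariate rather than noise. The window below is a global placeholder; a real analysis would use region-specific dates or a continuous human-mobility index:

```python
from datetime import date

# Illustrative global lockdown window; replace with region-specific dates
# or a continuous mobility/visitor-count covariate in a real analysis.
COVID_START, COVID_END = date(2020, 3, 1), date(2021, 6, 30)

def add_covid_covariate(records):
    """Tag each movement record (a dict with a 'date' field) with a
    reduced-human-activity indicator for use as a model covariate."""
    for r in records:
        r["covid_period"] = int(COVID_START <= r["date"] <= COVID_END)
    return records
```

With the indicator in place, the model can learn both the normal human-disturbance effect and its temporary absence, instead of averaging the two.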

See full post