
AI for Conservation / Feed

Artificial intelligence is increasingly being used in the field to analyse information collected by wildlife conservationists, from camera trap and satellite images to audio recordings. AI can learn to identify which photos out of thousands contain rare species, or to pinpoint an animal call in hours of field recordings, hugely reducing the manual labour required to process vital conservation data.


Mini AI Wildlife Monitor

Hi All! I've been working on various versions of small AI edge compute devices that run object detection and identification models for ecological monitoring! I've recently been...


Hi Ross,

Nice project, thanks for sharing! 

How does it perform in the field? Have you tested out power usage and battery life etc? 

Happy to help where I can with the IMX500; I've got a Discord server linked on my YouTube channel.

Aloha Luke,

This is an amazing tool that you have created. Are your cameras available for purchase? We use a lot of camera traps in the Hono O Na Pali Natural Area Reserve on Kauai to passively detect animals. We do not have the staff knowledge or capacity to build our own camera traps like this.


Tracking Individual Whales in 360-degree Videos

Hi WILDLABS Community, My colleagues and I are currently using Insta360 cameras to gather underwater video footage of beluga whales in the Churchill River estuary (Canada). While...


Hey Courtney! I've just sent you an email about coming on Variety Hour to talk about your work :) Looking forward to hearing from you!

Have you tried using Insta360's DeepTrack feature in the Studio desktop app? We have used it for similar use cases and it worked well. I'd be curious to hear whether it works for your science objectives; we are also experimenting and would value your thoughts. :) https://youtu.be/z8WKtdPCL_E?t=123

Hi @CourtneyShuert,

We support NOAA with AI for individual ID of belugas (but from aerial and lateral surface imagery too). If some of our techniques can be cross-applied, please reach out: jason at wildme dot org

Cheers,

Jason


Free online tool to analyze wildlife images

Hey everyone! We made a free online tool to find animals in single images. Link is here: https://www.animaldetect.com/wildlife-detector It works very simply: drop an image -> get...


Hello Eugene, I just tried your service.


I was wondering whether it would be possible to have the option to upload a second image and run a comparison that tells the user whether body patterns are 'same' or 'different', helping to identify individuals.

Thanks and kind regards from Colombia,
Alejo


🐸 WILDLABS Awards 2025: Open-Source Solutions for Amphibian Monitoring: Adapting Autonomous Recording Devices (ARDs) and AI-Based Detection in Patagonia

We’re excited to launch our WILDLABS-funded project to adapt open-source recording hardware and AI tools to help monitor amphibians, with an initial focus on one of South America'...


Project Update — Sensors, Sounds, and DIY Solutions



We continue making progress on our bioacoustics project focused on the conservation of Patagonian amphibians, thanks to the support of WILDLABS. Here are some of the areas we’ve been working on in recent months:

1. Hardware

One of our main goals was to explore how to improve AudioMoth recorders to capture not only sound but also key environmental variables for amphibian monitoring. We tested an implementation of the I2C protocol using bit banging via the GPIO pins, allowing us to connect external sensors. The modified firmware is already available in our repository:

👉 https://gitlab.com/emiliobascary/audiomoth

We are still working on managing power consumption and integrating specific sensors, but the initial tests have been promising.
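For anyone curious what the bit-banging idea looks like, here is a minimal sketch in Python rather than the AudioMoth's actual C firmware; the pin-setter callbacks are hypothetical stand-ins for GPIO register writes, not part of our code.

```python
# Sketch of bit-banged I2C (the real AudioMoth firmware is C; the pin
# setters here are hypothetical stand-ins for GPIO writes).

def i2c_write_byte(byte, set_scl, set_sda, read_sda):
    """Clock out one byte MSB-first, then sample the ACK bit.

    I2C rule of thumb: SDA may only change while SCL is low, and is
    sampled by the receiver while SCL is high.
    """
    for i in range(7, -1, -1):
        set_scl(0)
        set_sda((byte >> i) & 1)   # change data while clock is low
        set_scl(1)                 # receiver samples on the high phase
    # Ninth clock: release SDA and read the slave's ACK (line pulled low).
    set_scl(0)
    set_sda(1)
    set_scl(1)
    acked = (read_sda() == 0)
    set_scl(0)
    return acked

# Dry run against a recorded trace instead of real pins.
trace = []
acked = i2c_write_byte(
    0xA5,
    set_scl=lambda v: trace.append(("scl", v)),
    set_sda=lambda v: trace.append(("sda", v)),
    read_sda=lambda: 0,            # pretend the sensor ACKs
)
clock_rises = sum(1 for name, v in trace if name == "scl" and v == 1)
# clock_rises == 9: eight data bits plus the ACK clock
```

Timing, clock stretching, and open-drain behaviour are what make the real firmware harder than this sketch suggests.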




2. Software (AI)

We explored different strategies for automatically detecting vocalizations in complex acoustic landscapes.

BirdNET is by far the most widely used, but we noted that it’s implemented in TensorFlow — a library that is becoming somewhat outdated.

This gave us the opportunity to reimplement it in PyTorch (currently the most widely used and actively maintained deep learning library) and begin pretraining a new model using AnuraSet and our own data. Given the rapid evolution of neural network architectures, we also took the chance to experiment with Transformers — specifically, Whisper and DeltaNet.

Our code and progress will be shared soon on GitHub.
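As a rough illustration of what a PyTorch reimplementation involves, here is a toy spectrogram classifier; the architecture, class count, and input shapes below are invented for the example and are not our actual model.

```python
import torch
import torch.nn as nn

class CallClassifier(nn.Module):
    """Toy CNN over mel-spectrogram patches. Purely illustrative of a
    PyTorch port; not the project's actual architecture."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # collapse time/frequency axes
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):              # x: (batch, 1, mels, frames)
        return self.head(self.features(x).flatten(1))

model = CallClassifier(n_classes=10)
logits = model(torch.randn(4, 1, 64, 128))   # 4 clips, 64 mel bands
```

The adaptive pooling keeps the head independent of clip length, which is handy when field recordings vary in duration.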


3. Miscellaneous

Alongside hardware and software, we’ve been refining our workflow.

We found interesting points of alignment with the “Safe and Sound: a standard for bioacoustic data” initiative (still in progress), which offers clear guidelines for folder structure and data handling in bioacoustics. This is helping us design protocols that ensure organization, traceability, and future reuse of our data.

We also discussed annotation criteria with @jsulloa to ensure consistent and replicable labeling that supports the training of automatic models.

We're excited to continue sharing experiences with the Latin America community. We know we share many of the same challenges, but also great potential to apply these technologies to conservation in our region.


Love this team <3
We would love to connect with teams also working on the whole AI pipeline: pretraining, finetuning, and deployment! Training of the models is in progress, and we know a lot can be learned from your experiences!

Also, we are approaching the UI design and development from a software-on-demand philosophy. Why depend on third-party software, having to learn, adapt, and comply with their UX and ecosystem? Thanks to agentic AI in our IDEs, we can quickly and reliably iterate towards tools designed to satisfy our own specific needs and practices, putting the researcher first.

Your ideas, thoughts or critiques are very much welcome!

Kudos for such an innovative approach: integrating additional sensors with acoustic recorders is a brilliant step forward! I'm especially interested in how you tackle energy autonomy, which I personally see as the main limitation of AudioMoths.

Looking forward to seeing how your system evolves!


What metadata is used from trail camera images?

So, this week I have started looking into adding new and more fine-grained details and methods to the results page of Animal Detect, including the CameraTrapDP format (coming soon...


Hi Hugo, it's great that you are thinking about adding metadata features to Animal Detect! I'll share what I think would be useful from my perspective, but I think there is a lot of variation in how folks handle their image organization. 

Time and date are probably the most important features. I rename my image files using the camtrapR package, which uses Exiftool to read file metadata and append date and time to the filename. I find this method to be very robust because of the ability to change datetimes if needed -- for example, if the camera was programmed incorrectly, you can apply a timeshift to ensure they are correct in the filenames. Are you considering adding Exif capability directly to Animal Detect? Otherwise, I think that having a tool to parse filenames would be very helpful, where users could specify which parts of the filename correspond to camera site, date, time, etc., so that this information is included in downstream processing tasks.
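To illustrate the timeshift idea with a quick sketch (the filename pattern here is hypothetical; camtrapR's actual naming conventions may differ):

```python
from datetime import datetime, timedelta

# Hypothetical filename pattern "<site>__YYYY-MM-DD__HH-MM-SS.JPG",
# the kind of name Exiftool-based renaming can produce.

def shift_filename_time(name: str, shift: timedelta) -> str:
    """Apply a timeshift to the datetime embedded in a renamed image file,
    e.g. to correct a camera whose clock was programmed incorrectly."""
    site, date_s, rest = name.split("__")
    time_s, ext = rest.rsplit(".", 1)
    dt = datetime.strptime(f"{date_s} {time_s}", "%Y-%m-%d %H-%M-%S")
    dt += shift
    return f"{site}__{dt:%Y-%m-%d}__{dt:%H-%M-%S}.{ext}"

# Example: camera clock was one hour fast, so subtract an hour.
fixed = shift_filename_time("siteA__2024-06-01__13-05-30.JPG",
                            timedelta(hours=-1))
# fixed == "siteA__2024-06-01__12-05-30.JPG"
```

A filename-parsing tool in Animal Detect could invert this: users declare which segments hold site, date, and time, and the parser feeds those fields into downstream processing.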

I have found it frustrating that information such as camera name and temperature are not included in file metadata by many camera manufacturers. I have used OCR to extract the information in these cases, but it requires a bit of manual review, and I wouldn't say this is a regular part of my workflow.

Camera brand and model can be useful for analysis, and image dimensions and burst sequence can be helpful for computer vision tasks. 

Hope this helps!

Cara

Thank you for your reply! 

It surely helps! We have used Exif for a while to read metadata from images when the information is available. It could be nice to see whether we could "write" some of the data into the image metadata, instead of just reading it. Really good idea with the filename changes and structure. I will add it to a list of possible improvements and see if/how we could implement it.

Again, thanks for the feedback 😊 


How do you tackle the anomalous data during the COVID period when doing analysis?

COVID, as devastating as it was for humans, significantly reduced anthropogenic pressures on ecological systems, since people were confined to their homes. My question is, as the...


To clarify, are you talking about a model that carries out automated detection of vocalizations, or a model that detects specific patterns of behavior/movement? I would suspect that the former is not something that would be impacted during training, as the fundamental vocalizations/input are not going to change drastically (although see Derryberry et al., who show variation in the spectral characteristics of sparrow song at short distances pre- and post-COVID lockdowns).

I'm specifically referring to movement of animals affected by anthropogenic factors. My question has nothing to do with vocalisations. 

Humans were essentially removed from large sections of the world during covid and that surely had some effects on wildlife movements, or at least I am assuming it did. But that would not be the regular "trend". If I try to predict the movement of a species over an area frequented by humans, that surely comes into the picture - and so does their absence. 

My question is very specific to dealing with data that has absence (or limited interference) of humans during the covid period in all habitats.

You could just throw out that data, but I think you'd be doing yourself a disservice and missing out on some interesting insights. Are you training the AI with just pre-COVID animal movement data or are you including context on anthropogenic factors as well? Not sure if you are looking at an area that has available visitor/human population data, but if you include that with animal movement data across all years it should net out in the end.


What questions would you ask an AI agent for conservation tech?

If you had access to an agent trained specifically to provide guidance on conservation technology tools + methods, what would you ask it? It sounds like a lot of folks are...


I think a conservation tech agent would be most useful if it connects directly to existing WILDLABS resources, rather than trying to replace tools.

Ideally, it could link questions to projects from the WILDLABS Awards or The Inventory, suggest relevant forum discussions, and recommend community members with similar experience. Then, at the end, it could propose technical solutions like ML models, devices, or toolkits for specific tasks.

I'm thinking of doing something like...

prompt

"How can I detect when an insect is attacking a tree using dendrometer data?"

agent

This is a time-series event detection problem. Based on previous projects on WILDLABS, a BiLSTM neural network can be trained to recognise attack patterns by analysing the sequence of stem-diameter changes before, during, and after an attack. These patterns may include sudden shrinkage or irregular oscillations caused by stress or resin production.

The agent could then link to:

  • Projects using sensors or time-series AI in behaviours or events monitoring
  • Open-source tools for LSTM-based classification
  • Forum threads or community contacts who’ve worked on similar topics, models, or tools
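To make the hypothetical agent answer concrete, a BiLSTM of the kind it describes might be sketched like this; the input features, shapes, and hidden size are assumptions for illustration, not a tested design.

```python
import torch
import torch.nn as nn

class AttackDetector(nn.Module):
    """Illustrative BiLSTM for dendrometer time series: one stem-diameter
    reading per step in, one attack/no-attack logit per step out."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # both directions concatenated

    def forward(self, x):                  # x: (batch, steps, 1)
        out, _ = self.lstm(x)              # (batch, steps, 2 * hidden)
        return self.head(out).squeeze(-1)  # per-step logits

model = AttackDetector()
logits = model(torch.randn(2, 96, 1))      # e.g. a day of 15-min readings
```

Per-step logits let you localise the onset of an event rather than just classifying the whole window, which matters for the "before, during, and after" framing above.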

 

I agree with Jorge that an AI agent could be useful to help search the vast repository of existing discussions on WildLabs. For example: in this thread below, Maristela could have asked her question to the agent and (hopefully) been directed to the link that Akiba mentioned. 

https://wildlabs.net/discussion/advise-needed-close-focus-camera-traps

However, one issue I can see with using an AI agent instead of asking questions as we do currently is if there is no record of the questions asked to the agent. A great strength of WILDLABS (and online forums generally) is that, because questions are recorded and visible, other members can learn from them. If questions start being "hidden" in the agent's memory, other members can hardly learn from them.

Would love to collaborate on this; we are currently building agents for conservation.
Kind regards
Olivier 


Unlocking AI for Nonprofits: Enroll in Nethope's New AI Skills Course for Nonprofits

A free, self-paced course series for nonprofit professionals is available through August 31.


A technical and conceptual curiosity... Could generative AI help us simulate or decode animal communication?

Hi everyone,I recently watched a talk by Aza Raskin where he discusses the idea of using generative models to explore communication with whales. While the conversation wasn’t...


Hi Jorge, 

 

I think you'll find this research interesting: https://blog.google/technology/ai/dolphingemma/

Google's researchers did exactly that: they trained an LLM on dolphin vocalizations to produce continuation output, much like the autoregressive models you've mentioned, VALL-E and WaveNet.

I think they plan to test it in the field this summer and see if it will produce any interesting interactions.

Looking forward to seeing what they'll find :)

Also, here are two more cool organizations working on AI-based understanding of animal communication:

https://www.projectceti.org/

https://www.earthspecies.org/

This is a really fascinating concept. I’ve been thinking about similar overlaps between AI and animal communication, especially for conservation applications. Definitely interested in seeing where this kind of work goes.

This is such a compelling direction, especially the idea of linking unsupervised vocalisation clustering to generative models for controlled playback. I haven’t seen much done with SpecGAN or AudioLDM in this space yet, but the potential is huge. Definitely curious how the field might adopt this for species beyond whales. Following this thread closely!


Jupyter Notebook: Aquatic Computer Vision

Dive Into Underwater Computer Vision Exploration OceanLabs Seychelles is excited to share a Jupyter notebook tailored for those intrigued by the...


This definitely seems like the community to do it. I was looking at the thread about wolf detection and it seems like people here are no strangers to image classification. A little overwhelming to be quite honest 😂

While it would be incredible to have a powerful model capable of auto-classifying everything right away and storing all detected creatures and correlated sensor data straight into a database, I wonder whether, in remote deployments where power (and therefore CPU bandwidth), data storage, and network connectivity are at a premium, it would be more valuable simply to highlight moments of interest for lab analysis later. Or, if you do have a cellular connection, you could download just those moments of interest rather than hours and hours of footage.
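That "moments of interest" triage could be prototyped with simple frame differencing before reaching for a full detector; the threshold and the synthetic clip below are purely illustrative.

```python
import numpy as np

def moments_of_interest(frames, threshold=10.0):
    """Flag frame indices whose mean absolute difference from the previous
    frame exceeds a threshold -- a cheap stand-in for on-device detection
    when power and bandwidth are scarce."""
    flagged = []
    prev = None
    for i, frame in enumerate(frames):
        f = frame.astype(np.float32)
        if prev is not None and np.abs(f - prev).mean() > threshold:
            flagged.append(i)
        prev = f
    return flagged

# Synthetic clip: static frames with a bright "animal" appearing at frame 3.
clip = [np.zeros((64, 64), dtype=np.uint8) for _ in range(6)]
clip[3][20:40, 20:40] = 255
hits = moments_of_interest(clip)   # frames 3 (appearance) and 4 (disappearance)
```

Only the flagged windows would then need to be stored or sent over a cellular link, with heavier classification deferred to the lab.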

I'm working on a similar AI challenge at the moment, hoping to translate my workflow to wolves in the future if needed.

We are all a little overstretched, but if there are no pressing deadlines, it should be possible to explore building an efficient object detection model and to look at suitable hardware for running such models on the edge.

Wow this is amazing! This is how we integrate Biology and Information Technology. 


Leveraging AI & Big Data For Love Of The Environment

🔥 Excited to join the WILDLABS community! I'm Robert Chonge, a Full Stack Developer with a passion for AI and Big Data, now channeling that tech firepower toward environmental conservation. Let's turn data into action...

@RobertChonge, I'm into front-end development, UX, design, and AI as well. Are there any projects you have going that we could collaborate on?

Excited about AI in Conservation

Hi everyone! I am so excited to be a part of this group for two reasons. First, as part of my job, I manage computers and other devices. On my own time, I am a Vertical Market AI...


Exploring the Wild Edge: A Proposal for a New WILDLABS Group

Over the past year, I’ve found myself returning again and again to one big question: How can we make conservation tech work where the wild really begins, where the signal...


This sounds like a great idea; this is an area I want to do more work in.

Where can I sign up?

Hey Stuart,

Thank you for your interest! We're glad you'd like to be part of our journey. We're still in the process of setting up the group, and we'll let you know as soon as we're ready.

Thanks for your understanding! 🤗 


Software for tortoise re-identification

I was wondering whether anybody has come across AI software that would be able to re-identify animals - I am looking for a tool to re-identify giant tortoises. I have a small group...


We would be happy to explore supporting this in the Internet of Turtles (iot.wildbook.org). We support marine and terrestrial species, and the HotSpotter (SIFT) and MiewID v3 algorithms for re-ID would likely have good zero-shot matching potential.

Hi Jason, 
That sounds exciting. From what I understand, the Internet of Turtles focuses mainly on animals travelling long distances. The tortoises I am interested in re-identifying are limited in their range, as the island they are on is quite small. Would they nevertheless be of interest for that database? And if so, how would you suggest we proceed?

Also, I would generally be interested in contributing sea turtle photos to the Internet of Turtles. There are a lot of green and hawksbill sea turtles of various ages in the sea surrounding the island where I work, so I have requested an account via the website.


AI Pipeline for digitalisation of labels

Dear colleagues, I'd like to share with you the output of the KIEBIDS project, which focused on using AI to extract biodiversity-relevant information from museum labels. Perhaps it can also be applied to other written materials more related to conservation? Have a look!
