Group

Latin America Community / Feed

This group aims to establish a regional hub in Latin America that provides a space for knowledge sharing and network-building among the growing number of people working in conservation technology in the region.

discussion

🐸 WILDLABS Awards 2025: Open-Source Solutions for Amphibian Monitoring: Adapting Autonomous Recording Devices (ARDs) and AI-Based Detection in Patagonia

We’re excited to launch our WILDLABS-funded project to adapt open-source recording hardware and AI tools to help monitor amphibians, with an initial focus on one of South America'...


Project Update — Sensors, Sounds, and DIY Solutions



We continue making progress on our bioacoustics project focused on the conservation of Patagonian amphibians, thanks to the support of WILDLABS. Here are some of the areas we’ve been working on in recent months:

1. Hardware

One of our main goals was to explore how to improve AudioMoth recorders to capture not only sound but also key environmental variables for amphibian monitoring. We tested an implementation of the I2C protocol using bit banging via the GPIO pins, allowing us to connect external sensors. The modified firmware is already available in our repository:

👉 https://gitlab.com/emiliobascary/audiomoth

We are still working on managing power consumption and integrating specific sensors, but the initial tests have been promising.
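
For readers less familiar with the technique, here is a minimal sketch of what bit-banged I2C means. The actual firmware is C for the AudioMoth's microcontroller (see the repository above); this Python snippet only mimics the pin transitions against a mock GPIO object, and the sensor address is just an example.

```python
# Illustration only: bit-banged I2C as a sequence of manual pin transitions.
# A mock GPIO object stands in for the real SDA/SCL pins on the device.
class MockGPIO:
    """Stand-in for two GPIO pins (SDA, SCL) driven as outputs."""
    def __init__(self):
        self.trace = []
    def set(self, pin, level):
        self.trace.append((pin, level))

def i2c_start(gpio):
    # Start condition: SDA falls while SCL is high.
    gpio.set("SDA", 1); gpio.set("SCL", 1)
    gpio.set("SDA", 0); gpio.set("SCL", 0)

def i2c_write_byte(gpio, byte):
    # Clock out 8 bits, MSB first: set SDA, then pulse SCL.
    for i in range(7, -1, -1):
        gpio.set("SDA", (byte >> i) & 1)
        gpio.set("SCL", 1); gpio.set("SCL", 0)
    # Ninth clock: release SDA so the sensor can pull it low to ACK.
    gpio.set("SDA", 1)
    gpio.set("SCL", 1); gpio.set("SCL", 0)

gpio = MockGPIO()
i2c_start(gpio)
i2c_write_byte(gpio, 0x76 << 1)   # example: address a sensor at 0x76 for writing
print(len(gpio.trace), "pin transitions logged")
```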




2. Software (AI)

We explored different strategies for automatically detecting vocalizations in complex acoustic landscapes.

BirdNET is by far the most widely used, but we noted that it’s implemented in TensorFlow — a library that is becoming somewhat outdated.

This gave us the opportunity to reimplement it in PyTorch (currently the most widely used and actively maintained deep learning library) and begin pretraining a new model using AnuraSet and our own data. Given the rapid evolution of neural network architectures, we also took the chance to experiment with Transformers — specifically, Whisper and DeltaNet.

Our code and progress will be shared soon on GitHub.
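
While we tidy that up, here is a minimal sketch of the kind of PyTorch pipeline described above: a small CNN classifier trained on log-mel spectrograms of short clips. The class count, clip length, random data, and hyperparameters are illustrative placeholders, not our actual model or the AnuraSet configuration.

```python
# Minimal sketch of a PyTorch vocalization classifier on log-mel spectrograms.
# All values below are placeholders for illustration.
import torch
import torch.nn as nn
import torchaudio

N_CLASSES = 3          # hypothetical number of target call types
SAMPLE_RATE = 22_050

melspec = torchaudio.transforms.MelSpectrogram(sample_rate=SAMPLE_RATE, n_mels=64)
to_db = torchaudio.transforms.AmplitudeToDB()

class CallClassifier(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, waveform):
        spec = to_db(melspec(waveform)).unsqueeze(1)  # (batch, 1, mels, frames)
        return self.net(spec)

model = CallClassifier(N_CLASSES)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One toy training step on random audio, standing in for labelled clips.
waveforms = torch.randn(8, SAMPLE_RATE * 3)   # 8 three-second clips
labels = torch.randint(0, N_CLASSES, (8,))
loss = criterion(model(waveforms), labels)
loss.backward()
optimizer.step()
print(f"toy step loss: {loss.item():.3f}")
```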


3. Miscellaneous

Alongside hardware and software, we’ve been refining our workflow.

We found interesting points of alignment with the “Safe and Sound: a standard for bioacoustic data” initiative (still in progress), which offers clear guidelines for folder structure and data handling in bioacoustics. This is helping us design protocols that ensure organization, traceability, and future reuse of our data.

We also discussed annotation criteria with @jsulloa to ensure consistent and replicable labeling that supports the training of automatic models.

We're excited to continue sharing experiences with the Latin America Community; we know we share many of the same challenges, but also great potential to apply these technologies to conservation in our region.


Love this team <3
We would love to connect with teams also working on the whole AI pipeline: pretraining, fine-tuning, and deployment! Training of the models is in progress, and we know a lot can be learned from your experiences!

Also, we are approaching the UI design and development from a software-on-demand philosophy. Why depend on third-party software, having to learn, adapt, and comply with its UX and ecosystem? Thanks to agentic AI in our IDEs, we can quickly and reliably iterate towards tools designed to satisfy our own specific needs and practices, putting the researcher first.

Your ideas, thoughts or critiques are very much welcome!

Kudos for such an innovative approach: integrating additional sensors with acoustic recorders is a brilliant step forward! I'm especially interested in how you tackle energy autonomy, which I personally see as the main limitation of AudioMoths.

Looking forward to seeing how your system evolves!

discussion

WILDLABS AWARDS 2024 - Innovative Sensor Technologies for Sustainable Coexistence: Advancing Crocodilian Conservation and Ecosystem Monitoring in Costa Rica

Hi everyone, it’s time I introduce our project titled “Innovative Sensor Technologies for Sustainable Coexistence: Advancing Crocodilian Conservation and Ecosystem Monitoring...


Super interesting! I'm currently developing accelerometer sensors for fence perimeters in wildlife conservation centres. I think this is a really cool application of accelerometers; I would love to know what the sensor you developed for part 3 looked like, and what type of software/machine-learning methods you've used. Currently my design is a cased Raspberry Pi Pico combined with an accelerometer and ML decision trees to create a low-cost device. Perhaps there is something to be learnt from this project as well :)
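
In case it helps for comparing notes, here is a minimal sketch of the kind of pipeline I mean: windowed accelerometer samples turned into simple features feeding a small decision tree, whose learned thresholds could then be re-implemented on the Pico. The data, window length, and labels below are simulated placeholders, not my actual dataset.

```python
# Minimal sketch: train a decision tree on windowed accelerometer data and
# print the learned rules, which can later be ported to a microcontroller.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

def window_features(xyz, fs=100, win_s=1.0):
    """Split an (N, 3) accelerometer trace into windows and compute simple features."""
    win = int(fs * win_s)
    n = (len(xyz) // win) * win
    windows = xyz[:n].reshape(-1, win, 3)
    return np.hstack([
        windows.mean(axis=1),                            # mean per axis
        windows.std(axis=1),                             # spread per axis
        np.abs(np.diff(windows, axis=1)).mean(axis=1),   # mean jerk per axis
    ])

# Simulated labelled traces: 0 = background, 1 = fence disturbance.
rng = np.random.default_rng(0)
quiet = rng.normal(0, 0.05, size=(60_000, 3))
shaken = rng.normal(0, 0.05, size=(60_000, 3)) + rng.normal(0, 0.5, size=(60_000, 3))
X = np.vstack([window_features(quiet), window_features(shaken)])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) // 2))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
print(export_text(clf))  # thresholds that can be re-implemented on the Pico
```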

discussion

Counting aggregated animals in orthomosaics?

Hello everyone! Just wanted to share two papers that we published recently on approaches to deal with counting errors when surveying wildlife populations using drone-derived...


Thank you for sharing. Would love to learn a bit more about the data workflow.

Last year I tried using QGIS and a few existing models to count the birds from orthomosaics of wading birds in Cambodia, but gave up after dismal results.

discussion

Issue with SongBeam recorder

Hello everyone, I am currently working on a project to measure the impact of industrial noise on the biodiversity of a natural reserve in Veracruz, Mexico. I have been...


Hi Josept! Thank you for sharing your experience! This type of feedback is important for the community to know about when choosing what tech to use for their work. Would you be interested in sharing a review of SongBeam and the AudioMoth on The Inventory, our wiki-style database of conservation tech tools, R&D projects, and organizations? You can learn more here about how to leave reviews!

discussion

PACE LAND DATA AND USER GROUP

The Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) mission has hyperspectral land surface data freely available on Earth Engine. Add the repo in Earth Engine here and...


The Jupyter notebook on projecting and formatting PACE OCI data is also available here!
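
For a quick look at the data before opening the notebook, here is a minimal sketch using the Earth Engine Python API. The collection ID below is a placeholder, not a real asset path; substitute the asset from the shared repo, and adjust the point and dates to your area of interest.

```python
# Minimal sketch of pulling a PACE OCI image in the Earth Engine Python API.
# COLLECTION_ID is a placeholder; use the asset ID from the shared repo.
import ee

ee.Initialize()

AOI = ee.Geometry.Point([-84.0, 10.0])          # example point, roughly Costa Rica
COLLECTION_ID = "path/to/PACE_OCI_collection"   # placeholder, not a real asset ID

image = (
    ee.ImageCollection(COLLECTION_ID)
    .filterBounds(AOI)
    .filterDate("2024-06-01", "2024-07-01")
    .first()
)

# Inspect the available hyperspectral bands and sample values at the point.
print(image.bandNames().getInfo())
print(image.sample(region=AOI, scale=1200).first().getInfo())
```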

Very interesting! Thanks for posting this. I found the NASA ARSET Tutorial quite useful for an overview on PACE before delving into the data. Highly recommend it if you're new to hyperspectral and the PACE mission!

Link

Remote Sensing of Tropical Dry Forests in the Americas

Provides a comprehensive overview of new studies, giving insights into the most endangered ecosystem in the tropics. The book concentrates on four thematic areas: LiDAR remote sensing, remote sensing and ecology, quantification of ecosystem services, and ecology.

discussion

Smart Drone to Tag Whales Project

Hi all, let me please introduce our project, Smart Drone to Tag Whales, awarded in the WILDLABS AWARDS 2024. Our research team (@machadoams, @anakfleck, @...


I would love to hear updates on this if you have a mailing list or list of interested parties!

discussion

'Boring Fund' Workshop: AI for Biodiversity Monitoring in the Andes

Thanks to WILDLABS 'Boring Fund' support, we are hosting a workshop on AI for biodiversity monitoring in Medellín, Colombia, April 21st to 24th. This is a follow-up discussion to...


Hey @benweinstein, this is really great. I bet there are better ways to find bofedales (puna fens) currently than what existed back in 2010. I'll share this with the Audubon Americas team.

Hi everyone, following up here with a summary of our workshop!

The AI for Biodiversity Monitoring workshop brought together twenty-five participants to explore uses of machine learning for ecological monitoring. Sponsored by the WILDLABS ‘Boring Fund’, we were able to support travel and lodging for a four-day workshop at the University of Antioquia in Medellín, Colombia. The goal was to bring together ecologists interested in AI tools and data scientists interested in working on AI applications from Colombia and Ecuador. Participants were selected based on potential impact on their community, their readiness to contribute to the topic, and a broad category of representation, which balanced geographic origin, business versus academic experience, and career progression.

Before the workshop began, I developed a website on GitHub that laid out the aims of the workshop and provided a public focal point for uploading information. I made a number of technical videos, covering subjects like VS Code + Copilot, both to inform participants and to create an atmosphere of early and easy communication. The WhatsApp group, the YouTube channel (link) of video introductions, and a steady drumbeat of short tutorial videos were key in establishing expectations for the workshop.

The workshop material was structured around data collection methods: Day 1) Introduction and Project Organization, Day 2) Camera Traps, Day 3) Bioacoustics, and Day 4) Airborne Data. Each day I asked participants to install packages using conda, download code from GitHub, and be active in supporting each other in solving small technical problems. The large range of technical experience was key in developing peer support. I toyed with the idea of creating a JupyterHub or joint cloud working space, but I am glad that I resisted; it is important for participants to see how to solve package conflicts and the myriad other installation challenges on 25 different laptops.

We banked some early wins to help ease intimidation and create a good flow to the technical training. I started with GitHub and version control because it is broadly applicable, incredibly useful, and satisfying to learn. Using examples from my own work, I focused on GitHub as a way both to contribute to machine learning for biology and to receive help. Building from these command-line tools, we explored VS Code + Copilot for automated code completion, and had a lively discussion on how to balance the utility of these new features with transparency and comprehension.

Days two, three, and four flew by, with a general theme of existing foundational models, such as BirdNET for bioacoustics, MegaDetector for camera traps, and DeepForest for airborne observation. A short presentation each morning was followed by a worked Python example making predictions on new data, annotation using label-studio, and model development with pytorch-lightning. There is a temptation to develop Jupyter notebooks that outline perfect code step by step, but I prefer to let participants work through errors and follow a live-coding strategy. All materials are in Spanish and updated on the website. I was proud to see the level of joint support among participants, and tried to highlight these contributions to promote autonomy and peer teaching.
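
As a flavor of those worked examples, here is a minimal prediction sketch with DeepForest on an airborne image. The file path is a placeholder and the exact calls may differ between library versions; it is only meant to show the shape of the "predict with a foundational model" step.

```python
# Minimal sketch: run a prebuilt DeepForest model on one image and inspect
# the predicted tree-crown bounding boxes. Path is a placeholder.
from deepforest import main

model = main.deepforest()
model.use_release()   # download the prebuilt tree-crown detection weights

boxes = model.predict_image(path="example_orthomosaic_tile.png")
print(boxes.head())   # bounding boxes with confidence scores
```

The same loop applied on the other days: a BirdNET-style detector over audio clips, a MegaDetector-style model over camera-trap frames, then review and correction of predictions in label-studio before fine-tuning with pytorch-lightning.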

Sprinkled amongst the technical sessions, I had each participant create a two-slide talk, and I would randomly select from the group to break up sessions and help stir conversation. I took it as a good sign that I was often quietly pressured by participants to select their talk in our next random draw. While we had general technical goals and each day had one or two main lectures, I tried to be nimble, allowing space for suggestions. In response to feedback, we rerouted an afternoon to discuss biodiversity monitoring goals and data sources. Ironically, the biologists in the room later suggested that we needed to get back to code, and the data scientists said it was great. Weaving between technical and domain expertise requires an openness to change.

Boiling down my takeaways from this effort, I think there are three broad lessons for future workshops.

  • The group dynamic is everything. Provide multiple avenues for participants to communicate with each other. We benefited from a smaller group of dedicated participants compared to inviting a larger number.
  • Keep the objectives, number of packages, and size of sample datasets to a minimum.
  • Foster peer learning and community development. Give time for everyone to speak. Step in aggressively as the arbiter of the schedule in order to allow all participants a space to contribute.

I am grateful to everyone who contributed to this effort both before and during the event to make it a success. Particular thanks go to Dr. Juan Parra for hosting us at the University of Antioquia, UF staff for booking travel, Dr. Ethan White for his support and mentorship, and Emily Jack-Scott for her feedback on developing course materials. Credit for the ideas behind this workshop goes to Dr. Boris Tinoco, Dr. Sara Beery for her efforts at CV4Ecology, and Dr. Juan Sebastian Ulloa. My co-instructors Dr. Jose Ruiz and Santiago Guzman were fantastic, and I’d like to thank ARM through the WILDLABS Boring Fund for its generous support.
