Group

Open Source Solutions / Feed

This group is for anyone interested in open-source technologies for ecology and conservation. We welcome contributions from both makers and users, whether active or prospective. Here, we believe in the need for open-source hardware and software to do good science and research. It is a place to share novel or existing technologies, exchange resources, discuss new projects, ask for advice, find collaborators, and advocate for adopting open-source technologies.

discussion

Software QA Topics

Hi everyone, What should we share or demo about Software Quality Assurance? Alex Saunders and I, the two Software QA people at Wildlife Protection Solutions (WPS), are going to...

1 3

Hi everyone,

What should we share or demo about Software Quality Assurance? 

Alex Saunders and I, the two Software QA people at Wildlife Protection Solutions (WPS), are going to run a community call to share knowledge on software testing and test automation in the third or fourth week of January.

We've listed a few QA topics that we could talk about in this 1-2 minute poll and would like your feedback on topic priority.

Thanks for your feedback and we look forward to connecting! We'll also post when we have an exact date and time pinned down.

Sounds like a great initiative—looking forward to it! I’d love to hear more about your real-world test automation setup, especially any tools or frameworks you’ve found effective at WPS. It’d also be helpful to see how QA fits into your dev workflow and any challenges you’ve faced specific to conservation tech. I just filled out the poll and can’t wait to see what topics get chosen. Thanks, Alex and team, for organizing this!

See full post
Link

Open Source Agriculture Repository

A repository for all things open source in agricultural technology (agritech) development, by Guy Coleman. It accompanies the OpenSourceAg newsletter and aims to collate all open-source datasets and projects in agritech in one place.

0
article

Fires in the Serengeti: Burn Severity & Remote Sensing with Earth Engine

Fires in the Serengeti and Masai Mara National Parks have burned massive areas this year. With Google Earth Engine, it's possible to quantify burn severity using the normalized burn ratio function, then calculate the total...

1 0
This was originally presented on 24 April 2025 as part of a Geospatial Group Cafe. We will post the recording and highlights from the other speakers of that session soon!
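For readers who want to try this themselves, here is a minimal sketch of the dNBR workflow in the Earth Engine Python API; the image collection, band pairing, area of interest, and date windows are illustrative assumptions, not details taken from the talk.

import ee

ee.Initialize()

# Area of interest: a point in the Serengeti (coordinates are illustrative)
aoi = ee.Geometry.Point([34.83, -2.33])

def median_composite(start, end):
    return (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
            .filterBounds(aoi)
            .filterDate(start, end)
            .median())

pre = median_composite('2025-01-01', '2025-02-28')   # assumed pre-fire window
post = median_composite('2025-04-01', '2025-05-31')  # assumed post-fire window

# NBR = (NIR - SWIR2) / (NIR + SWIR2); for Sentinel-2, bands B8 and B12
nbr_pre = pre.normalizedDifference(['B8', 'B12'])
nbr_post = post.normalizedDifference(['B8', 'B12'])

# Burn severity as the difference of pre- and post-fire NBR (dNBR)
dnbr = nbr_pre.subtract(nbr_post)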
See full post
discussion

No-code custom AI for camera trap images!

Thanks to our WILDLABS award, we're excited to announce that Zamba Cloud has expanded beyond video to now support camera trap images! This new functionality...

3 3

When you process videos, don't you first break them down into a sequence of images and then process those images? I'm confused about the distinction between processing videos versus images here.

We do, but the way the models handle the images differs depending on whether they come from videos or static images. A quick example: videos provide movement information, which can be a way of distinguishing between species. We use an implementation of SlowFast for one of our video models that attempts to extract temporal information at different frequencies. If the model has some concept of "these images are time-sequenced", it can extract that movement information, whereas a straight image model has no place for that concept to live. But a straight image model can use more of its capacity for learning, e.g., fur patterns, so it can perform better on single images. We did some experimentation along these lines and found that models trained specifically for images outperformed video models run on single images.
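If it helps to see the idea in code, here is a toy sketch (NumPy only, not our actual implementation) of SlowFast-style sampling, where two pathways read the same clip at different temporal rates:

import numpy as np

# A short clip as (frames, height, width, channels); random values stand in for video
clip = np.random.rand(32, 224, 224, 3)

alpha = 4  # the fast pathway samples alpha x more frames than the slow one

slow_frames = clip[::alpha]  # 8 frames: low rate, capacity spent on appearance
fast_frames = clip[::1]      # 32 frames: high rate, captures motion cues

# A single-image model only ever sees one frame, so temporal cues
# like gait or movement direction have no place to live:
single_frame = clip[0]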

Hope that helps clear up the confusion. Happy to go on and on (and on)...

See full post
discussion

From Field to Funder: How to communicate impact?

Conservation involves a mosaic of actors: field practitioners, local communities, funders, government agencies, scientists, and more. Each one needs different levels of...

2 2

Great questions @LeaOpenForests !

I don't have concrete answers, since I am not a stakeholder in any particular project. Based on my experience researching the potential for a similar one-stop shop for science metrics, I would suggest that there is no simple solution: different actors genuinely need, and hold, different views on presenting and viewing impact. That means possible gaps between what one group of actors needs and what another is willing or able to produce. One can hope, search, and aim for sufficient overlap, but I don't see how the views would necessarily or naturally overlap.

Still, I would guess that if there are dimensions of overlap, they are time, space, and actor-networks.

I have posted about this in a different group, but I love boosting the impact of my communication through the use of visuals.

Free graphics relating to conservation technology and the environment are available at:

  1. National Environmental Science Program Graphics Library

    Examples of the symbols available include a feral cat with a tracking collar and a cat grooming trap, courtesy of the NESP Resilient Landscapes Hub, nesplandscapes.edu.au.

  2. UMCES Integration and Application Network Media Library

See full post
discussion

DeepFaune v.1.3 is out!

Hi, just wanted to let whoever is interested know that v1.3 of DeepFaune is out! DeepFaune is software that runs on any standard computer and allows you to identify species in...

3
See full post
discussion

scikit-maad community

 We, @jsulloa and @Sylvain_H, are the main contributors of scikit-maad, an open source Python package dedicated to the quantitative analysis of environmental audio recordings...

20 8

Hello!

I could be wrong, but from looking at the source code of the ACI calculation in scikit-maad, it appears that it automatically sets j = the length of the signal. If you want to break it down into, e.g., 5 windows, you can split the spectrogram into 5 chunks along the time axis and then sum the ACI over the windows. This gives a result more similar to what you got using the other methods. What did you set as nbwindows for seewave?

 

from maad import sound, features

# Load the recording and compute an amplitude spectrogram
s, fs = sound.load('test_data_ACI/LINE_2003-10-30_20_00_34.wav')
Sxx, tn, fn, ext = sound.spectrogram(s, fs, mode='amplitude')

full_time = Sxx.shape[1]            # number of time samples in the spectrogram
j_count = 5                         # number of chunks to break it into
window_size = full_time // j_count  # time samples per chunk

ACI_tot = 0
for i in range(j_count):
    chunk = Sxx[:, i * window_size : (i + 1) * window_size]
    _, _, ACI = features.acoustic_complexity_index(chunk)
    ACI_tot += int(ACI)

This gives ACI_tot = 1516

Hi all, 

I have recently been utilising the ROI module of scikit-maad to locate non-biophonic sounds across low-sample rate Arctic hydrophone datasets and have a query about how ROI centroids are calculated...

Looking at the source code for the function "centroid_features" in .\maad\features\shape.py, I can see that the function calls "center_of_mass" from .\scipy\ndimage\_measurements.py. This to me suggests that the centroid should be placed where energy is focussed, i.e. the middle of the acoustic signature captured by the masking stage of ROI computations.

I'm a bit confused as to why the centroids I have appear to be more or less placed in the centre of the computed ROIs, regardless of the energy distribution within the ROI. The sounds I'm capturing have energy focussed towards lower frequencies of the ROI bands, so I would have expected the centroid to shift downwards as well.

Has anyone modified how ROI centroids are defined in their work? I'd be quite interested to set up centroids to signify where the peak energy level lies in the ROI, but I'm not quite sure how to do this cleanly.
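For illustration, here is a minimal sketch of what I mean: an energy-weighted (or peak-energy) centroid computed directly from the spectrogram and a binary ROI mask (toy shapes, not scikit-maad internals):

import numpy as np
from scipy import ndimage

# Toy stand-ins: replace with a real spectrogram and ROI mask
Sxx = np.random.rand(128, 1000)       # spectrogram (frequency x time)
mask = np.zeros_like(Sxx, dtype=bool)
mask[20:40, 100:200] = True           # one rectangular ROI

energy = np.where(mask, Sxx, 0.0) ** 2

# Energy-weighted centroid: pixels contribute in proportion to their
# energy, so it shifts toward where the energy sits inside the ROI
f_centroid, t_centroid = ndimage.center_of_mass(energy)

# Or simply take the highest-energy pixel in the ROI
f_peak, t_peak = np.unravel_index(np.argmax(energy), energy.shape)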

Any advice would be greatly appreciated, thanks!

Kind regards,

Jonathan

 

New stable release: v1.5.1

We are pleased to announce the latest release, with several important enhancements, fixes, and documentation improvements to ensure compatibility with the latest versions of SciPy and scikit-image, as well as with Xeno-Canto.

In this new version, two new alpha indices are implemented, aROI and nROI, the latter being a good proxy for the average species richness per 1-minute soundscape.



 

See full post
discussion

The Boring Fund: Standardizing Passive Acoustic Monitoring (PAM) data - Safe & sound

Thanks to the Boring Fund, we are developing a common standard for Passive Acoustic Monitoring (PAM) data. Why it's important: PAM is rapidly growing, but a core bottleneck is the...

7 13

This is such an important project! I can't wait to hear about the results. 

Hey Sanne, awesome - we definitely need a consistent metadata standard for PAM.

If you haven't already, I would suggest sharing this on the Conservation Bioacoustics Slack channel and the AI for Conservation Slack channel. You would reach a lot of active users of PAM, including some folks who have worked on similar metadata efforts. 

If you're not a member of either one of those, DM me your preferred email address and I'll send you an invite!

Hello everyone,

Thank you all for your contribution!

You can read some updates about this project in this post.

Julia

See full post
article

Nature Tech for Biodiversity Sector Map launched!

Carly Batist and 1 more
Conservation International is proud to announce the launch of the Nature Tech for Biodiversity Sector Map, developed in partnership with the Nature Tech Collective! 

1 0
Thanks for sharing @carlybatist and @aliburchard! About the first point, lack of data integration and interpretation will be a bottleneck, if not a death blow, to the whole...
See full post
discussion

Nature Tech Unconference - Anyone attending?

Hi all, is anyone planning to attend the Nature Tech Unconference on 28th March at the London School of Economics campus in London, UK? (The event is free to attend but...

8 1

The Futures Wild team will be there :)

See full post
discussion

Generative AI for simulating landscapes before and after restoration activities

Hi all. Has anyone come across any generative AI tools that could be trained and used to generate photorealistic landscapes (in a web application) from habitat maps and then re-...

1 0

Yep, we are working on it:

  1. Segment
  2. Remove the unwanted ecosystem
  3. Get the local potential habitat
  4. Generate
  5. Add to the picture
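As a minimal sketch of what steps 4 and 5 could look like with an off-the-shelf inpainting model (the model choice, prompt, and file names are assumptions, not this project's actual stack):

from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# An off-the-shelf inpainting model; many alternatives exist
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting"
)

image = Image.open("site_photo.png").convert("RGB")   # the landscape photo
mask = Image.open("segment_mask.png").convert("RGB")  # white where steps 1-2 removed unwanted cover

# Steps 4-5: generate the target habitat inside the masked area and
# composite it back into the original picture
result = pipe(
    prompt="restored native woodland, photorealistic",  # informed by step 3
    image=image,
    mask_image=mask,
).images[0]
result.save("after_restoration.png")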

 

See full post
discussion

United Nations Open Source Principles

FYI, I just came across the United Nations Open Source Principles, which were recently adopted by the UN Chief Executive Board's Digital Technology Network (DTN): It has been...

1 5

All sound! It would be nice if there were only five, though.

See full post
discussion

Presenters wanted: showcase your open source income streams

A burning topic in the open source community is how to generate income (in any way, including commercialisation) with open source software or hardware. Recently, questions were...

5 4

It's great to see so much interest in presenting at this webinar. We have also received interest through direct messages and in meetings we've happened to have. Thank you all for volunteering!

We will get back to you after Wednesday next week (the 12th), when the next coordination meeting takes place.

That date will also be the deadline for those still considering volunteering.

See full post
discussion

What are open source solutions anyway?

What “counts” as open source? And why is that important to biodiversity conservation? It's great to be one of the three conveners of the WILDLABS Open Source...

33 12

Open source technologies are a game-changer for biodiversity conservation. They give us the freedom to use, study, modify, and share vital tools and knowledge that help advance research in meaningful ways. For conservationists, this means we can adapt technologies to meet local needs, improve existing tools, and make new innovations available to everyone—creating a more collaborative and sustainable future for our planet.

It’s exciting to see the impact of open source in conservation already, with tools like Mothbox, Fieldkit, and OpenCTD helping to drive progress. I'm curious—how do the formal definitions of open source resonate with you? How do they shape the way we approach conservation?

Also, if you're interested in how open source AI can support conservation efforts, check out this article: Open Source AI Agents: How to Use Them and Best Examples.

Can’t wait to hear your thoughts! Let's keep the conversation going.

Sorry to be a stickler on syntax when there is a richer discussion about community here - but I believe a true "open source" project is a functionally complete reference design that anyone can build upon with no strings attached. If the community isn’t provided with enough information to fully build and iterate on the design independently, then the project doesn’t truly meet the spirit of open source. 

As a developer and engineer, I’ve observed that sometimes projects crowdsource free engineering work under the guise of being "open source." While this can have benefits, it can feel like asking for a free lunch from clients and customers. 

Advanced features—like enterprise-level data management or tools for large-scale deployments—can reasonably remain proprietary to sustain the project financially. Transparency is critical here. If the foundational components aren't fully open, it would be more accurate to describe the project as "community-driven" or "partially open." That way, as an engineer/developer, I wouldn't go to explore a project marked "open source" only to find that I'd been misled.

Just my two cents, and I really appreciate the thoughtful discussion here. The open source community has been a massive influence on me; everything I do at work would not be possible without it. In many ways, "open source" and "public domain" projects represent the true know-how of our society.

See full post
discussion

Definitions for open source software & hardware and why they're important

Recent conversations (including this previous thread) have reminded me that while I've been involved in various open source tech communities for years, I sometimes implicitly –...

2 0

Thanks for this excellent and thought-provoking post, Pen. I agree this looks like a binary yes/no issue, but there is a spectrum. There could also be philosophical nuances. For example, does excluding honey from a vegan diet meet the ethical criteria of veganism? It's an animal product, so yes, but beekeeping generally doesn't have the same exploitative potential as cow, sheep, or pig husbandry, right? However, looking strictly at the definition, honey is out if you want to be vegan.

Back to software! Isn't the main issue that companies falsely claim to offer open source hardware/software? To avoid this, do you then have to create an accreditation system? Who polices it? Is it fair? Would users care that their software has an accredited open source stamp of approval? Ultimately, we need definitions to set boundaries and to speak a common language.

Thanks @VAR1, great insights! Funny you mention the honey thing; @hikinghack said the same in response on the GOSH forum.

I think the point I'm trying to make with the vegan comparison is that while it might not be 100%, it is close enough for us to have productive conversations about it without running in circles because we can't even agree on what we are talking about. 

As for open source tech, there actually is accreditation for open source hardware (at least of a sort). The Open Source Hardware Association has a fairly mature certification program:

I am genuinely undecided on whether such a formal accreditation system is required for open source software. My undecidedness comes back to the food/agriculture analogy, where a similar issue exists for organic certification. Being certified organic could, in some cases, be beneficial. However, certification can also be very onerous for small organic farmers who can't afford it.

But before we even think about accreditation, I echo your last sentence that we need definitions to define boundaries. These definitions, as I argue in my original post above, are not only about principles and philosophy; they are also a practical necessity for enabling effective communication!

See full post
discussion

GPS Tracker For Wildlife

Hello everyone! I'm Akio, and I'm new to this group. I'd love to start a discussion about GPS trackers for wildlife. As the developer of Loko—an open-source, offline GPS tracker—I'...

5 1

Thank you for this valuable information!

Some of the features you mentioned can be added to Loko quickly, while others require more consideration. Loko's communication is one-way, meaning the transmitter doesn't know whether the receiver has received the data. This design choice was made to conserve battery life. However, all data is logged internally and can be accessed via USB.

I will add GeoTIFF loading to the Loko App. Currently, Loko is not suitable for wildlife tracking because it is not waterproof, but I am working on improving its mechanical design.

Loko already supports multiple connections, allowing many transmitters to connect to a single receiver and/or multiple receivers.

Regarding encryption: what do you mean by "encryption should not be optional"?
Are you suggesting that communication should always be encrypted by default? On Loko I made it user-configurable because encrypted data packets are 32 bytes, whereas unencrypted ones are 18 bytes. A smaller data packet improves reception sensitivity and extends the transmission range.
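To illustrate where the size gap comes from: if the payload is encrypted with a 16-byte block cipher such as AES (an assumption for illustration; the packet layout below is hypothetical, not Loko's actual format), 18 bytes of plaintext must pad out to two blocks:

import struct

# Hypothetical 18-byte packet: id (uint16), lat/lon (float32 each),
# altitude (int16), battery (uint8), flags (uint8), counter (uint32)
packet = struct.pack('<HffhBBI', 1, -2.333, 34.833, 120, 87, 0, 42)
print(len(packet))             # 18 bytes unencrypted

# A 16-byte block cipher rounds the payload up to whole blocks:
blocks = -(-len(packet) // 16)  # ceil(18 / 16) = 2
print(blocks * 16)             # 32 bytes encrypted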

In your opinion, what would be a reasonable price for such a device? This is very important when adding new hardware features.

Cheers, 

Akio)

Hi Herhanu, thanks for your valuable feedback.

  1. Can you explain what type of release mechanism you mean? A picture would be very helpful. Do you mean a remote release mechanism that activates and releases the tracker from the crocodile collar?
  2. How far does the data need to be sent? With a mesh network of Loko receivers, a wide area can be covered, I guess.
  3. Long-distance transmission is very challenging when the transmitter is very close to the ground, especially on crocodiles.

For what purpose would you use accelerometer data? Is there a specific use case?

Cheers, )

Sure, Akio! Happy to answer!
1. Yes, something like that. The few existing ones, I guess, are already applied as GPS collars (literally collars), usually for big cats and other big mammals. There are also GPS tags for cetaceans that can pop off, but they only remain on the animal for several days, CMIIW (e.g. see the links below).
link 1

link 2

link 3

2. I guess it depends on your research questions or project objectives. But crocodiles can have vast home ranges, from 100 ha to 10,000 ha (depending on the species). My species uses at least 500 ha, and the farthest points can be 15 km apart.

3. An accelerometer, especially on a crocodile, can give insight into their movement: they can be as random as can be, or stay statue-still for hours (like when you look at crocs in the zoo). Of course, this depends on your objectives.

Hope this helps!

Cheers~ 

See full post
discussion

Camera Trap Data Visualization Open Question

Hi there, I would like to get some feedback and insight into how practitioners manage and visualize their camera trap data. We realized that there already exist many web-based...

6 0

Hey Ed! 

Great to see you here, and thanks a lot for your thorough answer.
We will be checking out Trapper for sure - cc @Jeremy_! A standardized data exchange format like Camtrap DP makes a lot of sense, and we have it in mind as we build the first prototypes.
 

Our main requirements are the following:

  • Integrate with the camtrap ecosystem (via standardized data formats)
  • Make it easy to run for non-technical users (most likely an Electron application that works across OSes)
  • Make it useful for exploring camtrap data and generating reports

 

In the first prototyping stage, it is useful for us to keep things lean while keeping the interface (the data exchange format) in mind so that we can move fast.
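For context on the integration point above: Camtrap DP datasets are standard Frictionless Data Packages, so a first prototype can load one with a few lines of Python (the path is illustrative; field names are per the Camtrap DP 1.0 spec as I understand it):

from frictionless import Package

# A Camtrap DP dataset is described by its datapackage.json
package = Package("my-camtrap-dp-dataset/datapackage.json")

# The standard defines three resources: deployments, media, observations
observations = package.get_resource("observations")
for row in observations.read_rows():
    print(row["scientificName"], row["eventStart"])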


Regards,
Arthur

A quick question on this topic, to take advantage of those who already know a lot about it. Once you have extracted all your camera data and are going through the AI object-detection phase that identifies the animal types: which file format, holding all of the time + location + label data for the photos, do most people consider the most useful? I imagine it's whatever format the most expressive visualization software around uses. Is that correct?

A quick look at the Trapper format suggested to me that it holds metadata from the camera traps, used to perform the AI matching phase. But it was a quick look; maybe it's something else? Is the Trapper format also for holding the labelled results? (I might be asking the same question as the person who started this thread, but in different words.)
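For reference, one widely used results format for the AI phase is the MegaDetector batch-output JSON, which stores per-image detections with normalized bounding boxes and confidence scores (file names below are illustrative):

import json

results = {
    "detection_categories": {"1": "animal", "2": "person", "3": "vehicle"},
    "images": [
        {
            "file": "cam01/IMG_0001.JPG",
            "detections": [
                # bbox is [x_min, y_min, width, height], normalized to image size
                {"category": "1", "conf": 0.92, "bbox": [0.41, 0.35, 0.18, 0.22]}
            ],
        }
    ],
}

with open("detections.json", "w") as f:
    json.dump(results, f, indent=2)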

Another question. Right now, pretty much all camera traps trigger on either PIR sensors or small AI models. Small AI models tend to have the limitation that they only accurately detect and recognise animal types at close distances, where the animal appears very large, and I have question marks over whether small models, even in those circumstances, avoid making a lot of classification errors (I expect they do make them, and the errors are simply sorted out back at the office, so to speak). PIR sensors typically only see animals within, say, 6-10 m; maybe an elephant could be detected a bit further, and small animals only closer still.

But what about when camera traps can reliably see and recognise objects across a whole field, perhaps hundreds of meters?

Then, in principle, you don't have to deploy as many traps, for a start. But I would expect you'd need a different approach to reporting and visualizing the data, as the coordinates of the trap itself are not going to give you much information. We would be in a position to gather much more accurate and rich biodiversity information.

Maybe it's even possible to determine with greater accuracy where several different animals in the same camera trap image are spatially located, by knowing the 3D layout of what the camera can see and the location and size of each animal.
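As a rough sketch of the geometry involved, assuming flat ground and a simple pinhole camera pointed slightly downward (all parameter values below are illustrative assumptions):

import math

# Distance along flat ground to the pixel row of an animal's feet,
# for a pinhole camera at height cam_height_m, tilted down by tilt_deg
def ground_distance(y_px, cam_height_m=1.0, tilt_deg=10.0,
                    fy_px=1400.0, cy_px=540.0):
    # Extra downward angle of this pixel relative to the optical axis
    phi = math.atan((y_px - cy_px) / fy_px)
    depression = math.radians(tilt_deg) + phi
    if depression <= 0:
        return float('inf')  # at or above the horizon: no ground intersection
    return cam_height_m / math.tan(depression)

print(round(ground_distance(700), 1))  # feet lower in the frame -> closer to the camera

With a per-animal distance like this, plus the trap's own coordinates and bearing, each detection could in principle be registered as its own point rather than inheriting the trap's location.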

I expect that current camera trap data formats may fall short of expressing that information in a sufficiently useful way, considering the extra information available in principle; it could be multiple coordinates per species for each image that need to be registered.

I'm likely to be confronted with this soon, as the systems I build use state-of-the-art, large-parameter models that can recognise species types over much greater distances. In a recent discussion here, I showed detection of a polar bear at a distance of 130-150 m.

Right now, I would say it's unknown how much more information about species we will be able to gather with this approach, as images were not being triggered in this manner until now. Maybe it's far greater than we would expect? We have no idea right now.

See full post