Software plays an increasingly vital role in conservation, helping to protect and manage biodiversity through innovative technological solutions. It also facilitates collaboration among researchers, local communities, and governments, empowering them to develop sustainable conservation strategies. However, many of us working in conservation tech don't have the benefit of a large team of software experts to collaborate with or bounce ideas off of. We aim to change that.
This group is for anyone interested in applying software to conservation and wildlife research. Whether you're a developer eager to contribute to conservation or a newbie with valuable data and ideas but limited software experience, this group connects people with diverse expertise. It provides a space for asking questions, sharing resources, and staying informed about new technologies and best practices. We are also committed to supporting technologists and conservationists from the Global South, ensuring that everyone has access to the tools, knowledge, and opportunities to contribute meaningfully.
Our goal is to foster collaboration and avoid "reinventing the wheel" by sharing solutions, whether it's an application, design approach, or a simple script. We also aim to lower barriers to entry by offering mentorship and guidance and providing feedback on technical ideas. This supportive community is a place to learn, connect, and contribute to the advancement of conservation through software. Whether you're looking for software and mobile app developers to help you with your conservation tech needs, have questions about development, are looking for resources, or would like to share your own app, software, or gaming tools, this is the group for you!
Resources
Header photo: Trevor Hebert
Group curators
Wildlife Protection Solutions (WPS)
Software Engineer in Conservation Tech
- 2 Resources
- 15 Discussions
- 10 Groups
Wildlife Protection Solutions (WPS)
Director of Technology at Wildlife Protection Solutions. Primarily focuses on leveraging machine learning and advanced data analytics to combat poaching, monitor biodiversity, and predict environmental threats.
- 0 Resources
- 1 Discussions
- 5 Groups
- @KB | she/her
Wildlife ecologist specializing in animal movement modeling and habitat selection with a strong interest in conservation policy and management decisions.
- 0 Resources
- 0 Discussions
- 17 Groups
- @trish_lai | she/her
UCD SVM Student
- 0 Resources
- 0 Discussions
- 11 Groups
- @dmwilliams | she/her
Behavioral ecologist and community science coordinator
- 0 Resources
- 0 Discussions
- 8 Groups
- @tkswanson | she/her
San Diego Zoo Wildlife Alliance
Research Coordinator II for the Conservation Technology Lab at SDZWA

- 2 Resources
- 2 Discussions
- 7 Groups
- @Frank_van_der_Most | He, him
RubberBootsData
Field data app developer, with an interest in funding and finance
- 54 Resources
- 177 Discussions
- 9 Groups
- @TaliaSpeaker | She/her
WILDLABS & World Wide Fund for Nature/ World Wildlife Fund (WWF)
I'm the WILDLABS Research Specialist at WWF-US
- 23 Resources
- 62 Discussions
- 25 Groups
WILDLABS & Wildlife Conservation Society (WCS)
I'm the Bioacoustics Research Analyst at WILDLABS. I'm a marine biologist with a particular interest in the acoustic behavior of cetaceans. I'm also a backend web developer, hoping to use technology to improve wildlife conservation efforts.
- 27 Resources
- 34 Discussions
- 34 Groups
- @Adrien_Pajot | He/His
WILDLABS & Fauna & Flora
Hi! I am Adrien, a dedicated French ornithologist and engineer committed to biodiversity conservation. I joined the WILDLABS team as a project manager in October 2023!
- 31 Resources
- 132 Discussions
- 3 Groups
Sustainability Manager for CERES Tag Ltd, an animal health company working on animal monitoring, conservation, and anti-poaching/rural crime across wildlife, livestock, equine, and companion animals. #CeresTrace #CeresWild #CeresRanch
- 2 Resources
- 20 Discussions
- 24 Groups
- @carlybatist | she/her
ecoacoustics, biodiversity monitoring, nature tech
- 111 Resources
- 356 Discussions
- 19 Groups
- @himalyanibex | He/him
Tech entrepreneur, ex-Microsoft, ex-Google. I believe in technology for the greater good, with an insatiable curiosity for knowledge and understanding across disciplines like game theory, mechanism design, economics, ecology, and agent-based modeling. Hands-on with IoT, AI/ML, etc.
- 0 Resources
- 2 Discussions
- 5 Groups
Movement ecologist using conservation technology to study the behaviors of animals in the wild and understand how they cope with change to most effectively address conservation- and conflict-related issues.
- 0 Resources
- 9 Discussions
- 11 Groups
Conservation International is proud to announce the launch of the Nature Tech for Biodiversity Sector Map, developed in partnership with the Nature Tech Collective!
1 April 2025
Modern GIS is open technology, scalable, and interoperable. How do you implement it? [Header image: kepler.gl]
12 March 2025
The Conservation Technology Laboratory within the Population Sustainability department is seeking two fellows for summer 2025
5 February 2025
OPeNDAP is looking for a Senior Software Developer who is passionate about working on open source solutions for scientific data access.
5 February 2025
We are looking for a person with programming skills in R and/or Python.
4 February 2025
Key Conservation is seeking an experienced Software Developer to join our team.
28 January 2025
Article
SPARROW: Solar-Powered Acoustics and Remote Recording Observation Watch
18 December 2024
Develop state-of-the-art interactive ML-based tools for biodiversity conservation and related applications
11 December 2024
May be of interest to WILDLABS in terms of open source solutions, conservation tech, and learning about similar efforts. The founder of Open Sustainable Technology wrote this post about their work: https://...
2 December 2024
46 Products
Recently updated products
Description | Groups | Updated
---|---|---
Hi Barbara, You are right in that cyberpoaching is becoming an increasing risk. I have seen various cases where information posted on social media has provided too much... | Software Development | 9 years 4 months ago
Can games raise awareness of conservation issues like the illegal wildlife trade? United for Wildlife's Peter Jacobs has worked with... | Software Development | 9 years 5 months ago
AniMove Summer School 2025
21 May 2025 1:08pm
Software QA Topics
9 January 2025 12:00pm
Prototype for exploring camera trap data
20 January 2025 10:23pm
6 May 2025 4:53pm
- Regarding species richness, isn't it covered by the activity tab where we can see the entire list of detected species? What else do you think would be helpful? I can imagine a better UI focusing on species with more information for each species and easier search but the raw info would be roughly the same.
I think the app sort of covers this, using the pie chart overlay on leaflet in the activity tab. However, it would be nice to have a more direct way of visualizing species richness (e.g. scale the radii of the circle markers with the number of species detected). In addition to this, you may want to think about visualizing simple diversity indices (there's alpha diversity, which captures the species richness at each trap; gamma diversity, to summarize the total richness in the area; and beta diversity, to assess how different species compositions are between traps). Note: I do not use diversity indices often enough to provide more specific guidance. @ollie_wearn is this what you were referring to?
- Regarding occupancy, that makes sense to me. The only challenge is that I'm not using Python or R at the moment, because we use JavaScript for easier user interfaces with web technologies. It's still possible to add Python support, but I'd delay it as much as possible to keep things simple. Anyway, it's a technical challenge that I'm pretty sure I can solve.
I can help you with setting up the models in R, Python or Stan/C++. But I have no idea how to overcome the technical challenge that you are referring to. I agree with @ollie_wearn that allowing variable selection and model building would take this too far. One thing I would suggest is to allow users to fit a spatially-explicit version of the basic occupancy model (mind that these can be slow). This type of model leverages the correlations in species detections between trap locations to estimate differences in occupancy across the study area rather than just estimating a single occupancy for the entire area.
- What about relative abundance? My readings always mention occupancy, species richness, abundance, and density. I've read and asked enough about density to know that it's hard for unmarked animals and might not be necessary. If I understood correctly, relative abundance can give similar insights, and as a conservationist, you probably want to know the trend of relative abundance over time.
Yes, I would leave users with a UI switch that lets them pick one of:
1. the number of observations
2. the number of individuals detected
3. the relative abundance index or RAI (based on 1.)
4. the RAI (based on 2.)
to visualize on the leaflet map, and on the bar chart on the side in the activity tab.
Regarding density: you could add a tab that calculates density using the Random Encounter Model (REM), which is often used when estimating density of unmarked animals without info on recaptures.
Regarding activity patterns: I would also add a tab where users can visualize either diel or annual activity cycles (often called activity patterns) computed through the activity R package (or integrate this into an existing tab). And maybe even allow computing overlap in daily activity cycles among selected species.
If you manage to include all of these, then I think your app covers 90% of use cases.
Some other features worth discussing:
- merging/ comparing multiple projects
- in line with calculating overlap between activity cycles, allow computing a spatial overlap between two species or even a spatio-temporal overlap
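The RAI and REM pieces discussed above can be sketched in a few lines. This is an illustrative sketch, not the app's implementation: the function names and inputs are assumptions, and the REM formula follows Rowcliffe et al. (2008).

```python
import math

# Illustrative sketch (names and inputs assumed, not from the app itself).
def rai(detections: int, trap_nights: float) -> float:
    """Relative Abundance Index: detections per 100 trap-nights."""
    return 100 * detections / trap_nights

def rem_density(y: int, t: float, v: float, r: float, theta: float) -> float:
    """Random Encounter Model (Rowcliffe et al. 2008):
    D = (y/t) * pi / (v * r * (2 + theta))
    y: detections, t: survey effort (camera-days), v: day range (km/day),
    r: detection radius (km), theta: detection angle (radians)."""
    return (y / t) * math.pi / (v * r * (2 + theta))

print(rai(30, 600))  # 5.0 detections per 100 trap-nights
```

Both quantities are simple enough to compute client-side in JavaScript as well, which fits the app's web-based stack.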
7 May 2025 10:49am
@Jeremy_ For the Python implementation of basic occupancy models (as suggested by @ollie_wearn ), please refer to these two projects:
I second @martijnB suggestion to use spatially explicit occupancy models (as implemented in R, e.g., https://doserlab.com/files/spoccupancy-web/). However, this would need to be added to both of the aforementioned Python projects.
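As a baseline alongside those model-based approaches, the naive occupancy estimate from a detection-history matrix is a one-liner. A sketch with toy data (not taken from either of the Python projects mentioned):

```python
import numpy as np

# Toy detection history: rows = camera trap sites, columns = survey occasions,
# 1 = species detected on that occasion. (Illustrative data only.)
history = np.array([
    [0, 1, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
])

# Naive occupancy: share of sites with at least one detection. Proper occupancy
# models also estimate detection probability, so this is a biased lower bound
# on true occupancy whenever detection is imperfect.
naive_occupancy = (history.sum(axis=1) > 0).mean()
print(naive_occupancy)  # 0.5
```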
17 May 2025 10:53am
Lively and informative discussion, I would very much like to contribute if there is some active development work with regards to this.
I have recent experience with using Model Context Protocol (MCP) to integrate various tools & data repositories with LLMs like Claude. I believe this could be a good idea/path whereby we can do the following:
1. Use the images & labels along with any metadata, and chunk/index/store them in a vector DB.
2. Integrate with existing data sources by exposing the data through an MCP server.
3. Use MCP-friendly LLM clients (like Claude) to query, visualize, and do other open-ended things, leveraging the power of LLMs and camera trap data from various sources.
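Step 1 can be illustrated with a toy retrieval loop. The records and 3-dimensional "embeddings" below are invented stand-ins for what a real embedding model and vector database would provide:

```python
import numpy as np

# Fake camera trap records with made-up 3-d embeddings standing in for the
# vectors an embedding model would produce from labels/metadata.
records = {
    "leopard at waterhole, night": np.array([0.9, 0.1, 0.0]),
    "elephant herd, daytime":      np.array([0.1, 0.9, 0.2]),
    "empty frame, wind trigger":   np.array([0.0, 0.1, 0.9]),
}

def top_match(query_vec: np.ndarray) -> str:
    """Return the record whose embedding has the highest cosine similarity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(records, key=lambda k: cos(records[k], query_vec))

print(top_match(np.array([1.0, 0.0, 0.1])))  # leopard at waterhole, night
```

An MCP server would wrap a lookup like this as a tool the LLM client can call.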
Regards,
Ajay
The Boring Fund 2024 - MoveApps
17 January 2025 12:54pm
2 April 2025 12:15pm
We are pleased to inform you that we have now finalized points 2 and 3. Here are some details of the update:
- App browser improvements:
  - Improved overview and search: we have added a description of each category, and the search and filtering options are improved.
  - Searching for Apps within a Workflow: we have added the option to include Apps that are not compatible with the IO type, making it easier to decide if a translator App is needed to include one of the incompatible Apps.
- Public Workflows improvements:
  - Improved overview: the public Workflows are now organized by categories, which can also be used for filtering.
  - More information: the details overview now contains the list of Apps included in each Workflow.
  - Sharing Workflows: when creating a public Workflow you will have to select one or more existing categories, but you can also always request a new category.
Go and check it out in MoveApps!
4 April 2025 2:53pm
That's so great, thanks for the update!
16 May 2025 12:20pm
We are pleased to inform you that we have implemented points 1 and 4 and with this have finalized the project. The latest improvements:
- Improved findability of help documentation: we have started to populate the platform with links (question mark icon) to the relevant sections of the user manual.
- The log files of each App can now be downloaded and, when an error occurs, sent directly to MoveApps support. Find more details here.
Again, a great thank you for giving us the opportunity to implement these changes. We think they have greatly improved the user friendliness of MoveApps.
Prospective NSF INTERN
11 February 2025 10:00am
8 May 2025 8:51am
My name is Frank Short and I am a PhD Candidate at Boston University in Biological Anthropology. I am currently doing fieldwork in Indonesia using machine-learning powered passive acoustic monitoring focusing on wild Bornean orangutans (and other primates). I am reaching out because as a student with a National Science Foundation Graduate Research Fellowship, I am eligible to take advantage of the NSF INTERN program which supports students to engage in non-academic internships through covering a stipend and other expenses, with the only caveat being that the internship must be in-person and not remote. I was wondering if any organizations in conservation technology would be interested in a full-time intern that would be coming in with their own funding?
In addition to experience with machine learning and acoustics through training a convolutional neural network for my research, I also have worked with GIS, remote sensing, and animal movement data through other projects. Further, I have experience in community outreach both in and outside of academic settings, as I previously worked for the Essex County Department of Parks and Recreation in New Jersey for 3 years where I created interpretive signs, exhibits, newsletters, brochures, and social media posts. Now while doing my fieldwork in Indonesia, I have led hands-on trainings in passive acoustic monitoring placement and analysis as well as given talks and presentations at local high schools and universities.
I would love to be able to use this opportunity (while the funding still exists, which is uncertain moving forward due to the current political climate in the US) to exercise and develop my skills at a non-academic institution in the conservation technology sphere! If anyone has any suggestions or is part of an organization that would be interested in having me as an intern, please contact me here or via my email: fshort@bu.edu. Thank you!
Hi Frank, your work sounds incredibly valuable and well-aligned with current needs in conservation tech. With your strong background in machine learning, acoustics, GIS, and outreach, you’d be an asset to many organizations. I’d recommend looking into groups like Rainforest Connection, Wildlife Acoustics, or the Conservation Tech Directory (by WILDLABS)—they often work on acoustic monitoring and might be open to in-person internships, especially with funding already in place. Best of luck finding the right match—your initiative is impressive!
Sustainable financing for open source conservation tech - Open Source Solutions + Funding and Finance Community Meeting

1 May 2025 11:52am
No-code custom AI for camera trap images!
25 April 2025 8:33pm
28 April 2025 7:03am
When you process videos, do you not first break them down into a sequence of images and then process the images? I'm confused as to the distinction between processing videos versus images here.
28 April 2025 3:57pm
We do, but the way the models handle the images differs depending on whether they're coming from videos or static images. A quick example: videos provide movement information, which can be a way of distinguishing between species. We use an implementation of SlowFast for one of our video models that attempts to extract temporal information at different frequencies. If the model has some concept of "these images are time-sequenced", it can extract that movement information, whereas if it's a straight image model, that concept doesn't have a place to live. But a straight image model can use more of its capacity for learning e.g. fur patterns, so it can perform better on single images. We did some experimentation along these lines and found that models trained specifically for images outperformed video models run on single images.
Hope that helps clear up the confusion. Happy to go on and on (and on)...
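The "different frequencies" idea can be sketched as frame-index sampling: the slow pathway samples frames sparsely for appearance, the fast pathway densely for motion. The strides below are assumptions for illustration, not the actual model configuration:

```python
import numpy as np

# Sketch of SlowFast-style temporal sampling (parameters assumed, not the
# production pipeline): slow pathway = sparse frames for appearance,
# fast pathway = dense frames (alpha times more) for motion.
def slowfast_sample_indices(n_frames: int, alpha: int = 4, slow_stride: int = 16):
    """Return (slow, fast) frame indices; fast samples alpha times more often."""
    slow = np.arange(0, n_frames, slow_stride)
    fast = np.arange(0, n_frames, max(slow_stride // alpha, 1))
    return slow, fast

slow, fast = slowfast_sample_indices(64)
print(len(slow), len(fast))  # 4 16
```

A single-image model sees only one of these frames, so the time-sequenced structure has nowhere to enter the computation.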
28 April 2025 5:58pm
Interesting. Thanks for the explanation. Nice to hear your passion showing through.
Overview of Image Analysis and Visualization from Camera traps
28 April 2025 8:09am
AI/ML opportunities
14 April 2025 4:47am
19 April 2025 2:08am
Ritika,
All the best! I hope someone provides a more substantive answer!
I have also graduated with masters in AI and ML recently. Difference being I am at the end of my IT career. I am looking for a career switch to biodiversity, wildlife conservation, sustainability or climate change.
I am trying to do my best at the modern job search. Just warming up to it: LinkedIn, posting relevant content, being consistent, virtual networking, in-person networking. As a soon-to-be fresh graduate, you have access to a huge student and academic network. Keep at it consistently and I am sure you will find something.
Share the good news when it happens. :)
scikit-maad community
8 August 2024 10:16am
19 February 2025 10:26am
Hello!
I could be wrong, but from looking at the source code of the ACI calculation in scikit-maad, it appears that it auto-sets j = length of the signal. If you want to break it down into e.g. 5 windows, you can split the spectrogram into 5 chunks along time and then sum the ACI in each window. This gives a result more similar to what you got using the other methods. What did you set as nbwindows for seewave?
import numpy as np
import maad

s, fs = maad.sound.load('test_data_ACI/LINE_2003-10-30_20_00_34.wav')
Sxx, tn, fn, ext = maad.sound.spectrogram(s, fs, mode='amplitude')
full_time = Sxx.shape[1]  # number of time samples in the spectrogram
j_count = 5  # number of chunks to break it into
window_size = int(np.floor(full_time / j_count))  # time samples per chunk
ACI_tot = 0
for i in range(j_count):
    _, _, ACI = maad.features.acoustic_complexity_index(Sxx[:, i * window_size : i * window_size + window_size])
    ACI_tot = ACI_tot + int(ACI)
This gives ACI_tot = 1516
6 March 2025 11:07am
Hi all,
I have recently been utilising the ROI module of scikit-maad to locate non-biophonic sounds across low-sample rate Arctic hydrophone datasets and have a query about how ROI centroids are calculated...
Looking at the source code for the function "centroid_features" in .\maad\features\shape.py, I can see that the function calls "center_of_mass" from .\scipy\ndimage\_measurements.py. This to me suggests that the centroid should be placed where energy is focussed, i.e. the middle of the acoustic signature captured by the masking stage of ROI computations.
I'm a bit confused as to why the centroids I have appear to be more or less placed in the centre of the computed ROIs, regardless of the energy distribution within the ROI. The sounds I'm capturing have energy focussed towards lower frequencies of the ROI bands, so I would have expected the centroid to shift downwards as well.
Has anyone modified how ROI centroids are defined in their work? I'd be quite interested to set up centroids to signify where the peak energy level lies in the ROI, but I'm not quite sure how to do this cleanly.
Any advice would be greatly appreciated, thanks!
Kind regards,
Jonathan
11 April 2025 10:08pm
We are pleased to announce the latest release, with several important enhancements, fixes, and documentation improvements to ensure compatibility with the latest versions of SciPy and scikit-image, as well as with Xeno-Canto.
In this new version, two new alpha indices are implemented, aROI and nROI, the latter being a good proxy for the average species richness per 1-minute soundscape.
Releases · scikit-maad/scikit-maad · GitHub
Open-source and modular toolbox for quantitative soundscape analysis in Python - Releases · scikit-maad/scikit-maad
Field-Ready Bioacoustics System in Field Testing
2 April 2025 10:38am
3 April 2025 3:38pm
Hi Carly,
Thanks so much for your thoughtful message—and for introducing me to Freaklabs! BoomBox looks awesome, and it’s exciting to see how closely our goals align. There’s definitely potential for collaboration, and I’d be happy to chat more. Their system is super efficient and I think both of our systems have a place in this space.
Affordability and reliability were key considerations when I started building Orpheus. I wanted to create something rugged enough to survive in the field year-round while still being accessible for conservationists with limited budgets. The full-featured unit is €1500, and the basic model is €800. That pricing reflects both the hardware and the considerable time I’ve spent writing and refining the system—it’s all about balancing performance, durability, and keeping it sustainable for the long term.
Even the base unit is more than just a playback device. It logs every playback event, duration, and species, with enough onboard storage for two years of data, and it automatically converts the logs to line protocol for easy integration into platforms like InfluxDB.
On top of that, Orpheus actively logs and graphs temperature, humidity, atmospheric pressure, and battery voltage. During deep sleep, it interpolates the environmental data to preserve meaningful trends without wasting energy. You can view any of these on its 5" touch screen or in the cross-platform app that will support both Android and iOS once I'm done programming it.
As for audio specs:
- Recording is supported up to 96kHz,
- Playback is full 24-bit, both MP3 and WAV formats
- The system currently supports recording audio clips, reviewing them, and even adding those clips directly to playlists on the device.
That said, for bat research, I know ultrasonic capability is essential. While the current hardware doesn’t capture over 100kHz, I’ve already done the research and identified alternative audio interfaces that would support that range. If that’s a need researchers are interested in, I’d be open to building out a dedicated version to meet those requirements.
Power-wise, it runs indefinitely on solar, even under partly cloudy conditions. It uses a LiFePO₄ battery, and depending on usage, it can operate for up to two weeks on battery alone. It also supports external power from 12V or 24V systems, and solar input from 12V to 70V, so it's pretty adaptable to various field setups. It can also operate from -5 to 70°C (still testing that), but the hardware should be capable according to specs. You're correct though that in places like the rainforest that could be challenging, and an alternative would need to be considered.
The software is written modularly to allow for expansion based on user needs. For instance, I’ve already integrated support for a rain sensor that can pause playback if the user chooses that, and could easily include PIR, microwave, or other sensors for more specialized triggers.
Regarding durability, I’m currently testing mesh cable sheathing to deter rodents and other wildlife from chewing the wires—this was a concern raised by one of the teams using Orpheus, and I’m designing around it.
Also, Orpheus includes a seasonal scheduling engine—you can define your own seasons (like Migration, Breeding, etc.) and assign unique playback playlists to each. The device uses astronomical data (sunrise/sunset) based on your provided lat/lon and time zone, and automatically adjusts timing offsets like “1 hour before sunrise.” The goal is truly fire-and-forget deployment.
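The timing-offset logic described here can be sketched as follows. This is not Orpheus's code: the hard-coded sunrise is a stand-in for what an astronomy library would compute from the provided lat/lon and time zone.

```python
from datetime import datetime, timedelta

# Sketch of an "offset from sunrise" scheduler like the one described above.
# get_sunrise() is a stand-in: a real device would compute it from lat/lon
# and time zone with an astronomy library (e.g. astral).
def get_sunrise(day: datetime) -> datetime:
    return day.replace(hour=6, minute=12, second=0, microsecond=0)  # assumed value

def playback_start(day: datetime, offset_minutes: int) -> datetime:
    """Negative offsets mean 'before sunrise', e.g. -60 = 1 hour before."""
    return get_sunrise(day) + timedelta(minutes=offset_minutes)

start = playback_start(datetime(2025, 4, 1), -60)
print(start.strftime("%H:%M"))  # 05:12
```

Recomputing the offset daily from astronomical data is what makes a "1 hour before sunrise" rule track the seasons without user intervention.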
I'm open to adding any features or sensors that might be useful within reason.
I’m curious though, what specs would make a recording device for bats an indispensable tool? What features don’t already exist on the market that should?
Warm regards,
Travis
5 April 2025 4:03pm
I love the look of the system! We almost called our new sensor Orpheus, but decided against it as there is already a microphone named that! I'd love to see a bit more about the technical implementation! Is this running off of a CM5 or something different?
6 April 2025 1:45pm
Hi Ryan, hmm, I had no idea there was a microphone named that. I thought about how it's used to lure birds for netting, and I like Greek mythology, so I thought it was a perfect fit, but I may have to change the name now. I considered using a CM, but I wanted the system to be as efficient as possible. I am using an RPi Zero 2 W with eMMC. To keep the UI responsive I used some backend tricks like thread pooling. It works well and resource usage stays in check. The challenging part is ensuring thread handling is done gracefully and carefully to prevent race conditions. What sort of sensor have you been developing?
Nature Tech for Biodiversity Sector Map launched!
1 April 2025 1:41pm
4 April 2025 1:57pm
Data entry for shorebird nesting
31 March 2025 5:41pm
31 March 2025 5:59pm
Hi Rolando,
ODK or KoboToolBox are fairly easy to use and manage for creating forms that can be deployed on tablets or phones, at least based on my experience from two years ago.
Best,
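For anyone picking these tools up: both ODK and KoboToolbox read XLSForm definitions, where each row of the survey sheet is one question. A minimal nest-survey sketch (the field names are invented for illustration):

```
survey sheet:
type              | name     | label
text              | observer | Observer name
date              | visit    | Visit date
geopoint          | nest     | Nest location
integer           | eggs     | Number of eggs
select_one status | status   | Nest status

choices sheet:
list_name | name      | label
status    | active    | Active
status    | predated  | Predated
status    | abandoned | Abandoned
```

The geopoint type captures coordinates from the device GPS, which is what saves most of the manual transcription in the field.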
31 March 2025 7:02pm
Thank you so very much, Adrien! Believe me, your info will cut my field sampling by more than half. I really appreciate it!
SCGIS International Conference: Geospatial Technology Innovations for Conservation
27 March 2025 6:35pm
Generative AI for simulating landscapes before and after restoration activities
26 March 2025 1:59pm
26 March 2025 7:50pm
Yep we are working on it
LinkedIn post by Olivier Rovellotti (11 comments):
🌿 When Technology Becomes the Landscape Architect's Ally 🌍 🎨 Imagine & Generate: thanks to generative AI (Stable Diffusion, Segment Anything), we can imagine a greener #Berlin and test different development scenarios. A new way of exploring what's possible, drawing on the principles of the greats (McHarg, Clément, Burle Marx). 🌳💡 https://lnkd.in/g-uM7d-k 📚 Learn: platforms like NBS EduWORLD make nature-based solutions more accessible to everyone. https://lnkd.in/gnBTkyN5 🔎 Map & Anticipate: tools like ecoTeka make it possible to identify areas to renature by cross-referencing GIS data and AI. Mapping, tree monitoring, calculating ecosystem services… https://lnkd.in/daTBxbwn 🔗 Reflect: in "Harnessing generative AI to support nature-based solutions", Sandra Lavorel et al. give us avenues to explore further https://lnkd.in/gSH6Au9s 💬 What about you? 🌱🤖 #IA #Paysage #NatureTech #SolutionsFondéesSurLaNature #GreenTech #Biodiversité #NBSEduWorld #FosterTheFuture #TeachFromNature #NatureBasedSolutions #ClimateChange #STEM
1/ segment
2/ remove unwanted ecosystem
3/ get local potential habitat
4/ generate
5/ add to picture
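Step 5 ("add to picture") can be sketched as mask-based compositing. The toy arrays below stand in for a Segment Anything mask and an inpainting model's output; this is an illustration, not the pipeline's actual code:

```python
import numpy as np

# Sketch of step 5: composite an AI-generated patch back into the original
# image using the segmentation mask from step 1. Toy arrays only; a real
# pipeline would get the mask from Segment Anything and the patch from an
# inpainting model such as Stable Diffusion.
def composite(original: np.ndarray, generated: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Blend generated pixels where mask==1, keep the original elsewhere."""
    mask3 = mask[..., None].astype(original.dtype)  # broadcast over RGB channels
    return original * (1 - mask3) + generated * mask3

original = np.zeros((2, 2, 3))   # "photo" (all black)
generated = np.ones((2, 2, 3))   # "generated greenery" (all white)
mask = np.array([[1, 0], [0, 1]])
out = composite(original, generated, mask)
print(out[0, 0, 0], out[0, 1, 0])  # 1.0 0.0
```

Feathering the mask edges (e.g. with a Gaussian blur) before blending usually gives a more natural seam.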
United Nations Open Source Principles
13 March 2025 4:13pm
25 March 2025 11:54am
All sound, would be nice if there were only 5, though!
Quality Assurance and Testing Recap
18 March 2025 5:50pm
18 March 2025 9:57pm
Great post! A very important topic. The one I would add is the importance of version control. Too much confusion comes when software versions are mismanaged. We have had plenty of issues in the past when project partners just make changes without any warning or notes! It makes testing a nightmare, wasting time and degrading quality.
Modern GIS: Moving from Desktop to Cloud
12 March 2025 7:10pm
What are open source solutions anyway?
1 November 2024 2:21pm
11 December 2024 12:34pm
Open source technologies are a game-changer for biodiversity conservation. They give us the freedom to use, study, modify, and share vital tools and knowledge that help advance research in meaningful ways. For conservationists, this means we can adapt technologies to meet local needs, improve existing tools, and make new innovations available to everyone—creating a more collaborative and sustainable future for our planet.
It’s exciting to see the impact of open source in conservation already, with tools like Mothbox, Fieldkit, and OpenCTD helping to drive progress. I'm curious—how do the formal definitions of open source resonate with you? How do they shape the way we approach conservation?
Also, if you're interested in how open source AI can support conservation efforts, check out this article: Open Source AI Agents: How to Use Them and Best Examples.
Can’t wait to hear your thoughts! Let's keep the conversation going.
11 December 2024 9:04pm
Sorry to be a stickler on syntax when there is a richer discussion about community here - but I believe a true "open source" project is a functionally complete reference design that anyone can build upon with no strings attached. If the community isn’t provided with enough information to fully build and iterate on the design independently, then the project doesn’t truly meet the spirit of open source.
As a developer and engineer, I’ve observed that sometimes projects crowdsource free engineering work under the guise of being "open source." While this can have benefits, it can feel like asking for a free lunch from clients and customers.
Advanced features, like enterprise-level data management or tools for large-scale deployments, can reasonably remain proprietary to sustain the project financially. Transparency is critical here. If the foundational components aren't fully open, it would be more accurate to describe the project as "community-driven" or "partially open." And as an engineer/developer, I then wouldn't be angry when I went to explore a project marked "open source" only to find that I had been lied to.
Just my two cents, and I really appreciate the thoughtful discussion here. The open source community has been a massive influence on me; everything I do at work would not be possible without it. In many ways, "open source" or "public domain" projects represent the true know-how of our society.
4 March 2025 3:27pm
Thanks again for the interesting discussion everyone!
Just a note that while I touched on it in my opening post above, there were still questions in this thread about what open source tech means. I tried to address that in my new thread here:
Definitions for open source software & hardware and why they're important
27 February 2025 10:58am
28 February 2025 5:26pm
Thanks for this excellent and thought-provoking post, Pen. I agree this isn't a simple binary yes/no issue; there is a spectrum. There could also be philosophical nuances. For example, does excluding honey from a vegan diet meet the ethical criteria of veganism? It's an animal product, so yes, but beekeeping generally doesn't have the same exploitative potential as cow, sheep, or pig husbandry, right? However, looking strictly at the definition, honey is out if you want to be vegan.
Back to software! Isn’t the main issue that companies falsely claim to offer open source hardware/software? To avoid this, do you then have to create an accreditation system? Who polices it? Is it fair? Would users care that their software has the accredited open source stamp of approval? Ultimately, we need definitions to define boundaries and speak a common language.
4 March 2025 3:21pm
Thanks @VAR1 great insights! Funny you mentioned the honey thing, @hikinghack said the same in response on the GOSH forum.
I think the point I'm trying to make with the vegan comparison is that while the definition might not be 100% watertight, it is close enough for us to have productive conversations without running in circles because we can't even agree on what we're talking about.
As for open source tech, there actually is accreditation for open source hardware (at least of a sort). The Open Source Hardware Association has a fairly mature certificate program:
OSHWA Certification
Certification provides an easy and straightforward way for producers to indicate that their products meet a well-defined standard for open-source compliance.
I am genuinely undecided on whether such a formal accreditation system is needed for open source software. My indecision comes back to the food/agriculture analogy, where a similar issue exists with organic certification. Being certified organic can be beneficial in some cases, but certification can also be very onerous for small organic farmers who can't afford it.
But before we even think about accreditation, I echo your last sentence that we need definitions to define boundaries. These definitions, as I argue in my original post above, are not only about principles and philosophy; they are also a practical necessity for enabling effective communication!
How much does it cost to incorporate Machine Learning into your drone GIS analysis process?
28 February 2025 8:44am
Tutorial: Synchronizing Video Resources with Accelerometer Data
10 February 2025 12:06pm
13 February 2025 2:08pm
Short update: the latest version 13.0.9 of Firetail is now available from https://www.firetail.de
AddaxAI - Free AI models for camera trap photo identification
3 April 2024 7:16am
12 February 2025 2:44pm
Yes! The plan is definitely there :) But also there are some other models I want to add. It's just a matter of finding enough time to do the work ;)
12 February 2025 2:47pm
Hi Caroline @Karuu ,
The model is still in development. Unfortunately, I'm not sure how long it will take as it is not my top priority at the moment. However, you can still use EcoAssist to filter out the empty images, which is generally already a huge help.
Would that work for the time being?
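(For anyone who wants to script this step themselves: EcoAssist builds on MegaDetector, and detection results in that ecosystem are typically stored as a JSON file. A minimal sketch, assuming a MegaDetector-style results format with `"images"`, `"file"`, `"detections"`, and `"conf"` fields; the threshold value is illustrative, not a recommendation.)

```python
import json
import shutil
from pathlib import Path

def split_empty_images(results_json, src_dir, empty_dir, conf_threshold=0.2):
    """Move images with no detection above conf_threshold into empty_dir.

    Assumes a MegaDetector-style results file:
    {"images": [{"file": ..., "detections": [{"conf": ...}, ...]}, ...]}
    """
    with open(results_json) as f:
        results = json.load(f)

    Path(empty_dir).mkdir(parents=True, exist_ok=True)
    moved = 0
    for image in results["images"]:
        detections = image.get("detections", [])
        # An image is "empty" if no detection clears the confidence bar
        if not any(d["conf"] >= conf_threshold for d in detections):
            src = Path(src_dir) / image["file"]
            if src.exists():
                shutil.move(str(src), str(Path(empty_dir) / src.name))
                moved += 1
    return moved
```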
12 February 2025 2:55pm
Hi Lucie @luciegallegos ,
Great to see ecoSecrets, and happy to collaborate in any way I can! All of EcoAssist's models are open source, as is the inference code. As for planning a meeting, would you mind reaching out to me by email? Then we can plan a video call to discuss this further :)
Camera Trap Data Visualization Open Question
4 February 2025 3:00pm
12 February 2025 12:31pm
Hey Ed!
Great to see you here and thanks a lot for your thorough answer.
We will be checking out Trapper for sure - cc @Jeremy_ ! A standardized data exchange format like Camtrap DP makes a lot of sense, and we'll keep it in mind as we build the first prototypes.
Our main requirements are the following:
- Integrate with the camtrap ecosystem (via standardized data formats)
- Make it easy to run for non-technical users (most likely an Electron application that works across OSes).
- Make it useful to explore camtrap data and generate reports
In this first prototyping stage, it's useful for us to keep things lean while keeping the interface (the data exchange format) in mind, so that we can move fast.
Regards,
Arthur
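(For readers new to Camtrap DP: it is a Frictionless Data Package, i.e., a `datapackage.json` descriptor plus CSV resources such as deployments, media, and observations. A minimal stdlib-only sketch of loading such a package; resource names follow the Camtrap DP layout, and validation/error handling is omitted.)

```python
import csv
import json
from pathlib import Path

def load_camtrap_dp(package_dir):
    """Return {resource_name: list of row dicts} for a Camtrap DP package.

    Reads the datapackage.json descriptor and loads each CSV resource
    it lists (e.g. deployments, media, observations).
    """
    descriptor = json.loads((Path(package_dir) / "datapackage.json").read_text())
    tables = {}
    for resource in descriptor["resources"]:
        with open(Path(package_dir) / resource["path"], newline="") as f:
            tables[resource["name"]] = list(csv.DictReader(f))
    return tables
```

In practice you'd likely reach for the `frictionless` or `camtrap-dp` tooling instead, but the sketch shows how little structure a prototype needs to interoperate with the standard.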
12 February 2025 1:36pm
A quick question on this topic, to take advantage of those who already know a lot about it. Once you've extracted all your camera data and run the AI object detection phase that identifies the animal types, which file format for holding all the time + location + label data for the photos do most people consider the most useful? I'm imagining it's whatever format the most expressive visualization software around supports. Is that correct?
A quick look at the Trapper format suggested to me that it covers metadata from the camera traps themselves, i.e., the input to the AI matching phase. But it was a quick look; maybe it's something else? Is the Trapper format also meant for holding the labelled results? (I might actually be asking the same question as the person who started this thread, just in different words.)
12 February 2025 2:04pm
Another question. Right now, pretty much all camera traps trigger on either PIR sensors or small AI models. Small AI models tend to detect and recognise animal types accurately only at close range, where the animal appears very large in the frame, and I have doubts about whether small models avoid making a lot of classification errors even then (I expect they do make them, and the errors are simply sorted out back at the office, so to speak). PIR sensors typically only see animals within roughly 6-10 m; maybe an elephant could be detected a bit further, and small animals only closer.
But what about when camera traps can reliably see and recognise objects across a whole field, perhaps hundreds of meters?
Then, in principle, you don't have to deploy as many traps, for a start. But I would expect you'd need a different approach to reporting and visualizing the data, as the coordinates of the trap itself won't give you much information on their own. We could potentially end up with much more accurate and richer biodiversity information.
Maybe it's even possible to determine more accurately where several different animals in the same camera trap image are spatially located, by knowing the 3D layout of what the camera can see plus the position and apparent size of each animal.
I expect that current camera trap data formats may fall short of expressing that information in a sufficiently useful way, given the extra information available in principle; it could mean multiple coordinates per species in each image that need to be registered.
I'm likely going to be confronted with this soon, as the systems I build use state-of-the-art models with large parameter counts that can recognise species at much greater distances. In a recent discussion here, I showed detection of a polar bear at a distance of 130-150 m.
Right now, it's an unknown how much more information about species we'll be able to gather with this approach, since images were never triggered this way before. Maybe it's far more than we expect? We have no idea right now.
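(To make the spatial-location idea concrete, here is a back-of-the-envelope pinhole-camera sketch: given an assumed real-world animal height and the camera's focal length expressed in pixels, distance falls out of similar triangles. All numbers below are illustrative, not taken from any particular camera or the polar bear detection mentioned above.)

```python
def estimate_distance_m(real_height_m, bbox_height_px, focal_length_px):
    """Pinhole-camera approximation: distance = f * H / h.

    real_height_m   -- assumed true height of the animal (metres)
    bbox_height_px  -- height of its bounding box in the image (pixels)
    focal_length_px -- focal length expressed in pixels
    """
    return focal_length_px * real_height_m / bbox_height_px

# e.g. an animal ~1.5 m tall appearing 30 px high, with a ~2800 px
# focal length, works out to roughly 140 m away.
```

This ignores terrain, lens distortion, and the uncertainty in the assumed animal size, so it only gives a rough range band; but combined with the camera's heading it would already let a data format record a per-animal coordinate rather than just the trap's location.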
Nature Tech intern
Machine learning for bird pollination syndromes
25 November 2024 7:30am
3 January 2025 3:55am
Hi @craigg, my background is machine learning and deep neural networks, and I'm also actively involved with developing global geospatial ecological models, which I believe could be very useful for your PhD studies.
First, regarding your direct challenges: I think there will be many different approaches, which could serve your interests to varying degrees.
As one idea that came up, I think it will be possible in the coming months, through a collaboration, to "fine-tune" a general purpose "foundation model" for ecology that I'm developing with University of Florida and Stanford University researchers. More here.
You may also find the 1+ million plant trait inferences, searchable by native plant habitat, at Ecodash.ai useful. A collaborator at Stanford is actually from South Africa, and I was just about to send him this, e.g. https://ecodash.ai/geo/za/06/johannesburg
I'm happy to chat about this, just reach out! I think there could also be a big publication in Nature (or something nice) by mid-2025, with dozens of researchers demonstrating a large number of applications of the general AI techniques I linked to above.
6 February 2025 9:57am
We are putting together a special issue in the journal Ostrich: Journal of African Ornithology and are welcoming (review) papers on the use of AI in bird research. https://www.nisc.co.za/news/202/journals/call-for-papers-special-issue-on-ai-and-ornithology
SDZWA Conservation Tech Summer Fellowship
5 February 2025 9:09pm
Online platform for beta testing?
5 February 2025 4:20am
Hiring a Senior Software Developer
5 February 2025 12:35am
Technical Assistant (m/f/d) | Moveapps
4 February 2025 8:32am
19 May 2025 5:30am
Sounds like a great initiative—looking forward to it! I’d love to hear more about your real-world test automation setup, especially any tools or frameworks you’ve found effective at WPS. It’d also be helpful to see how QA fits into your dev workflow and any challenges you’ve faced specific to conservation tech. I just filled out the poll and can’t wait to see what topics get chosen. Thanks, Alex and team, for organizing this!