Song of The Cricket
4 May 2025 11:01am
Maps, Education and Community
21 May 2025 5:46pm
Feedback on PCB for Mothbox
2 May 2025 8:50pm
3 May 2025 7:45am
Nice to see what you are doing here. Those converters are handy; I'm using them with my portable thermal cameras.
Do you have solar power intentions here? If so, maybe you need a way to detect when the battery is almost out of power?
3 May 2025 2:22pm
The Mothbox can currently be attached to a solar panel super easily (just plug in a barrel jack, up to 20 V / 80 W) and it charges the TalentCell battery. We also monitor the power with an Adafruit INA260, which can tell if the voltage is getting low. Ideally, if we get enough time, that will get built into the PCB too!
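For anyone curious, reading the INA260 from the Pi is only a few lines. A minimal sketch assuming the adafruit-circuitpython-ina260 library (the 11.0 V cutoff is just an illustrative threshold, not the Mothbox's actual logic):

```python
import board
import adafruit_ina260

i2c = board.I2C()
sensor = adafruit_ina260.INA260(i2c)

# Bus voltage (V), current (mA), and power (mW) from the sensor.
print(f"{sensor.voltage:.2f} V, {sensor.current:.0f} mA, {sensor.power:.0f} mW")

# Illustrative low-battery check for a nominal 12 V pack.
if sensor.voltage < 11.0:
    print("Battery low -- time to shut down gracefully.")
```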
Post-Doctoral Research Fellow/ Project Manager, University of St. Andrews, UK
2 May 2025 4:31pm
'Boring Fund' Workshop: AI for Biodiveristy Monitoring in the Andes
5 February 2025 5:55pm
8 February 2025 4:29pm
Hey @benweinstein , this is really great. I bet there are better ways to find bofedales (puna fens) currently than what existed back in 2010. I'll share this with the Audubon Americas team.
2 May 2025 2:59pm
Hi everyone, following up here with a summary of our workshop!
The AI for Biodiversity Monitoring workshop brought together twenty-five participants to explore uses of machine learning for ecological monitoring. Sponsored by the WILDLABS ‘Boring Fund’, we were able to support travel and lodging for a four-day workshop at the University of Antioquia in Medellín, Colombia. The goal was to bring together ecologists interested in AI tools and data scientists interested in working on AI applications from Colombia and Ecuador. Participants were selected based on the potential impact on their community, their readiness to contribute to the topic, and a broad category of representation, which balanced geographic origin, business versus academic experience, and career progression.
Before the workshop began, I developed a website on GitHub that laid out the aims of the workshop and provided a public focal point for uploading information. I made a number of technical videos, covering subjects like VS Code + Copilot, both to inform participants and to create an atmosphere of early and easy communication. The WhatsApp group, the YouTube channel (link) of video introductions, and a steady drumbeat of short tutorial videos were key in establishing expectations for the workshop.
The workshop material was structured around data collection methods: Day 1) Introduction and Project Organization, Day 2) Camera Traps, Day 3) Bioacoustics, and Day 4) Airborne Data. Each day I asked participants to install packages using conda, download code from GitHub, and be active in supporting each other in solving small technical problems. The large range of technical experience was key in developing peer support. I toyed with the idea of creating a JupyterHub or joint cloud workspace, but I am glad that I resisted; it is important for participants to see how to solve package conflicts and the myriad other installation challenges on 25 different laptops.
We banked some early wins to help ease intimidation and create a good flow to the technical training. I started with GitHub and version control because it is broadly applicable, incredibly useful, and satisfying to learn. Using examples from my own work, I focused on GitHub as a way both to contribute to machine learning for biology and to receive help. Building from these command-line tools, we explored VS Code + Copilot for automated code completion, and had a lively discussion on how to balance the utility of these new features with transparency and comprehension.
Days two, three and four flew by, with a general theme of existing foundation models: BirdNET for bioacoustics, MegaDetector for camera traps, and DeepForest for airborne observation. A short presentation each morning was followed by a worked Python example making predictions on new data, annotation using Label Studio, and model development with PyTorch Lightning. There is a temptation to develop Jupyter notebooks that outline perfect code step by step, but I prefer a live-coding strategy that lets participants work through errors. All materials are in Spanish and updated on the website. I was proud to see the level of joint support among participants, and tried to highlight these contributions to promote autonomy and peer teaching.
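To give a flavor of the worked examples: prediction with one of these models fits in a few lines. Here is a minimal sketch using the deepforest package (the image path is a placeholder):

```python
from deepforest import main

# Load the prebuilt DeepForest release model (an RGB tree-crown detector).
model = main.deepforest()
model.use_release()

# Predict bounding boxes on a new image; returns a pandas DataFrame
# with xmin, ymin, xmax, ymax, label, and score columns.
boxes = model.predict_image(path="example_airborne_tile.png")
print(boxes.head())
```

From there, the same pattern extends to annotating mistakes in Label Studio and fine-tuning with PyTorch Lightning.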
Sprinkled amongst the technical sessions, I had each participant create a two-slide talk, and I would randomly select speakers from the group to break up sessions and help stir conversation. I took it as a good sign that participants often quietly pressured me to select their talk in the next random draw. While we had general technical goals and each day had one or two main lectures, I tried to be nimble, allowing space for suggestions. In response to feedback, we rerouted an afternoon to discuss biodiversity monitoring goals and data sources. Ironically, the biologists in the room later suggested that we needed to get back to code, while the data scientists said it was great. Weaving between technical and domain expertise requires an openness to change.
Boiling down my takeaways from this effort, I think there are three broad lessons for future workshops.
- The group dynamic is everything. Provide multiple avenues for participants to communicate with each other. We benefited from a smaller group of dedicated participants compared to inviting a larger number.
- Keep the objectives, number of packages, and size of sample datasets to a minimum.
- Foster peer learning and community development. Give time for everyone to speak. Step in aggressively as the arbiter of the schedule in order to allow all participants a space to contribute.
I am grateful to everyone who contributed to this effort, both before and during the event, to make it a success. Particular thanks go to Dr. Juan Parra for hosting us at the University of Antioquia, UF staff for booking travel, Dr. Ethan White for his support and mentorship, and Emily Jack-Scott for her feedback on developing course materials. Credit for the ideas behind this workshop goes to Dr. Boris Tinoco, Dr. Sara Beery for her efforts at CV4Ecology, and Dr. Juan Sebastian Ulloa. My co-instructors Dr. Jose Ruiz and Santiago Guzman were fantastic, and I’d like to thank ARM through the WILDLABS Boring Fund for its generous support.
A standard for bioacoustic data - Safe and Sound
2 May 2025 9:51am
AI Edge Compute Based Wildlife Detection
23 February 2025 5:24am
29 April 2025 3:20pm
Sorry, I meant ONE hundred million parameters.
The Jetson Orin NX has ~25 TOPS of FP16 performance, and the large YOLOv6 model processing 1280x1280 input requires about 673.4 GFLOPs per inference. You should therefore theoretically get ~37 fps; you're unlikely to get this exact number, but you should get around that...
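The back-of-envelope math, as a quick sketch (it assumes perfect hardware utilization, which no real pipeline achieves):

```python
peak_ops_per_s = 25e12          # Jetson Orin NX, ~25 TOPS FP16
flops_per_inference = 673.4e9   # YOLOv6-L at 1280x1280 input
print(peak_ops_per_s / flops_per_inference)  # ~37 theoretical fps
```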
Also, later YOLO models (v7+) are much more efficient (they use fewer FLOPs for the same mAP50-95) and run faster.
Most neural-network inference-only accelerators (like Hailo's) use INT8 models, and depending on your use case, the resulting drop in performance may be acceptable.
29 April 2025 3:34pm
Ah I see, thanks for clarifying.
BTW, YOLOv7 actually came out earlier than YOLOv6. YOLOv6 has higher precision and recall figures, and I noticed that in practice it was slightly better.
My suspicion is that it's not trivial to translate the layer functions from YOLOv6 or YOLOv9 to Hailo-specific ones without affecting quality in unknown ways. If you manage to do it, do tell :)
The acceptability of a drop in performance depends heavily on the use case. In security, if I get woken up twice a night versus once in six months, I don't care how fast it is; it's not acceptable for that use case for me.
I would imagine that for many traps in the wild as well, a false positive would mean having to travel out and reset the trap.
But as I haven't personally dropped quantization to 8 bits, I appreciate other people's experience on the subject. Thanks for your insights.
1 May 2025 7:32pm
@LukeD, I am looping in @Kamalama997 from the TRAPPER team who is working on porting MegaDetector and other models to RPi with the AI HAT+. Kamil will have more specific questions.
Animal Detect is live
30 April 2025 10:05am
1 May 2025 4:38pm
Super happy to finally have Animal Detect ready for people to use. We are open to any feedback and hope to bring more convenient tools :)
Sustainable financing for open source conservation tech - Open Source Solutions + Funding and Finance Community Meeting

1 May 2025 11:52am
OpenAgTech newsletter: Bringing you the latest in open-source tech for agriculture.
1 May 2025 9:24am
Open Source Agriculture Repository
1 May 2025 9:18am
Detecting animals' heading and body orientation
2 April 2025 10:53am
23 April 2025 8:08pm
Thank you Phil,
That sounds as if it might work (but probably with a turn trigger of around 45 degrees), and the baboon collar is well within the weight limit. Where can I find more details about the collar?
Peter
25 April 2025 10:50am
Hi Peter, just tell me exactly what you are looking for. I have commissioned these collars from the engineer who originally made my Virtual Fence back in 2016 (still working). The aim is to have a long life while also taking regular readings (5-10 min) so that animals cannot invade croplands or villages without being detected before they can do any damage. We have tried to include all possible features that will be useful, while still maintaining low weight and simplicity. Hence no solar, and no antennae outside the housing.
Cheers, Phil
1 May 2025 8:50am
Thanks Phil - I have e-mailed you.
Peter
CMS Survey on Ecological Connectivity and Infrastructure
30 April 2025 10:03pm
We are releasing SpeciesNet
3 March 2025 4:48pm
28 April 2025 12:30pm
This is great news!
I am using rather high resolution images and have just ordered some 4K (8MP) camera traps.
The standard MegaDetector run via Addax AI is struggling a bit with detecting animals that are relatively small frame-wise, even though they cover quite a number of pixels. This naturally follows from the image resizing in MegaDetector.
I have noticed:
MegaDetector/megadetector/detection/run_tiled_inference.py at 472460d7da7de84027282841b5b775664a4305ed · agentmorris/MegaDetector · GitHub
but this seems not readily available in Addax AI. Is it somehow supported in SpeciesNet?
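For context, tiled inference boils down to running the detector on overlapping crops so small animals keep their pixels. A generic sketch, not MegaDetector's actual implementation (detections then need mapping back to full-image coordinates and de-duplication across the overlaps):

```python
from PIL import Image

def make_tiles(path, tile=1280, overlap=256):
    # Split a large image into overlapping tiles so small animals
    # keep enough pixels when each tile is resized by the detector.
    im = Image.open(path)
    w, h = im.size
    step = tile - overlap
    tiles = []
    for top in range(0, max(h - overlap, 1), step):
        for left in range(0, max(w - overlap, 1), step):
            box = (left, top, min(left + tile, w), min(top + tile, h))
            tiles.append((box, im.crop(box)))
    return tiles
```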
Cheers,
Lars
30 April 2025 11:29am
This scenario is not supported by SpeciesNet, but if your species are well supported in its training data, maybe we can work out a custom setup. Can you share what species you're seeing/expecting as "small animals"?
30 April 2025 7:00pm
Hi Ștefan!
In my current case, I am trying to detect and count Arctic fox pups. Unfortunately, Arctic fox does not seem to be included in the training data of SpeciesNet but even if it was, pups look quite different from adults.
After a quick correspondence with Dan Morris and Peter van Lunteren on the Addax AI GitHub, I was made aware of the image size option of MegaDetector. It seems to help somewhat to run the detection at full resolution (in my case up to 1920x1080). I have the impression that I get more good detections, and also fewer false detections (even without repeat_detection_elimination), by using higher resolution.
Dan offered to have a look at my specific challenge so I sent him 10K+ images with fox pups.
Remote Bat detector
8 April 2025 3:25pm
12 April 2025 10:16am
You will likely need an edge-ML setup where a model runs in real time and then just sends detections. The BirdWeather PUC and Haikubox do this, running BirdNET on the edge and sending results over Bluetooth/WiFi/SMS, but you'd have to have these networks already set up in the places you want to deploy a device, which limits deployments in many areas where there is no connectivity.
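If it helps, the on-device detection side is not much code. A minimal sketch using the birdnetlib wrapper around BirdNET (the file path is a placeholder; a real deployment would loop over each new recording):

```python
from birdnetlib import Recording
from birdnetlib.analyzer import Analyzer

# Load the BirdNET-Analyzer model once at startup.
analyzer = Analyzer()

# Analyze one clip and keep only confident detections.
recording = Recording(analyzer, "latest_clip.wav", min_conf=0.5)
recording.analyze()

# Each detection carries species, confidence, and start/end times --
# small enough to transmit as text instead of shipping raw audio.
for detection in recording.detections:
    print(detection)
```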
13 April 2025 3:13pm
So we are building some Bugg devices on licence from Imperial College, and I am making a mod to swap the mic out for a version of our Demeter microphone. This should mean we can use it with something like BattyBirdNET, as it's powered by a CM4. Happy to have a chat if you are interested. Otherwise it will likely need a custom solution, which can be quite expensive!
30 April 2025 2:43pm
There are a lot of parameters in principle here: the size of the battery; how much time in the field is acceptable before a visit (once a week? once a month?); how many devices you need; how small the device and battery have to be; etc. I use VPNs over 4G USB sticks.
I ask because in principle I've built devices that can retrieve files remotely and record ultrasonic audio, though the microphones I tested with (Pettersson) are around 300 euros in price. I've been able to record ultrasonic frequencies, and I believe it will be possible to do TDOA sound localization with the output files if the same sound can be heard on 4 recorders.
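For anyone exploring the TDOA idea, the core delay estimate between two time-synchronized recordings is just a cross-correlation peak. A minimal sketch with NumPy/SciPy:

```python
import numpy as np
from scipy.signal import correlate

def estimate_tdoa(sig_a, sig_b, sample_rate):
    # Time difference of arrival between two synchronized recordings.
    # A positive result means the sound reached recorder B first.
    corr = correlate(sig_a, sig_b, mode="full")
    lag_samples = np.argmax(np.abs(corr)) - (len(sig_b) - 1)
    return lag_samples / sample_rate
```

With four or more recorders, the pairwise delays feed a multilateration solve for the source position.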
But we make commercial devices, and such a thing would be a custom build. It would be nice to know what the demand for such a device is, to know at which point it becomes interesting.
Connecting the Dots: Integrating Animal Movement Data into Global Conservation Frameworks

30 April 2025 1:38am
Fires in the Serengeti: Burn Severity & Remote Sensing with Earth Engine
29 April 2025 6:16pm
1 May 2025 11:44am
10 June 2025 5:39pm
Sticky Pi: A smart insect trap to study daily activity in the field
29 April 2025 3:25pm
Insect Detect: Build your own insect-detecting camera trap!
29 April 2025 3:21pm
ecoTECH YouTube webinar playlist by Biological Recording Company
29 April 2025 1:52pm
How are you affected by the USA cutbacks and tariffs ?
26 February 2025 8:01pm
29 April 2025 9:55am
Hej all, I updated my question to include discussion about the tariffs as well.
Reducing wind noise in AudioMoth recordings
23 June 2020 2:26am
28 April 2025 1:04pm
Just following up on this: we are suffering from excessive wind noise in our recordings. We have bought some dead cats, but our AudioMoths are in the latest Dev case (https://www.openacousticdevices.info/audiomoth).
In your collective experience, would anyone recommend trying to position the dead cat over the microphone on the AudioMoth itself, or covering the entry port into the device, from either the inside or the outside?
Cheers,
Tom
28 April 2025 10:23pm
As reported in this thread:
I have used Røde Dead Kitten wind jammers wrapped around the original AudioMoth cases.
28 April 2025 10:28pm
Hi Tom! I think the furry windjammer must be outside the casing to have the desired effect. It can be a bit tricky having this nice furry material that birds and other critters might be attracted to. It may be possible to make an outer "wire cage" to protect the windjammer. We once had to do this to protect a DIY AudioMoth case against foxes wanting to bite the case (no windjammer then). You may, however, create new wind noise from the cage itself... No one said it had to be simple!
Experience with AudioMoth Dev for Acoustic Monitoring & Localization?
3 April 2025 2:40pm
28 April 2025 1:41pm
Hi Walter,
Thanks for your reply! It looks like the experiments found very minor time offsets, which is encouraging. Could you clarify what you mean by a "similar field setup"?
In my project, I plan to monitor free-ranging animals — meaning moving subjects — over an area of several square kilometers, so the conditions won't be exactly similar to the experimental setup described.
Given that, would you recommend using any additional tools or strategies to improve synchronization or localization accuracy?
28 April 2025 1:55pm
Hi Ryan,
Thanks for your reply! I'm glad to hear that the AudioMoth Dev units are considered powerful.
Have you ever tried applying multilateration to recordings made with them? I would love to know how well they perform in that context.
On a more technical note, do you know if lithium batteries (such as 3.7V LiPo) can provide a reliable power supply for Dev units in high temperature environments (around 30–50°C)?
Thanks,
Dana
28 April 2025 8:03pm
Hi Dana,
"similar field setup" means that the vocalizing animal should be surrounded by the recorders and you should have at least 4 audiomoths recording the same sound, then the localization maths is easy (in the end it is a single line of code). With 3 recorders that receive the sound localization is still possible but a little bit more complicated. With 2 recorders you get only some directions (with lift-right ambiguity).
Given the range of movements, and assuming that you do not have a huge quantity of recorders to 'fence' the animals, I would approach the tracking slightly differently. I would place the AudioMoths in pairs, using a single GPS receiver powered by one recorder but with the PPS wire also connected to the other recorder. Both recorders, separated by up to 1 m, face the area of interest. For the analysis, I would then use each pair of recorders to estimate the angle to the animal. If you have the same sound at two locations, you will have 2 directions, which will give you the desired location. The timings at the different GPS locations may result in timing errors, but each direction is based on the same clock, so the GPS timing errors are not relevant anymore. If you add a second microphone to the AudioMoths you can improve the direction further. If you need more specific info or want to chat about details (not of general interest) you can PM me.
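The angle-from-a-pair step is similarly compact. A sketch, assuming the inter-recorder delay has already been measured by cross-correlation:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def pair_bearing_deg(tdoa_s, spacing_m):
    # Source bearing relative to the pair's broadside axis, from the
    # delay between the two recorders and their spacing. Intersecting
    # the bearings from two pairs gives the source position.
    return np.degrees(np.arcsin(np.clip(SPEED_OF_SOUND * tdoa_s / spacing_m, -1.0, 1.0)))

print(pair_bearing_deg(0.001, 1.0))  # ~20 degrees for a 1 ms delay on a 1 m pair
```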
No-code custom AI for camera trap images!
25 April 2025 8:33pm
28 April 2025 7:03am
When you process videos, do you not first break them down into a sequence of images and then process the images? I'm confused as to the distinction between processing videos versus images here.
28 April 2025 3:57pm
We do, but the way the models handle the images differs depending on whether they're coming from videos or static images. A quick example: videos provide movement information, which can be a way of distinguishing between species. We use an implementation of SlowFast for one of our video models that attempts to extract temporal information at different frequencies. If the model has some concept of "these images are time-sequenced," it can extract that movement information, whereas if it's a straight image model, that concept doesn't have a place to live. But a straight image model can use more of its capacity for learning e.g. fur patterns, so it can perform better on single images. We did some experimentation along these lines and did find that models trained specifically for images outperformed video models run on single images.
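For the curious, the dual-pathway idea shows up directly in how a SlowFast model is fed. A sketch using the public pytorchvideo implementation, not our production code:

```python
import torch

# Pretrained SlowFast from pytorchvideo (Kinetics-400 weights).
model = torch.hub.load("facebookresearch/pytorchvideo", "slowfast_r50", pretrained=True)
model.eval()

# The same clip is sampled twice: sparsely for the "slow" pathway and
# densely for the "fast" pathway, so the model sees motion at two rates.
slow_pathway = torch.randn(1, 3, 8, 256, 256)   # (batch, channels, frames, H, W)
fast_pathway = torch.randn(1, 3, 32, 256, 256)
with torch.no_grad():
    scores = model([slow_pathway, fast_pathway])  # class logits
```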
Hope that helps clear up the confusion. Happy to go on and on (and on)...
28 April 2025 5:58pm
Interesting. Thanks for the explanation. Nice to hear your passion showing through.
First battery-only test of the SBTS thermal smart camera system
16 March 2025 6:25pm
19 April 2025 3:16pm
Nice thermal camera study with the product you mention. I think it’s the first serious work with thermal AI models. Kudos to the developers.
You should be aware of the resolution difference between that unit and the one I present in this article. The one above is a FLIR Lepton with 160x120 pixel resolution, whereas the one I'm presenting is 640x512, roughly 17x more pixels. That's a lot of extra resolution.
The DOC AI unit above is cheaper than the unit I present, which also includes AI object detection.
I can offer a range of resolutions, currently 640x512, 384x288, and 1280x1024, as well as a large range of lenses, including zoomable lenses if you like, such as a 30-180mm thermal lens together with a 1280x1024 thermal sensor.
28 April 2025 4:58pm
Thanks for the insight Kim! It's awesome what you are doing! I am excited for any updates.
My colleagues, who are looking into getting a customised 2040 DOC AI thermal camera, need something with the battery life to be left in the field for weeks, due to the remoteness of the survey sites.
28 April 2025 5:56pm
Weeks of continuous inference would require a pretty big battery. I expect you would need some kind of customisation, and maybe quite a bit of compromise, to last weeks on a single battery. Good luck with that. Power management is challenging.
From Field to Funder: How to communicate impact?
16 April 2025 3:51pm
17 April 2025 5:04pm
Great questions @LeaOpenForests !
I don't have concrete answers, since I am not a stakeholder in any project in particular. Based on experience with research on the potential for a similar one-stop shop for science metrics, I would suggest that there is no simple solution: different actors do need and have different views on presenting and viewing impact. This means possible gaps between what one group of actors needs and what another is willing or able to produce. One can hope, search, and aim for sufficient overlap, but I don't see how the views would necessarily or naturally overlap.
Still, I would guess that if there are dimensions of overlap, they are time, space, and actor-networks.
28 April 2025 3:18pm
I have posted about this in a different group, but I love boosting the impact of my communication through use of visuals.
Free graphics relating to conservation technology and the environment are available at:
National Environmental Science Program Graphics Library
Graphics below of a feral cat with a tracking collar and a cat grooming trap are examples of symbols available courtesy of the NESP Resilient Landscapes Hub, nesplandscapes.edu.au.
UMCES Integration and Application Network Media Library

Overview of Image Analysis and Visualization from Camera traps
28 April 2025 8:09am
What is the best light for attracting moths?
17 October 2022 3:12pm
21 March 2025 12:21pm
I found this thread on iNaturalist really helpful when considering options! Lots of cost-effective setups to consider. I only really do mothing, so this is moth-specific, but perhaps helpful for other insects.
I love that folks also mentioned using an additional flashlight or outward facing light to draw in moths from farther away. I've tried that as well and it always seemed to boost the number of moths on my sheet.
For continuity, if the light goes away for more than a few seconds I feel like the spell is broken and they fly away. But this could be tested further. Curious if blinking makes a difference.
19 April 2025 12:59pm
For a cheaper, accessible option (Australia), I have found success with this setup recommended by the Entomological Society of Victoria for citizen-science moth light trapping:
- Light: Gecko 50W Insect Control Replacement UV Lamp, $34.13 AUD, widely available at Bunnings (hardware store).
- Plug: E27 Cable Cord With Switch AU Light Bulb Holder Socket, $15.00 AUD, available online at Dick Smith (home goods & electronics store).
28 April 2025 5:09am
Otherwise, Australian Entomology Supplies has a few options, including UV lights and black lights, such as this portable UV lamp. They can ship internationally.
Free graphics for conservation tech communications
27 April 2025 4:39pm
27 April 2025 9:03pm
Not directly conservation tech imagery, but we've used the open-to-contributions PhyloPic library and API on a few projects to get some cute and usable silhouettes based on taxonomies.
PhyloPic
PhyloPic is an open database of free silhouette images of animals, plants, and other life forms, available for reuse under Creative Commons licenses. Download silhouettes for use in educational materials, research articles, and other projects.
28 April 2025 4:36am
Thanks for this! That's great! On a slightly different note, Unsplash is one of the better high-quality stock image websites in terms of licenses, and most images are free to download. Always be cautious of any species IDs, though; I have found it's better to just take my own photos, even if just on my phone.
Shutterstock vector graphics are not free, but I have found them great value for money, especially if you have Adobe Illustrator or similar so you can customise the graphics. They have a great range of graphics as well. You can do a month-to-month subscription for $53 AUD for 10 images/graphics per month.
21 May 2025 2:39pm