Global Feed

There's always something new happening on WILDLABS. Keep up with the latest from across the community through the Global view, or toggle to My Feed to see curated content from groups you've joined. 

Header image: Laura Kloepper, Ph.D.

discussion

Feedback on PCB for Mothbox

Hi folks! In 4 days I head to Seeedstudio in Shenzhen where we are going to try to produce a manufacturable Mothbox PCB! The idea behind this PCB is that someone who wants a...

2 0

Nice seeing what you are doing here. Those converters are handy; I'm using them with my portable thermal cameras.


Do you have solar power intentions here? If so, maybe you need a way to detect when the battery is almost out of power?

The Mothbox can currently be attached to a solar panel super easily (just plug in a barrel jack, up to 20 V / 80 W) and it charges the TalentCell battery. We also monitor the power with an Adafruit INA260, which can tell if the voltage is getting low. Ideally, if we get enough time, that will get built into the PCB too!
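For anyone wanting to replicate that kind of monitoring, here is a minimal sketch of reading an INA260 over I2C with Adafruit's CircuitPython library; the low-voltage cutoff is an illustrative assumption for a 12 V pack, not a Mothbox value.

```python
# Minimal sketch of battery monitoring with an Adafruit INA260 over I2C.
# The 11.0 V cutoff is an illustrative assumption, not a Mothbox number.
import time
import board
import adafruit_ina260

i2c = board.I2C()                      # default I2C pins (e.g. on a Raspberry Pi)
sensor = adafruit_ina260.INA260(i2c)   # default address 0x40

LOW_BATTERY_VOLTS = 11.0               # hypothetical low-voltage threshold

while True:
    volts = sensor.voltage             # bus voltage in volts
    print(f"{volts:.2f} V, {sensor.current:.0f} mA, {sensor.power:.0f} mW")
    if volts < LOW_BATTERY_VOLTS:
        print("Battery low -- time to shut down gracefully")
        break
    time.sleep(60)
```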

See full post
discussion

'Boring Fund' Workshop: AI for Biodiversity Monitoring in the Andes

Thanks to WILDLABS 'Boring Fund' support, we are hosting a workshop on AI for biodiversity monitoring in Medellín, Colombia, April 21st to 24th. This is a follow-up discussion to...

4 14

Hey @benweinstein, this is really great. I bet there are better ways to find bofedales (puna fens) now than what existed back in 2010. I'll share this with the Audubon Americas team.

Hi everyone, following up here with a summary of our workshop!

The AI for Biodiversity Monitoring workshop brought together twenty-five participants to explore uses of machine learning for ecological monitoring. Sponsored by the WILDLABS 'Boring Fund', we were able to support travel and lodging for a four-day workshop at the University of Antioquia in Medellín, Colombia. The goal was to bring together ecologists interested in AI tools and data scientists interested in working on AI applications from Colombia and Ecuador. Participants were selected based on potential impact on their community, their readiness to contribute to the topic, and a broad notion of representation that balanced geographic origin, business versus academic experience, and career stage.

Before the workshop began, I developed a website on GitHub that laid out the aims of the workshop and provided a public focal point for uploading information. I made a number of technical videos, covering subjects like VS Code + Copilot, both to inform participants and to create an atmosphere of early and easy communication. The WhatsApp group, the YouTube channel (link) of video introductions, and a steady drumbeat of short tutorial videos were key in establishing expectations for the workshop.

The workshop material was structured around data collection methods: Day 1) Introduction and Project Organization, Day 2) Camera Traps, Day 3) Bioacoustics, and Day 4) Airborne Data. Each day I asked participants to install packages using conda, download code from GitHub, and be active in supporting each other in solving small technical problems. The large range of technical experience was key in developing peer support. I toyed with the idea of creating a JupyterHub or joint cloud working space, but I am glad that I resisted; it is important for participants to see how to solve package conflicts and the myriad other installation challenges on 25 different laptops.

We banked some early wins to help ease intimidation and create a good flow to the technical training. I started with GitHub and version control because it is broadly applicable, incredibly useful, and satisfying to learn. Using examples from my own work, I focused on GitHub as a way both to contribute to machine learning for biology and to receive help. Building from these command-line tools, we explored VS Code + Copilot for automated code completion, and had a lively discussion on how to balance the utility of these new features with transparency and comprehension.

Days two, three, and four flew by, with a general theme of existing foundational models: BirdNET for bioacoustics, MegaDetector for camera traps, and DeepForest for airborne observation. A short presentation each morning was followed by a worked Python example making predictions on new data, annotating with Label Studio, and developing models with PyTorch Lightning. There is a temptation to develop Jupyter notebooks that outline perfect code step by step, but I prefer to let participants work through errors, so I used a live-coding strategy. All materials are in Spanish and updated on the website. I was proud to see the level of mutual support among participants, and tried to highlight these contributions to promote autonomy and peer teaching.
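For a flavour of those worked examples, here is a minimal sketch in the spirit of the airborne-data day, using DeepForest's prebuilt release model; the filename is hypothetical and the API details may differ between package versions.

```python
# Sketch of a foundational-model prediction, in the spirit of the Day 4
# material. Assumes the deepforest package; API details vary by version.
from deepforest import main

model = main.deepforest()   # wrapper around a RetinaNet tree-crown detector
model.use_release()         # download the prebuilt release weights

# "example_rgb_tile.png" is a hypothetical airborne RGB image
boxes = model.predict_image(path="example_rgb_tile.png")
print(boxes.head())         # bounding boxes with labels and confidence scores
```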

Sprinkled amongst the technical sessions, I had each participant create a two-slide talk, and I would randomly select speakers from the group to break up sessions and help stir conversation. I took it as a good sign that I was often quietly pressured by participants to select their talk in the next random draw. While we had general technical goals and each day had one or two main lectures, I tried to be nimble, allowing space for suggestions. In response to feedback, we rerouted an afternoon to discuss biodiversity monitoring goals and data sources. Ironically, the biologists in the room later suggested that we needed to get back to code, while the data scientists said it was great. Weaving between technical and domain expertise requires an openness to change.

Boiling down my takeaways from this effort, I think there are three broad lessons for future workshops.

  • The group dynamic is everything. Provide multiple avenues for participants to communicate with each other. We benefited more from a smaller group of dedicated participants than we would have from inviting a larger number.
  • Keep the objectives, number of packages, and size of sample datasets to a minimum.
  • Foster peer learning and community development. Give time for everyone to speak. Step in aggressively as the arbiter of the schedule in order to allow all participants a space to contribute.

I am grateful to everyone who contributed to this effort, both before and during the event, to make it a success. Particular thanks go to Dr. Juan Parra for hosting us at the University of Antioquia, UF staff for booking travel, Dr. Ethan White for his support and mentorship, and Emily Jack-Scott for her feedback on developing course materials. Credit for the ideas behind this workshop goes to Dr. Boris Tinoco, Dr. Sara Beery for her efforts at CV4Ecology, and Dr. Juan Sebastian Ulloa. My co-instructors Dr. Jose Ruiz and Santiago Guzman were fantastic, and I'd like to thank ARM through the WILDLABS Boring Fund for its generous support.

See full post
discussion

AI Edge Compute Based Wildlife Detection

Hi all! I've just come across this site and these forums and it's exactly what I've been looking for! I'm based in Melbourne, Australia, and since finishing my PhD in ML I've been...

17 1

Sorry, I meant ONE hundred million parameters.

The Jetson Orin NX has ~25 TOPS of FP16 performance; the large YOLOv6 processing 1280x1280 inputs requires about 673.4 GFLOPs per inference. You should therefore theoretically get ~37 fps. You're unlikely to get this exact number, but you should get around that...
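Spelled out, the arithmetic is just compute budget divided by per-inference cost (a theoretical ceiling, not a benchmark):

```python
# Back-of-the-envelope throughput ceiling (numbers from the post above).
# Real-world fps will be lower: memory bandwidth, pre/post-processing,
# and scheduling all eat into the theoretical budget.
accelerator_flops = 25e12         # Jetson Orin NX, ~25 TOPS FP16
flops_per_inference = 673.4e9     # large YOLOv6 at 1280x1280, ~673.4 GFLOPs

print(f"~{accelerator_flops / flops_per_inference:.0f} fps")  # ~37 fps
```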

Also, later YOLO models (v7+) are much more efficient (they use fewer FLOPs for the same mAP50-95) and run faster.

Most neural-network inference-only accelerators (like Hailo's) use INT8 models, and depending on your use case, the drop in performance may be acceptable.

Ah I see, thanks for clarifying.

BTW, YOLOv7 actually came out earlier than YOLOv6. YOLOv6 has higher precision and recall figures, and I noticed that in practice it was slightly better.

My suspicion is that it's not trivial to translate the layer functions from YOLOv6 or YOLOv9 to Hailo-specific ones without affecting quality in unknown ways. If you manage to do it, do tell :)

The acceptability of a drop in performance depends heavily on the use case. In security, if I get woken up twice a night versus once in six months, I don't care how fast it is; it's not acceptable for that use case for me.

I would imagine that for many wild traps as well, a false positive would mean having to travel out and reset the trap.

But as I haven't personally dropped quantization to 8 bits, I appreciate other people's insights on the subject. Thanks for your insights.

@LukeD, I am looping in @Kamalama997 from the TRAPPER team who is working on porting MegaDetector and other models to RPi with the AI HAT+. Kamil will have more specific questions.

See full post
discussion

Animal Detect is live

Hey everyone, WE ARE FINALLY LIVE! After 8 months of hard work with @HugoMarkoff we are ready to present the first stable version of Animal Detect! Animal Detect is ...

1 4

Super happy to finally have Animal Detect ready for people to use. We are open to any feedback and hope to bring more convenient tools :)

See full post
Link

Open Source Agriculture Repository

Repository for all things open-source in agricultural technology (agritech) development by Guy Coleman. This accompanies the OpenSourceAg newsletter. Aiming to collate all open-source datasets and projects in agtech in one place.

0
discussion

Detecting animals' heading and body orientation

Good day, I have a specific remote surveillance application that is proving to be much more of a challenge than I thought. I need to detect where (GPS fix) African wild dogs (25 kg...

12 1

Thank you Phil,

That sounds as if it might work (but probably with a turn trigger of around 45 degrees), and the baboon collar is well within the weight limit. Where can I find more details about the collar?

Peter

Hi Peter, just tell me exactly what you are looking for. I have commissioned these collars from the engineer who originally made my Virtual Fence back in 2016 (still working). The aim is to have a long life while also taking regular readings (every 5-10 min), so that animals cannot invade croplands or villages without being detected before they can do any damage. We have tried to include all possible features that will be useful, while still maintaining low weight and simplicity. Hence no solar, and no external antennae outside the housing.

Cheers, Phil

See full post
discussion

We are releasing SpeciesNet

We're extremely excited to announce (and open source) SpeciesNet today, our AI model for species recognition in camera trap images, capable of recognising more than 2000 animals...

21 20

This is great news!



I am using rather high-resolution images and have just ordered some 4K (8 MP) camera traps.

The standard MegaDetector run via Addax AI is struggling a bit with detecting relatively small animals (frame-wise), although they have quite a number of pixels. This naturally follows from the resizing in MegaDetector.

I have noticed:

but this seems not readily available in Addax AI. Is it somehow supported in SpeciesNet?

Cheers,

Lars

Hi Ștefan

In my current case, I am trying to detect and count Arctic fox pups. Unfortunately, the Arctic fox does not seem to be included in the training data of SpeciesNet, but even if it were, pups look quite different from adults.

After a quick correspondence with Dan Morris and Peter van Lunteren on the Addax AI GitHub, I was made aware of the image size option of MegaDetector. It seems to help somewhat to run the detection at full resolution (in my case up to 1920x1080). I have the impression that I get more good detections and also fewer false detections (even without repeat_detection_elimination) by using higher resolution.
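For reference, here is a sketch of what that looks like with the megadetector Python package; the function and parameter names follow my reading of the package docs, so treat them as assumptions and double-check before relying on this.

```python
# Hedged sketch: running MegaDetector with a larger input size instead of
# the default resize. Names follow my reading of the 'megadetector' pip
# package and should be verified against its documentation.
from megadetector.detection.run_detector_batch import load_and_run_detector_batch

results = load_and_run_detector_batch(
    model_file="MDV5A",                       # shorthand for MegaDetector v5a
    image_file_names=["fox_pups_0001.jpg"],   # hypothetical image list
    image_size=1920,                          # override the default input resize
)
```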

Dan offered to have a look at my specific challenge so I sent him 10K+ images with fox pups.

See full post
discussion

Remote Bat detector

Hi all, What devices are there on the market capable of recording bats but that can be remotely connected? I am used to working with AudioMoths or SM4s. But I wonder if there are any...

7 0

You will likely need an edge-ML setup where a model runs in real time and sends only detections. The BirdWeather PUC and Haikubox do this, running BirdNET on the edge and sending results over Bluetooth/WiFi/SMS, but you'd have to have these networks already set up in the places you want to deploy a device, which limits deployments in many areas where there is no connectivity.
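As a sketch of that run-on-device, send-only-detections pattern, here is roughly what it looks like with the birdnetlib wrapper (an illustrative stand-in on my part; the PUC and Haikubox internals aren't public here):

```python
# Edge-ML pattern: analyze audio locally, transmit only small detection
# records instead of raw audio. birdnetlib is used here for illustration.
from birdnetlib import Recording
from birdnetlib.analyzer import Analyzer

analyzer = Analyzer()  # loads the BirdNET model once at startup

# "chunk_0001.wav" is a hypothetical recording segment from the device
rec = Recording(analyzer, "chunk_0001.wav", min_conf=0.5)
rec.analyze()

for det in rec.detections:  # each detection is a small dict, cheap to send
    print(det["common_name"], det["confidence"])
```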

So we are building some Bugg devices under licence from Imperial College, and I am making a mod to swap the mic out for a version of our Demeter microphone. This should mean we can use it with something like BattyBirdNET, as it's powered by a CM4. Happy to have a chat if you are interested. Otherwise it will likely need a custom solution, which can be quite expensive!

There are a lot of parameters in principle here: the size of the battery; how much time in the field is acceptable before a visit (once a week? once a month?); how many devices you need; how small the device and battery have to be; etc. I use VPNs over 4G USB sticks.

I ask because in principle I've built devices that can retrieve files remotely and record ultrasonic audio, though the microphones I tested with (Pettersson) are around 300 euros in price. I've been able to record ultrasonic frequencies, and I also believe it will be possible to do TDoA sound localization with the output files if the same sound can be heard on 4 recorders.

But we make commercial devices, and such a thing would be a custom build. It would be nice to know what the demand for such a device is, to know at which point it becomes interesting.

See full post
article

Fires in the Serengeti: Burn Severity & Remote Sensing with Earth Engine

Fires in Serengeti and Masai Mara National Parks have burned massive areas this year. With Google Earth Engine, it's possible to quantify burn severity using the normalized burn ratio function, then calculate the total...

2 3
This was originally presented on 24 April, 2025 as part of a Geospatial Group Cafe. We will post the recording and highlights from other speakers of that session soon!
Thanks for presenting this during our Geospatial Cafe! Looks very useful! I wonder if, to get rid of the haze, smoke and clouds, you could try using the...
See full post
Link

Sticky Pi: A smart insect trap to study daily activity in the field

Sticky Pis are scalable, smart sticky traps that use a high-frequency Raspberry Pi camera to automatically score when, which, and where insects were captured. Author: Quentin Geissmann, 2023

0
Link

Insect Detect: Build your own insect-detecting camera trap!

Sharing this website that provides instructions on DIY hardware assembly, software setup, model training and deployment of the Insect Detect camera trap that can be used for automated insect monitoring. Authors: Maximilian Sittinger, Johannes Uhler, Maximilian Pink, Annette...

1
discussion

Reducing wind noise in AudioMoth recordings

Hi everyone. I'm wondering if anyone has tips for reducing wind noise in AudioMoth recordings. Our study sites are open paddocks and can be subject to high wind. Many...

6 0

Just following up on this: we are suffering from excessive wind noise in our recordings. We have bought some dead cats (furry windjammers), but our AudioMoths are in the latest Dev case (https://www.openacousticdevices.info/audiomoth).

In your collective experience, would anyone recommend positioning the dead cat over the microphone on the AudioMoth itself, or covering the entry port into the device, from either the inside or the outside?

 

Cheers,

Tom

Hi Tom! I think the furry windjammer must be outside the casing to have the desired effect. It can be a bit tricky having this nice furry material that birds and other critters might be attracted to. It may be possible to make an outer "wire cage" to protect the windjammer. We once had to do this to protect a DIY AudioMoth case against foxes wanting to bite the case (no windjammer then). You may, however, create new wind noise from the cage itself... No one said it had to be simple!

See full post
discussion

Experience with AudioMoth Dev for Acoustic Monitoring & Localization?

Hi everyone, I am Dana, a PhD student. I'm planning to use AudioMoth Dev recorders for a passive acoustic monitoring project that involves localizing sound...

14 0

Hi Walter,

Thanks for your reply! It looks like the experiments found very minor time offsets, which is encouraging. Could you clarify what you mean by a "similar field setup"?

In my project, I plan to monitor free-ranging animals — meaning moving subjects — over an area of several square kilometers, so the conditions won't be exactly similar to the experimental setup described.

Given that, would you recommend using any additional tools or strategies to improve synchronization or localization accuracy?

Hi Ryan,

Thanks for your reply! I'm glad to hear that the AudioMoth Dev units are considered powerful.

Have you ever tried applying multilateration to recordings made with them? I would love to know how well they perform in that context.
 

On a more technical note, do you know if lithium batteries (such as 3.7 V LiPo) can provide a reliable power supply for Dev units in high-temperature environments (around 30–50°C)?

Thanks, 
Dana

Hi Dana,

"Similar field setup" means that the vocalizing animal should be surrounded by the recorders and you should have at least 4 AudioMoths recording the same sound; then the localization maths is easy (in the end it is a single line of code). With 3 recorders that receive the sound, localization is still possible but a little bit more complicated. With 2 recorders you only get directions (with left-right ambiguity).

Given the range of movements, and assuming that you do not have a huge quantity of recorders to 'fence' the animals, I would approach the tracking slightly differently. I would place the AudioMoths in pairs using a single GPS receiver powered by one recorder, but connect the PPS wire also to the other recorder. Both recorders, separated by up to 1 m, face the area of interest. For the analysis, I would then use each pair of recorders to estimate the angle to the animal. If you have the same sound at two locations, you will have 2 directions, which will give you the desired location. The timings at the different GPS locations may contain timing errors, but each direction is based on the same clock, so the GPS timing errors are not relevant anymore. If you add a second microphone to the AudioMoths, you can improve the direction further. If you need more specific info or want to chat about details (not of general interest), you can PM me.
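To make the pair-based direction idea concrete, here is a rough numpy sketch of the far-field geometry described above; variable names and numbers are illustrative, not from a specific toolkit.

```python
# Sketch of a direction estimate from one synced recorder pair: because both
# channels share a GPS/PPS clock, the inter-channel delay is reliable.
import numpy as np

def pair_bearing(sig_a, sig_b, fs, spacing_m, c=343.0):
    """Angle of arrival (radians) from the delay between two synced mics."""
    corr = np.correlate(sig_a, sig_b, mode="full")   # cross-correlation
    lag = np.argmax(corr) - (len(sig_b) - 1)         # delay in samples
    tdoa = lag / fs                                  # delay in seconds
    # Far-field assumption: tdoa = (spacing / c) * sin(theta)
    return np.arcsin(np.clip(c * tdoa / spacing_m, -1.0, 1.0))

# Two bearings from two pairs at known positions intersect at the source.
```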

See full post
discussion

No-code custom AI for camera trap images!

Thanks to our WILDLABS award, we're excited to announce that Zamba Cloud has expanded beyond video to now support camera trap images! This new functionality...

3 4

When you process videos, do you not first break them down into a sequence of images and then process the images? I'm confused as to the distinction between processing videos versus images here.

We do, but the way the models handle the images differs depending on whether they're coming from videos or static images. A quick example: videos provide movement information, which can be a way of distinguishing between species. We use an implementation of SlowFast for one of our video models that attempts to extract temporal information at different frequencies. If the model has some concept of "these images are time-sequenced", it can extract that movement information, whereas if it's a straight image model, that concept doesn't have a place to live. But a straight image model can use more of its capacity for learning e.g. fur patterns, so it can perform better on single images. We did some experimentation along these lines and found that models trained specifically for images outperformed video models run on single images.
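As a rough illustration of the clip-versus-frame distinction (the model here is pytorchvideo's public SlowFast, an assumption standing in for Zamba's internals):

```python
# SlowFast consumes two views of the same clip sampled at different frame
# rates; an image model would see one frame with no temporal context.
import torch

model = torch.hub.load("facebookresearch/pytorchvideo",
                       "slowfast_r50", pretrained=True)
model.eval()

clip = torch.randn(1, 3, 32, 256, 256)  # (batch, channels, time, H, W) dummy clip
slow = clip[:, :, ::4]                   # slow pathway: every 4th frame
fast = clip                              # fast pathway: all frames

with torch.no_grad():
    logits = model([slow, fast])         # movement cues live in the fast pathway
print(logits.shape)
```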

Hope that helps clear up the confusion. Happy to go on and on (and on)...

See full post
discussion

First battery-only test of the SBTS thermal smart camera system

Last night I connected my SBTS local AI, thermal-enabled smart camera to a battery and placed it outside overlooking a field. I hoped to get some video of deer, which I've seen...

5 0

Nice thermal camera study with the product you mention. I think it’s the first serious work with thermal AI models. Kudos to the developers.

You should be aware of the resolution difference between this one and the one I present in this article. The one above is a FLIR Lepton with 160x120-pixel resolution, whereas the one I'm presenting is 640x512, roughly 17x the pixels. That's a lot of extra resolution.

The DOC AI unit above is cheaper than the unit I present, which also includes AI object detection.

I can offer a range of resolutions (currently 640x512, 384x288, and 1280x1024) as well as a large range of lenses, including zoomable lenses if you like, such as a 30-180 mm thermal lens together with a 1280x1024-resolution thermal sensor.

Thanks for the insight Kim! It's awesome what you are doing! I am excited for any updates. 

My colleagues, who are looking into getting a customised 2040 DOC AI Thermal camera, need something that has the battery life to be left in the field for weeks due to the remoteness of the survey sites.

Weeks of continuous inference would require a pretty big battery. I expect you would need some kind of customisation, and maybe quite a bit of compromise, to last weeks on a single battery. Good luck with that; power management is challenging.
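For a sense of scale, a rough power budget (all numbers are illustrative assumptions, not measurements of any particular unit):

```python
# Why "weeks of continuous inference" implies a big battery.
load_watts = 5.0                     # hypothetical continuous draw with inference
days = 21                            # "left in the field for weeks"
energy_wh = load_watts * 24 * days   # energy needed over the deployment
print(f"{energy_wh:.0f} Wh")         # 2520 Wh -- roughly 200 Ah at 12.8 V
```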

See full post
discussion

From Field to Funder: How to communicate impact?

Conservation involves a mosaic of actors — field practitioners, local communities, funders, government agencies, scientists, and more. Each one needs different levels of...

2 2

Great questions @LeaOpenForests !

I don't have concrete answers, since I am not a stakeholder in any project in particular. Based on experience with research on the potential for a similar one-stop shop for science metrics, I would suggest that there is no simple solution: different actors do need and have different views on presenting and viewing impact. This means possible gaps between what one group of actors needs and what another is willing or able to produce. One can hope, search, and aim for sufficient overlap, but I don't see how they would necessarily or naturally overlap.

Still, I would guess that if there are dimensions of overlap, they are time, space, and actor-networks.

I have posted about this in a different group, but I love boosting the impact of my communication through use of visuals. 

Free graphics relating to conservation technology and the environment are available at:

  1. National Environmental Science Program Graphics Library

    Graphics such as a feral cat with a tracking collar and a cat grooming trap are examples of symbols available courtesy of the NESP Resilient Landscapes Hub, nesplandscapes.edu.au.

  2. UMCES Integration and Application Network Media Library

See full post
discussion

What is the best light for attracting moths?

We want to upgrade the UV lights on our moth traps. We currently use a UV fluorescent tube, but we are thinking about moving to an LED setup, like the LepiLED or EntoLED. We think...

10 2

I found this thread on iNaturalist really helpful when considering options! Lots of cost-effective setups to consider. I only really do mothing, so this is moth-specific, but perhaps helpful for other insects.

I love that folks also mentioned using an additional flashlight or outward facing light to draw in moths from farther away. I've tried that as well and it always seemed to boost the number of moths on my sheet. 

For continuity: if the light goes away for more than a few seconds, I feel like the spell is broken and they fly away. But this could be tested further. Curious whether blinking makes a difference.

See full post
discussion

Free graphics for conservation tech communications

Visual communication means we're all speaking the same language. Do you know of any conservation tech or general science graphics libraries? I find them helpful for presentations,...

2 2

Not directly conservation tech imagery, but we've used the open-to-contributions PhyloPic library and API on a few projects to get some cute and usable silhouettes based on taxonomies.

Thanks for this! That's great! Also, on a slightly different note: Unsplash is one of the better high-quality stock image websites in terms of licenses, and most images are free to download. Always be cautious of any species IDs, though; I have found that it's better to just take my own photos, even if just on my phone.

Shutterstock vector graphics are not free, but I have found they are great value for money, especially if you have Adobe Illustrator or similar so you can customise the graphics. They have a great range of graphics as well. You can do a month-to-month subscription for $53 AUD for 10 images/graphics per month.

See full post