AI for Conservation / Feed

Artificial intelligence is increasingly being used in the field to analyse information collected by wildlife conservationists, from camera trap and satellite images to audio recordings. AI can learn to identify which photos out of thousands contain rare species, or to pinpoint an animal call in hours of field recordings, hugely reducing the manual labour required to collect vital conservation data.

discussion

Christmas wish list

If anyone has something like this on their Christmas wish list, let me know. It's my new camera product: a Raspberry Pi 5 running in secure boot mode with an encrypted NVMe...

5 0

Yes, I have these. I'm pretty sure my company is the first in the world to sell products running on a Raspberry Pi 5 in secure boot mode as well :-)

I responded to your wish list. From your description, a modified camera like a Hikvision turret might be able to do it for you.

Great, these security cameras might be interesting for monitoring crop development, and maybe for bigger pests like boars or other herbivorous animals that could eventually get into the fields, but this is not what I'm trying to solve now. What I'm also interested in is on-the-go monitoring of weed species, e.g. on a robot, to create a high-resolution map of weed infestation in a field (10-20 hectares). There, such a modified camera could eventually make sense. Thanks for the ideas!!

See full post
discussion

Tracking orangutans 

Hi all! I'm looking for a solution to track orangutans that get released in the wild after rehabilitation in collaboration with the Sumatran Orangutan Conservation Programme....

17 4

Depending on what you really need, you might have different options. I don't think collars/bracelets for great apes are a reliably solved problem yet, and they could pose risks to the animals. Implantable transmitters are one option, but more invasive. I think Chris Walzer (currently at WCS) did some initial work on implanting orangutans (or gibbons), but that was quite some years ago; others might have done it since. FWIW, we have implanted captive chimpanzees with subcutaneous transmitters for other reasons and some have removed/damaged them, so that's a risk. Intra-abdominal placement might be another (though even more invasive) option.

Is visual observation an option? This is well established in many species, though it might be very costly/labour-intensive. I have been involved in a project with howler and spider monkeys where it was used successfully; happy to put you in touch with them.

If your question is primarily about the presence/absence of certain individuals, you could also try DNA-based methods. Someone also suggested below individual ID from video/pictures; this has been done with captive great apes, so it is an option, but you still need to get the shots in the first place, so there are probably not many advantages over manual monitoring methods.

Our city neighbours Robotto are a drone AI software company and have ongoing animal tracking projects in Thailand, Australia, and Greenland (probably more by now) in cooperation with WWF, using drones.

Give them a look! I know Kenneth, their CEO, pretty well, so I can put you two in touch.

A rough explanation of how the tracking process happens:
A forest ranger brings a suitcase with a drone to a watchpoint and pilots the drone around the area for 30-50 minutes while monitoring detections in real time on the provided screen.

I have seen it live in Thailand; it was impressive! :)

In a recent call, researchers in Thailand mentioned that they use passive chip readers to log data about chipped animals as they pass by.

See full post
discussion

Mirror images - to annotate or not?

We are pushing hard to start training our custom model for the PolarBearWatchdog! soon. This includes lots of dataset curation and annotation work. A question that has come up is...

18 0

I made a few rotation experiments with MD5b.

Here is the original image (1152x2048) :

When saving this as a copy in Photoshop, the confidence on the mirror image changes slightly:

and when just cropping to a 1152×1152 square it changes quite a bit:

The mirror-image confidence drops below my chosen threshold of 0.2, but the non-mirrored image now gets a confidence boost.

Something must be going on with overall scaling under the hood in MD, as the targets here have exactly the same number of pixels.

I tried resizing to 640x640:


This bumped the mirror image confidence back over 0.2... but lowered the non-mirrored confidence a bit... huh!?

My original hypothesis was that the confidence could be somewhat swapped just by turning the image upside down (180 degree rotation):

Here is the 1152x1152 crop rotated 180 degrees:

The mirror part now got a higher confidence, but it is interpreted as a sub-part of a larger organism. The non-mirrored polar bear had a drop in confidence.

So my hypothesis was somewhat confirmed...

This leads me to believe that MD is not trained on many upside-down animals...

It seems we should include some rotations in our image augmentations, since the real world can appear a bit tilted, as this cropped corner view from our fisheye camera at the zoo shows.
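
In code, the experiment looks something like this (a rough sketch: `run_megadetector` is a hypothetical placeholder for however you invoke MD locally, and the file name and crop box are illustrative):

```python
# Generate mirrored/rotated/cropped variants of an image and compare the
# top detection confidence on each, as in the experiments above.
from PIL import Image, ImageOps

def variants(path):
    img = Image.open(path)
    yield "original", img
    yield "mirrored", ImageOps.mirror(img)             # horizontal flip
    yield "rotated_180", img.rotate(180)               # upside down
    yield "square_crop", img.crop((0, 0, 1152, 1152))  # same pixel count as above

for name, img in variants("polar_bear.jpg"):
    detections = run_megadetector(img)  # hypothetical wrapper returning [{"conf": ...}]
    top = max((d["conf"] for d in detections), default=0.0)
    print(f"{name}: top confidence {top:.2f}")
```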

See full post
discussion

Conservation Data Strategist?

Hello everyone – long-time lurker, first-time poster... I'm the founder of a recently funded tech startup working on an exciting venture that bridges consumer technology and...

9 1

Great resources being shared! Darwin Core is a commonly used bio-data standard as well.  

For bioacoustic data, there are some metadata standards (GUANO is used by pretty much all the terrestrial ARU manufacturers). Some use Tethys as well.

Recordings are typically captured as .WAV files, but many store them as .FLAC (a lossless compression format) to save space.

For ethics: acoustic data platforms with a public-facing component (e.g., Arbimon, WildTrax) will usually mask presence/absence geographical data for species listed on the IUCN Red List, in CITES appendices, etc., so that you're not giving away the location of a species to someone who would, for example, use it to hunt them.
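
In practice that masking can be as simple as coarsening coordinates before records are published. A toy sketch (the species list and rounding level are illustrative assumptions, not what any particular platform does):

```python
# Coarsen coordinates for sensitive species before publishing records.
SENSITIVE = {"Pangolin", "Helmeted Hornbill"}  # illustrative list

def mask_location(record: dict) -> dict:
    if record["species"] in SENSITIVE:
        # Round to ~0.1 degree (~11 km), enough for range maps but
        # useless for locating individual animals.
        record["lat"] = round(record["lat"], 1)
        record["lon"] = round(record["lon"], 1)
    return record

print(mask_location({"species": "Pangolin", "lat": 1.23456, "lon": 103.98765}))
```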


Hello, I am experienced in conservation data strategy. If you want to have a conversation you can reach me at SustainNorth@gmail.com.

See full post
Link

High-resolution sensors and deep learning models for tree resource monitoring

The paper looks at technology advances for monitoring changes in vegetation cover, for example computer vision methods that infer 3D parameters via contextual learning from optical images.

1
discussion

MegaDetector V6 and Pytorch-Wildlife V1.1.0 !

Hello everyone, we are happy to announce the release of Pytorch-Wildlife V1.1.0. This release includes many new features, including MegaDetectorV6, HerdNet...

5 4

Hello Patrick, thanks for asking! We are currently working on a bioacoustics module and will be releasing it some time early next year. Maybe we can have some of your models in our initial bioacoustics model zoo, or, if you don't have a model yet but have annotated datasets, we can probably train some models together? Let me know what you think!

Thank you so much! We are also working on a bounding-box-based aerial animal detection model, which will hopefully also be released early next year. It would be great to see how the model runs on your aerial datasets! We will keep you posted!

Hi Zhongqi! We are finalizing our modelling work over the next couple of weeks and can make our work available for your team. Our objective is to create small (<500k parameters) quantized models that can run on low-power ARM processors. We have custom hardware that we built around them and will be deploying back in Patagonia in March 2025. Would be happy to chat further if interested!
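
A minimal sketch of that kind of post-training quantization, assuming PyTorch (the tiny model here is a toy stand-in, not the detector described above):

```python
# Shrink a small model for low-power targets via dynamic int8 quantization.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))  # toy stand-in
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
torch.save(quantized.state_dict(), "model_int8.pt")
```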

We have an active submission to Nature Scientific Data with the annotated dataset. Once that gets approved (should be sometime soon), I can send you the figshare link to the repo. 

See full post
discussion

Mass Detection of Wildlife Snares Using Airborne Synthetic Radar

For the last year, my colleagues Prof. Mike Inggs (Radar, Electrical Engineering, University of Cape Town) and...

42 18

Is this funding grant an opportunity? https://www.dronedeploy.com/blog/expand-your-impact-with-a-grant-from-dronedeploy

Hi David, this is an incredible project. Would you be interested in sharing more of your experience with AI and wildlife conservation with my students? They are currently researching this, and would greatly benefit from speaking with a professional in the field. Thank you for considering!

Hats off to your team for this absolutely game-changing technology!

We rescue stray and wild animals in Taiwan, and the bulk of our work is saving animals maimed by wire snares and gin traps. We've become better at finding the devices, but still not good at all. There's simply too much difficult terrain to cover and we only have eyeballs and hiking sticks to find them. We know roughly where they are because the maimed stray dogs will eventually find their way onto a road and be reported to us. Then we close one of them, set up a trail camera, get the evidence of the poacher in the act of re-setting it, and get him prosecuted and shut down. But we need to be able to scale this greatly.

I've been using a thermal-imaging drone to locate stricken animals and am now considering buying another drone more suited to finding traps and snares. Some newer drones are able to navigate through forest without crashing into thin branches, so I've been looking into equipping one with LiDAR to see if that can detect the devices. But then I came across your YouTube channel and then this post about using airborne synthetic radar, and I'm incredibly excited to see where you might take this incredible technology.

How can we get our hands on the SAR you're using? It's 3 kg, right? I'm wondering if I could fit it to a suitable drone. If it works above forest canopy to detect traps and snares on the forest floor, then I can use a load-carrying drone instead of a light obstacle-avoidance drone.

If you made the SAR yourselves, then maybe think about crowdfunding for your project. I'd happily pledge funds if it meant I could get my hands on the kind of equipment you're using.

I can't tell you how happy I am thinking about all the animals' lives you'll save with this. Don't just remove the snares—gather evidence and put the poachers out of business too!

See full post
discussion

Instant Detect 2.0 and related cost

I am doing a research project on rhino poaching at Kruger National Park. I was impressed with the idea of Instant Detect 2.0. I do not know the cost involved with installing that...

6 0

Sam, any update on Instant Detect 2.0? Previously you mentioned that you hoped to go into volume production by mid-2024.

I would also love to see a comparison between Instant Detect 2.0 and Conservation X Labs' Sentinel products, if anyone has done one.

Are there any other similar solutions currently on the market, specifically with the images-over-LoRa capability and a camera-to-satellite solution?

There are quite a few DIY or prototype solutions described online and in the literature, but it seems none of these have made it to market yet as generally available, fully usable products. We can only hope.
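
For context on why images over LoRa is the hard part, here is a toy sketch of the chunking involved (the payload size and the `radio` object are illustrative assumptions, not any real product's protocol):

```python
# LoRa frames carry roughly 51-222 bytes of payload depending on region and
# spreading factor, so even a small compressed thumbnail must be split into
# many sequenced packets and reassembled at the receiver.
MAX_PAYLOAD = 200  # bytes; an optimistic assumption

def send_image(radio, jpeg_bytes: bytes) -> None:
    chunks = [jpeg_bytes[i:i + MAX_PAYLOAD]
              for i in range(0, len(jpeg_bytes), MAX_PAYLOAD)]
    for seq, chunk in enumerate(chunks):
        # A real system also needs ACKs, retries, and duty-cycle limits;
        # a 20 kB image is already ~100 packets at this payload size.
        radio.send(bytes([seq & 0xFF, len(chunks) & 0xFF]) + chunk)
```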

See full post
discussion

AI Animal Identification Models

Hi Everyone,I've recently joined WILDLABS and I'm getting to know the different groups. I hope I've selected the right ones for this discussion...I am interested in AI to identify...

25 4

I trained the model tonight. Much better performance! mAP has gone from 73.8% to 82.0%, and running my own images through it, anecdotally it is behaving better.

After data augmentation (horizontal flip, 30% crop, 10° rotation), my image count went from 1,194 total images to 2,864. I trained for 80 epochs.
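
For reference, a sketch of that kind of augmentation pipeline, assuming the albumentations library and YOLO-format boxes (file names, probabilities, and crop size are illustrative):

```python
# Flip / bbox-safe crop / small-rotation augmentations that keep boxes valid.
import albumentations as A
import cv2

transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.RandomSizedBBoxSafeCrop(height=640, width=640, p=0.5),
        A.Rotate(limit=10, p=0.5),
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

image = cv2.imread("moose_0001.jpg")
out = transform(image=image,
                bboxes=[(0.5, 0.5, 0.4, 0.3)],  # (x_center, y_center, w, h), normalized
                class_labels=["moose"])
cv2.imwrite("moose_0001_aug.jpg", out["image"])
```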

[Image: inference results]

Very nice!

I thought it would retain all the original YOLO classes (person, bike, etc.), but it doesn't. That explains why it is seeing everything as a moose right now!

I had the same idea. ChatGPT says there should be possibilities though... 

You may want to explore more "aggressive" augmentations like the ones possible with 

to boost your sample size. 

Or you could expand the sample size by combining with some of the annotated datasets available at LILA BC like:

Cheers,

Lars

As others have said, pretty much all image models at least start with general-subject datasets ("car", "bird", "person", etc.) and have to be refined to work with more precision ("deer", "antelope"). A prerequisite for such refinement is a decent amount of labeled data ("a decent amount" has become smaller every year, but it still has to cover the range of angles, lighting, obstructions, etc.). The particular architecture (YOLO, ImageNet, etc.) has some effect on accuracy, the amount of training data needed, and so on, but is not crucial to choose early; you'll develop a pipeline that allows you to retrain with different parameters and (if you are careful) different architectures.
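
As a concrete sketch of that refine-and-retrain loop, assuming the Ultralytics YOLO tooling and a YOLO-format dataset config (paths and hyperparameters are illustrative):

```python
# Fine-tune a small general-purpose detector on labeled wildlife data.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # "n" = nano variant, a realistic edge target
model.train(data="wildlife.yaml", epochs=80, imgsz=640)  # config lists paths/classes
metrics = model.val()        # mAP etc., for comparing runs
model.export(format="onnx")  # e.g. for deployment on a coprocessor
```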

You can browse many available datasets and models at huggingface.co 

You mention edge, so it's worth mentioning that different architectures have different memory requirements, and, within an architecture, there will generally be a series of models ranging from lower to higher memory requirements. You might want to use a larger model during initial development (since you will see success faster), but don't do it for too long or you might misestimate performance on an edge-appropriate model.

In terms of edge devices, there is a very real capability-to-cost trade-off. Arduinos are very inexpensive but very underpowered, even relative to, e.g., Raspberry Pis. The next step up is small dedicated coprocessors such as the Coral line. Then you have the Jetson line from NVIDIA, which is extremely capable but priced more towards manufacturing/industrial use.

See full post
article

Using AI, barriers and bridges to help stop wildlife-vehicle collisions

Wildlife on roads creates a significant hazard in rural areas, to humans and animals alike. Low-tech prevention methods such as overpasses give great results, but they are expensive and can’t cover every scenario. Now...

2 0
See full post
discussion

What AI object detectors really see

After processing millions of images and examining countless false positives, a number of these provide interesting insights into what computer vision is really learning. The...

4 0

ChatGPT tells me: 

While YOLO has advanced capabilities in object detection by integrating both global outlines and textures, it still faces challenges when these visual cues are compromised. The model's performance can degrade in scenarios where objects are presented in abstract forms lacking clear outlines or distinctive textures.

Very interesting, Lars. Clearly there are some fun experiments we can do. The comment above is interesting; it suggests that there might be quite some merit in the detail-enhancement algorithms that some thermal cores can apply. More testing to do :) My personal feeling is that the detail it eventually learns is a lot more basic than one thinks. That being the case, there are likely a lot of simple ways to fool it, but now I've been challenged to make and test some examples, I see.
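
A simple way to run those experiments (a rough sketch; `detect` is a hypothetical stand-in for whatever detector wrapper you use, and the file name is made up):

```python
# Reduce an image to shape-only and texture-degraded variants and see how
# detection confidence changes, probing the outline-vs-texture question.
from PIL import Image, ImageFilter

img = Image.open("deer.jpg")
gray = img.convert("L")
silhouette = gray.point(lambda p: 255 if p > 128 else 0)  # global outline, no texture
blurred = img.filter(ImageFilter.GaussianBlur(8))         # coarse shape kept, texture gone

for name, variant in [("original", img), ("silhouette", silhouette), ("blurred", blurred)]:
    print(name, detect(variant))  # hypothetical detector call
```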

I am certainly a somewhat vertical thing with a lump on top, so I get it...

See full post
discussion

Training datasets for counting Grey Seals in UAV imagery

Does anyone have training datasets for (grey seal) object detection in UAV images, or suggestions for where I might be able to find some?

2 0

Hi Sarah!

This paper mentions seals (though I'm not sure if they are grey seals):

https://www.mdpi.com/2076-3417/13/18/10397

The dataset is listed here: 

and can be found here:

Cheers,

See full post
discussion

PolarBearWatchdog! Advancing Arctic Safety with an AI-driven Polar Bear Detection System

Together with Kim Hendrikse and collaborators who have polar bear footage, I am developing a camera-based polar bear alarm: PolarBearWatch. ...

1 6
See full post
Link

pyeogpr - a Python-based library that uses Earth Observation data to retrieve biophysical maps using Gaussian Process Regression

The library makes it possible to derive biophysical maps with a few lines of code (the only requirements: a region of interest, a temporal domain, and a satellite sensor). All processing runs in the cloud using the openEO API, so there is no need to download any imagery.

1