
Camera Traps / Feed

Looking for a place to discuss camera trap troubleshooting, compare models, collaborate with members working with other technologies like machine learning and bioacoustics, or share and exchange data from your camera trap research? Get involved in our Camera Traps group! All are welcome whether you are new to camera trapping, have expertise from the field to share, or are curious about how your skill sets can help those working with camera traps. 

discussion

Thesis Collaboration

Hello everyone, I am an experienced data scientist and am currently studying for a second master's in Environment Management (ULB - Belgium). I am looking for a master...


Hi Simon,
Did you already contact INBO? Both biologging and citizen science are big themes at INBO. Last year we had a master's thesis on camera trapping invasive muntjac. You can send me a private message for more info!

Hi Simon,

We're a biologging start-up based in Antwerp and are definitely open to collaborating if you're interested. We've got some programs going on with local zoos. Feel free to send me a DM if you'd like to know more.

Hi Simon,
We (Reneco International Wildlife Consultants) have an ongoing collaboration with a local university (Abu Dhabi, UAE) for developing AI tools (camera trap/drone image and video analyses) and biomimetic robots applied to conservation (e.g. https://www.sciencedirect.com/science/article/pii/S1574954124004813). We also have a genetics team working on eDNA. Field experience could be possible, in the UAE or Morocco.
Feel free to write back if you are interested and would like to know more.
 

discussion

State of the art thermal imaging core and the zoo

First of all I would like to extend my thanks and gratitude to Minke Geense from Gaiazoo, Limburg, The Netherlands for making this test possible. I recently obtained a couple of...


I would also be interested - I'm looking at starting a project that needs observation of large African animals with nocturnal habits... The holy grail with unlimited funding would be a grid of hundreds of cameras :-)

@HeinrichS there’s still time for you or anyone else to make a funding submission to the wildlabs 2025 grants ❤️❤️❤️

I haven't applied for wildlabs funding, but I would love for others who want to use my systems to apply. My preference goes to those who want to use the most units :-)

discussion

Rat story

A video speaks a thousand words. I thought I'd share some fun observing the local wildlife in the back yard in thermal. 

discussion

Canopy access or tree climbing resources for arboreal research

Hi everyone. I am a professional tree climbing/canopy access trainer and work in conservation tech. I've climbed or trained climbers in several countries to deploy arboreal...


Hi all! Folks may be interested in the Cornell Tree Climbing program, part of Cornell Outdoor Education. Not only does Cornell offer training and a bunch of online resources, it has also facilitated groups of scientists collecting canopy samples and data. 

Hi Dominique,

Thanks for your responses and congratulations on getting trained! 

I can see that speaking directly with a climbing professional could be the most beneficial, because the climbing methods and equipment you may need depend very much on individual project goals and budgets. Did you end up speaking with your trainers about your field research goals and what climbing methods may be best for you? 

Hi Mark, thanks for responding. I think you've identified one of the most difficult parts of research climbing: maintaining your climbing skills and knowledge between field sessions. 

My husband is an experienced arborist and practices his skills almost daily. I am not an arborist, so I schedule climbing time to keep my abilities fresh and my husband is there to assist. But I know it's difficult for individual researchers to practice on their own and they should only be climbing alone if experienced and not in a remote area.

However, it's possible to train researchers to safely climb in the field for multiple field sessions. My husband and I trained a group of climbers in Cameroon in January 2024. The goal was to train four climbers who would go into the remote rainforest for several weeks and set up camera traps. They would deploy and retrieve arboreal cameras at different survey locations over two years. We needed to train the teams to operate safely and independently (without an instructor present) in very remote areas. 

To train them sufficiently, my husband and I spent 1 month in Cameroon with the field team. We did a few days of basic training at a location near the city and then went with them on their initial camera deployment where we continued field training for 2.5 - 3 weeks. Before going to Cameroon, we had several discussions to determine what climbing method and equipment would best meet their data collection goals and were appropriate for their field site and conditions. We taught them basic rescue scenarios. We also set a climbing practice schedule for the team to maintain their climbing and rescue skills. We strongly emphasized to their management that the field team needed access to the climbing gear and needed to follow the practice schedule. Since the training, the team successfully finished two other camera trap surveys and is planning their third.

This was a lot of up-front training and cost. However, these climbers are now operating on their own and can continue operating for the foreseeable future. I think a big reason is receiving extensive training, tailored to their needs. General tree-climbing courses are great for learning the basics, but they'll never be a substitute for in-field, tailored training.

discussion

What the mice get up to at night

Last night I set up the following rig to see if there were any mice in the kitchen at night. It consists of my StalkedByTheState smart camera trap system and a 640x512 resolution...


And I see now they can walk vertically up walls like Spider-Man.


Walking up walls

discussion

Joint ecoacoustic & camera-trapping project in Indonesia!

WildMon is super excited to be partnering with Planet Indonesia on a wildlife monitoring project combining bioacoustics and camera trapping in Kalimantan, Indonesia! In combining...


Hello Carly, 

Congratulations on this project!

I am currently studying for a second MA in Environment Management. I would like to do my MA thesis project on these technologies (bioacoustics and camera traps), and I wonder if you would be interested in collaborating with me on that?

I already have professional experience, so I think my contribution could be valuable to you.

Thank you in advance for answering,

Simon

discussion

Pytorch Wildlife v5 detection in thermal

Hope everyone had a nice Christmas. I'd like to share my experiment with Pytorch Wildlife V5 and a high res (640x512) thermal core, connected to a Raspberry Pi 5. The thermal...


I added plain old motion detection because MegaDetector V5 was not working well with the smaller rat images in thermal.
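A minimal sketch of this kind of plain motion detection, using OpenCV background subtraction (an assumption; the post doesn't say which method was used). The video source, threshold, and blob size are placeholders to tune for a thermal feed:

```python
# Frame-differencing motion detector: flag any frame whose foreground
# mask contains a blob bigger than a rat-sized area threshold.
import cv2

cap = cv2.VideoCapture("thermal.mp4")  # or a live camera index
back = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = back.apply(frame)                               # foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # drop speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if any(cv2.contourArea(c) > 50 for c in contours):     # tune per scene
        print("motion detected")
cap.release()
```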

This works really well:
 

Rat in thermal
 

Also, I can see really easily where the rat comes into the shed, see this series:

Before

Just visible

Clearly visible

So now I have a way to build up a thermal dataset for training :-)
 

discussion

Solar powered Raspberry Pi 5

Hi, I would like to run a Raspberry Pi 5 off a battery in a mode where it will boot up and run for just 1 1/2 minutes every hour (to take some snapshots). Does anyone have any...
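One possible shape for such a duty cycle, sketched as a script run at boot (as root). It assumes an RTC registered as rtc0 that supports the standard Linux wakealarm interface (the Pi 5's onboard RTC can do this with the right EEPROM power settings, or an external DS3231) and that rpicam-still is available; paths and timings are placeholders:

```python
# Take snapshots for ~90 s, arm the RTC to wake at the top of the next
# hour, then halt. Needs root for the wakealarm write and shutdown.
import subprocess, time

SNAPSHOT_SECONDS = 90
WAKEALARM = "/sys/class/rtc/rtc0/wakealarm"

end = time.time() + SNAPSHOT_SECONDS
while time.time() < end:
    stamp = int(time.time())
    subprocess.run(["rpicam-still", "-o", f"/home/pi/snaps/{stamp}.jpg"])
    time.sleep(10)                      # one snapshot every ~10 s

next_hour = (int(time.time()) // 3600 + 1) * 3600
with open(WAKEALARM, "w") as f:
    f.write("0")                        # clear any previously set alarm
with open(WAKEALARM, "w") as f:
    f.write(str(next_hour))             # epoch seconds of the next wakeup

subprocess.run(["shutdown", "-h", "now"])
```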


LiFePO4 battery BMS, 1S 3.2V 12A, charge/short-circuit protection PCM with self-recovery, for electric motor / LED light - AliExpress

Actually, you can use any BMS as long as it supports 1S, and that will be sufficient. However, I recommend connecting the batteries in parallel to increase the capacity and using just one BMS. Everything should run through the power output of the BMS.

That said, don’t rely too much on BMS units from China, as they might not perform well in the long run. I suggest getting a high-quality one or keeping a spare on hand.

 

Don't they all come from China?

The one in the picture I can find on AliExpress for 2.53 euros.

I'm not sure how you would get one that doesn't come from China.

In any case, I know what to search for now; that's very helpful. Thank you.

What is the 1S thing you mention above?

If you have any links to what you consider a high-quality one, that would be great!

Actually, you can source the product from anywhere, but I'm not very confident in the quality of products from China. That doesn't mean products from China are bad; it might just be my bad luck. However, trust me, they can work just as well; you just need to ensure proper protection.

For 1S, it refers to the number of battery cells supported. A 1S configuration for LiFePO4 means a single battery cell, nominally 3.2 V (about 3.65 V fully charged). Of course, you must not confuse LiFePO4 and Li-Ion BMSs, as the two types are not interchangeable and cannot be used together.

***However, you can use a power bank board designed for Li-Ion to draw power from the battery for use. But you must not charge the LiFePO4 battery directly without passing through a BMS; otherwise, your battery could be damaged.***

BMS 

Battery 

powerbank board

discussion

Christmas wish list

If anyone has something like this on their Christmas wish list let me know. It's my new camera product, a Raspberry Pi 5 running in secure boot mode with an encrypted nvme...


Yes, I have these. I'm pretty sure that my company is the first company in the world to sell products running on Raspberry Pi 5 in secure boot mode as well :-)

I responded to your wish list. From your description, a modified camera like a Hikvision turret might be able to do it for you.

Great, these security cameras might be interesting for monitoring crop development, and maybe bigger pests like boars or other herbivores that could get into the fields, but this is not what I'm trying to solve now. What I'm also interested in is on-the-go weed species monitoring, e.g. on a robot, to create a high-resolution map of weed infestation in a field (10-20 hectares). There such a modified camera could eventually make sense. Thanks for the ideas!!

discussion

Mirror images - to annotate or not?

We are pushing hard to start training our custom model for the PolarBearWatchdog! soon. This includes lots of dataset curation and annotation work. A question that has come up is...


I made a few rotation experiments with MD5b.

Here is the original image (1152x2048):

When saving this as a copy in Photoshop, the confidence on the mirror image changes slightly:

and when just cropping to a (1152x1152) square it changes quite a bit:

The mirror image confidence drops below my chosen threshold of 0.2 but the non-mirrored image now gets a confidence boost.

Something must be going on with overall scaling under the hood in MD as the targets here have the exact same number of pixels. 

I tried resizing to 640x640:

640x640

This bumped the mirror image confidence back over 0.2... but lowered the non-mirrored confidence a bit... huh!?

My original hypothesis was that the confidence could be somewhat swapped just by turning the image upside down (180 degree rotation):

Here is the 1152x1152 crop rotated 180 degrees:

The mirror part now got a higher confidence, but it is interpreted as a sub-part of a larger organism. The non-mirrored polar bear had a drop in confidence.

So my hypothesis was somewhat confirmed...

This leads me to believe that MD is not trained on many upside-down animals...

 

Seems like we should include some rotations in our image augmentations as the real world can be seen a bit tilted - as this cropped corner view from our fisheye at the zoo shows.
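For anyone wanting to repeat this kind of experiment, a sketch of scripting the variants; `run_detector` is a placeholder stub for whatever detection call you use (e.g. MegaDetector via Pytorch-Wildlife), and the image path is made up:

```python
# Generate mirrored / rotated variants of one image and compare the best
# detection confidence on each.
from PIL import Image

def run_detector(img):
    """Placeholder: call your detector here and return a list of
    (label, confidence) tuples."""
    raise NotImplementedError

def variants(img):
    yield "original", img
    yield "mirrored", img.transpose(Image.FLIP_LEFT_RIGHT)
    yield "rot180", img.rotate(180)

img = Image.open("polar_bear.jpg")
for name, v in variants(img):
    dets = run_detector(v)
    best = max((conf for _, conf in dets), default=0.0)
    print(f"{name}: best confidence {best:.2f}")
```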

discussion

Recommended lora receiver capable microcontrollers

If one wanted to build a microcontroller-based receiver for LoRa packets, to go hand in hand with a LoRa point-to-point transmitter, does anyone have any they can recommend? Any for the...


@ioanF, you can do a little bit better than 10 mA. I have here an Adalogger RP2040 Feather with a DS3231 RTC wing and an I2S MEMS microphone. During "dormant" mode, running from the XOSC and waiting for a DS3231 wakeup, I get 4.7 mA. This includes about 1 mA for the microphone and DS3231 together. OK, that is still 3 mA higher than the 0.7 mA the RP2040 documentation is said to claim. I guess there is some uncertainty with the AT35 USB tester I'm using. Pulling the LDO enable pin (EN) to ground, the USB tester reads 1 mA, which may be dominated by the offset of the tester, as the LiPo charger, the only remaining component, should consume at most 0.2 mA.

Edit (for the record): after 'disabling' all GPIOs except the RTC wakeup pin, the system (RP2040 + DS3231 RTC + I2S MEMS microphone) consumes only 1.8 mA. I guess I'm now close to the claimed dormant power consumption!
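For reference, a MicroPython sketch of the DS3231-wakeup pattern described above. Note that MicroPython exposes lightsleep rather than the RP2040's true dormant mode, and the pin numbers and once-a-minute alarm are assumptions; register addresses are from the DS3231 datasheet:

```python
# Arm DS3231 alarm 1 over I2C, then low-power wait until the chip pulls
# its open-drain INT/SQW line low.
from machine import Pin, I2C, lightsleep

i2c = I2C(1, scl=Pin(3), sda=Pin(2))
DS3231 = 0x68

def bcd(n):                       # binary -> BCD, as the DS3231 expects
    return (n // 10 << 4) | (n % 10)

# Alarm 1 fires when seconds match 0 (once a minute): A1M1=0 on the
# seconds register, A1M2..A1M4=1 (bit 7 set) to ignore min/hour/day.
i2c.writeto_mem(DS3231, 0x07, bytes([bcd(0), 0x80, 0x80, 0x80]))
i2c.writeto_mem(DS3231, 0x0E, bytes([0b00000101]))  # INTCN=1, A1IE=1
i2c.writeto_mem(DS3231, 0x0F, bytes([0x00]))        # clear alarm flags

wake = Pin(6, Pin.IN, Pin.PULL_UP)
while wake.value():
    lightsleep(100)               # sleep in 100 ms slices while waiting
print("DS3231 alarm fired")
```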

A good candidate for low-power hibernation and processing power is the Teensy 4.1 from PJRC, which is an ARM Cortex-M7. The standard clock is 600 MHz and there is provision for 16 MB of PSRAM. It hibernates at 0.1 mA. More important than the specs, there is a very active community (the best, IMHO) at the Teensy Forum, with a direct connection to the developer (Paul Stoffregen). For a simple recorder, consumption is 50% higher than the RP2040 (Teensy running at 24 MHz and RP2040 running at 48 MHz, but the RP2040 is an M0+, not an M7). 

Thanks! The Teensys are nice for processing power; if choosing an external LoRa board, I'd say that's a good choice. I started with Teensys; there was a well-supported code base and they were certainly well priced.

My preference is for one with onboard LoRa. I've used one with an onboard Murata LoRa module before. It was good both for low-power operation and for its LoRa performance.

For a transmitter, the Grasshopper, if it's still being made, is quite good for short distances because it has an onboard ceramic antenna, which is good for about three houses away, although one was also received more than 20 km away.

discussion

Individual Identification of Snow Leopard

Hello all, my name is Raza Muhammad. I am currently working on the conservation of snow leopards in Pakistan, and I have a lot of camera trap data. Please guide me on how to use AI to...


Hi Raza,

I manage a lot of snow leopard camera trap data also (I work for the Snow Leopard Trust). I am not aware of any AI solutions for snow leopards that can do all the work for us, but there are 'human-in-the-loop' approaches, in which AI suggests possible matches and a human makes the actual decision. Among these are Hotspotter and the algorithm in WildMe. 

You can make use of Hotspotter in the software TrapTagger. I have found this software to be very useful when identifying snow leopards. Anecdotally, I think it improves the accuracy of snow leopard identifications. But, like I said, you still have to manually make the decisions; the results from Hotspotter are just a helpful guide.

The other cutting-edge solutions mentioned here (e.g. MegaDescriptor, linked above) will require a massive dataset of labelled individuals, and considerable expertise in Python to develop your own solution. I had a quick look at the paper and they were getting around 60-70% accuracy for leopards, which is a much easier species than snow leopard. So I don't think this approach is useful, at least for now. Unless I've misunderstood something (others who deeply understand this, please chime in and correct me!).
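To give a flavour of the Python work involved, a sketch of embedding-based matching. The model id follows the MegaDescriptor release on HuggingFace from the WildlifeDatasets authors, but treat it and the preprocessing as assumptions to verify against their current docs:

```python
# Extract re-ID embeddings and rank known individuals by cosine similarity.
import timm, torch
from PIL import Image
from timm.data import resolve_data_config, create_transform

model = timm.create_model("hf-hub:BVRA/MegaDescriptor-L-384",
                          num_classes=0, pretrained=True).eval()
tf = create_transform(**resolve_data_config({}, model=model))

def embed(path):
    x = tf(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return torch.nn.functional.normalize(model(x), dim=1)

query = embed("unknown_snow_leopard.jpg")
gallery = {name: embed(f"known/{name}.jpg") for name in ["SL01", "SL02"]}
scores = {n: float(query @ g.T) for n, g in gallery.items()}
print(sorted(scores.items(), key=lambda kv: -kv[1]))  # best match first
```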

Incidentally, I did try to gain access to WildMe / Whiskerbook last year but wasn't successful in getting an account. @holmbergius can you help me out? That would be appreciated, thanks!

Best of luck Raza, let me know if I can help more,

Ollie

 

An example of Hotspotter doing quite a good job, even with blurry images, of successfully drawing attention to matching parts of the body. This is a screenshot from TrapTagger, which very helpfully runs Hotspotter across your images.

Note that I wouldn't necessarily recommend ID'ing snow leopards from such poor imagery, but this just demonstrates that Hotspotter can sometimes do quite well even on harder images.

Side-by-side comparison of two snow leopard camera trap images, showing coloured areas that the Hotspotter algorithm thinks are matching. The images are in black and white and the patterns are quite blurred due to the movement of the snow leopards.

 

 

Hi Raza,

As @ollie_wearn suggests, I think TrapTagger will be the easiest for you: 

You just have to follow the tutorial there:

The person in charge of TrapTagger is also very responsive.

best,

discussion

GreenCrossingAI Project Update

Hello Wildlabs community! My name is Shawn Johnson and I am a research assistant for Dr. Karen Mager and Dr. Bernie Boscoe here at Southern Oregon University located in Ashland,...

discussion

MegaDetector V6 and Pytorch-Wildlife V1.1.0 !

Pytorch-Wildlife V1.1.0: Hello everyone, we are happy to announce our release of Pytorch-Wildlife V1.1.0. In this release we have many new features including MegaDetectorV6, HerdNet...


Hello Patrick, thanks for asking! We are currently working on a bioacoustics module and will be releasing it sometime early next year. Maybe we can have some of your models in our initial bioacoustics model zoo, or if you don't have a model yet but have annotated datasets, we can probably train some models together? Let me know what you think! 

Thank you so much! We are also working on a bounding-box-based aerial animal detection model. Hopefully we will release it sometime early next year as well. It would be great to see how the model runs on your aerial datasets! We will keep you posted! 

Hi Zhongqi! We are finalizing our modelling work over the next couple of weeks and can make our work available for your team. Our objective is to create small (<500k parameters) quantized models that can run on low-power ARM processors. We have custom hardware that we built around them and will be deploying back in Patagonia in March 2025. Would be happy to chat further if interested!

We have an active submission to Nature Scientific Data with the annotated dataset. Once that gets approved (should be sometime soon), I can send you the figshare link to the repo. 

discussion

Instant Detect 2.0 and related cost

I am doing a research project on rhino poaching at Kruger National Park. I was impressed with the idea of Instant Detect 2.0. I do not know the cost involved with installing that...


Sam, any update on Instant Detect 2.0? Previously you mentioned that you hoped to go into volume production by mid-2024.

I would love to also see a comparison between Instant Detect 2.0 and Conservationxlabs' Sentinel products if anyone has done comparisons.

Are there any other similar solutions currently on the market, specifically with images-over-LoRa capability and a camera-to-satellite solution?

There are quite a few DIY or prototype solutions described online and in the literature, but it seems none of these have made it to market yet as generally available, fully usable products. We can only hope. 

discussion

Inquiry About e-con Systems/arducam Cameras for Camera Trapping Projects

Hi all,I’m looking into using e-con Systems / Arducam cameras for my camera trapping research and wanted to get feedback from anyone who has experience with these models....


I think the big thing is power consumption. Commercial camera traps have a large power (current) dynamic range. That means they can often swing from ~0.1 mA to ~1000 mA of current within a few milliseconds. It's often difficult to replicate that in DIY systems which is why you don't see a lot of Raspberry Pi camera traps. The power consumption is often too high and the boot time is too long. 

One of the big challenges is powering down the system so that it's essentially in sleep mode and having it wake up in less than a second. That said, if you're mainly doing time lapse or don't have the strict speed requirements to wake up that quickly, it may make sense to roll your own camera trap.

Anyways, hope I'm not being too discouraging. It never hurts to give it a shot and please feed back your experiences to the forum. I'd love to hear reviews about Arducam and it's my first time hearing about e-con Systems. 

Akiba

discussion

AI Animal Identification Models

Hi Everyone, I've recently joined WILDLABS and I'm getting to know the different groups. I hope I've selected the right ones for this discussion... I am interested in AI to identify...


I trained the model tonight. Much better performance! mAP has gone from 73.8% to 82.0%, and running my own images through it, anecdotally it is behaving better.

After data augmentation (horizontal flip, 30% crop, 10° rotation) my image count went from 1194 total images to 2864 images. I trained for 80 epochs.

inference results
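For anyone wanting to try a similar recipe, a sketch with Ultralytics YOLO; the checkpoint, dataset YAML, and exact augmentation values are placeholders rather than the settings used above:

```python
# Train ~80 epochs with flips and small rotations baked into training.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # small pretrained checkpoint
model.train(
    data="moose_dataset.yaml",    # your images + labels
    epochs=80,
    fliplr=0.5,                   # horizontal flip half the time
    degrees=10.0,                 # random rotation up to +/-10 degrees
    scale=0.3,                    # scale jitter, a rough stand-in for crops
)
print(model.val().box.map)        # mAP50-95 on the validation split
```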

Very nice!

I thought it would retain all the classes of YOLO (person, bike, etc.) but it doesn't. This explains why it is seeing everything as a moose right now!

I had the same idea. ChatGPT says there should be possibilities though... 

You may want to explore more "aggressive" augmentations like the ones possible with 

to boost your sample size. 
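One widely used option for heavier augmentation is albumentations (named here as an example; the library linked above isn't visible in this thread). A minimal bbox-aware sketch:

```python
# Compose flips, photometric jitter, blur, rotation, and a bbox-safe crop.
import albumentations as A
import cv2

aug = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.RandomBrightnessContrast(p=0.5),
        A.MotionBlur(p=0.2),
        A.Rotate(limit=15, p=0.5),
        A.RandomSizedBBoxSafeCrop(height=640, width=640, p=0.3),
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["labels"]),
)

img = cv2.imread("moose.jpg")
out = aug(image=img, bboxes=[(0.5, 0.5, 0.4, 0.3)], labels=["moose"])
cv2.imwrite("moose_aug.jpg", out["image"])
```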

Or you could expand the sample size by combining with some of the annotated datasets available at LILA BC like:

Cheers,

Lars

As others have said, pretty much all image models at least start with general-subject datasets ("car," "bird," "person", etc.) and have to be refined to work with more precision ("deer," "antelope"). A necessity for such refinement is a decent amount of labeled data ("a decent amount" has become smaller every year, but still has to cover the range of angles, lighting, and obstructions, etc.). The particular architecture (yolo, imagenet, etc.) has some effect on accuracy, amount of training data, etc., but is not crucial to choose early; you'll develop a pipeline that allows you to retrain with different parameters and (if you are careful) different architectures.

You can browse many available datasets and models at huggingface.co 

You mention edge, so it's worth mentioning that different architectures have different memory requirements and then, within the architecture, there will generally be a series of models ranging from lower to higher memory requirements. You might want to use a larger model during initial development (since you will see success faster) but don't do it too long or you might misestimate performance on an edge-appropriate model.
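A tiny sketch of that workflow, using Ultralytics YOLO as an arbitrary example (checkpoints and dataset are placeholders): validate the same data on a large development model and an edge-sized one so the gap is visible early.

```python
# Compare a big dev model against an edge-sized one on the same data.
from ultralytics import YOLO

for ckpt in ("yolov8x.pt", "yolov8n.pt"):    # large vs edge-appropriate
    metrics = YOLO(ckpt).val(data="animals.yaml")
    print(ckpt, metrics.box.map)             # mAP50-95 for each model
```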

In terms of edge devices, there is a very real capacity-to-cost trade-off. Arduinos are very inexpensive, but are very underpowered even relative to, e.g., Raspberry Pis. The next step up is the small dedicated coprocessors such as the Coral line. Then you have the Jetson line from NVIDIA, which are extremely capable, but are priced more towards manufacturing/industrial use. 

discussion

Discussing an Open Source Camera Trap Project

Hi everyone.  This conversation took place in the Sensors thread and I'm moving it over here since it's more relevant in the camera trap thread.  [Alasdair...

discussion

Southwest Florida - Trail Cam

Hi everyone. My first post on this site - not sure if this is the proper group. I've placed a few trail cams in Florida. After 3 months and several birds I got this one....

discussion

Automatic extraction of temperature/moon phase from camera trap video

Hey everyone, I'm currently trying to automate the annotation process for some camera trap videos by extracting metadata from the files (mp4 format). I've been tasked to try...


I just noticed that TrapTagger has integrated AI reading of timestamps for videos. I haven't had a chance to try it out yet, but it sounds promising. 

Small update. I uploaded >1000 videos from a Spypoint Flex camera and TrapTagger worked really well. Another program that I'm currently interested in is Timelapse, which uses the file creation date/time. I haven't yet tried it, but it looks promising as well. 

Hi Lucy,

I now realise it is an old thread and you most likely found a solution long ago, but this might be of interest to others.

As mentioned previously, it is definitely much better to derive moon phase from the date and location. While moon phase in general is not a good proxy for illumination, the moon phase symbol on the video is even worse, as it generalises the moon cycle into a few discrete categories. For calculating moon phase you can use the suncalc package in R, but if you want a deeper look and a more detailed proxy for moonlight intensity, I wrote a paper on it 

with an accompanying R package called moonlit 

When it comes to temperature, I also agree that what is recorded in the camera is often very inconsistent, so unless you have multiple cameras to average your measurements you are probably better off using something like the NCEP/NCAR Reanalysis (again, there is an R package for that). But if you insist on extracting temperature from the picture, I tried it using tesseract and wrote a description here: 

 
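For readers working in Python rather than R, rough equivalents exist. The sketch below assumes the astral package for moon phase and pytesseract for reading the temperature stamp; the crop box is a placeholder you would adjust to your camera's banner position:

```python
# Moon phase from date alone, plus OCR of a temperature stamp in a frame.
import datetime
from astral import moon
import pytesseract
from PIL import Image

# astral returns 0..27.99 over the lunar cycle (0 = new, ~14 = full) --
# coarser than the moonlit model, but better than the on-screen symbol.
print(moon.phase(datetime.date(2024, 1, 25)))

frame = Image.open("frame_0001.jpg")
banner = frame.crop((0, frame.height - 40, 300, frame.height))  # placeholder
text = pytesseract.image_to_string(banner, config="--psm 7")    # single line
print(text)  # e.g. "23 C"; parse the digits from this string
```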

Good luck!
