Group

Camera Traps / Feed

Looking for a place to discuss camera trap troubleshooting, compare models, collaborate with members working with other technologies like machine learning and bioacoustics, or share and exchange data from your camera trap research? Get involved in our Camera Traps group! All are welcome whether you are new to camera trapping, have expertise from the field to share, or are curious about how your skill sets can help those working with camera traps. 

discussion

AI triggered camera trap - monkeys

I'd like to share the first monkeys captured by AI from far away at the Lajuma Research Center. In cooperation with Nico Lubcker, the curator at the Lajuma Research Center, South Africa....

7 0

This is ChatGPT's answer:

"Here’s a rough idea of the number of parameters in different types of models commonly used in Edge Impulse:

  1. Neural Networks for Classification (e.g., MLP, CNN)
    • Small, shallow models: These can contain as few as 10,000 to 50,000 parameters when dealing with simple tasks like basic image recognition or sensor data classification.
    • Medium complexity models: For more advanced tasks, such as keyword spotting or more detailed image recognition, models may contain 50,000 to 200,000 parameters.
    • Larger models (Edge Optimized CNNs): For tasks like more complex object detection, models can go up to 200,000 to 500,000 parameters while still being optimized for edge devices.
  2. Recurrent Neural Networks (RNN, LSTM)
    • These are often used for time-series data like accelerometer or audio signals. A typical RNN model for edge deployment might have 10,000 to 100,000 parameters, depending on the length and complexity of the sequences being analyzed.
  3. Transfer Learning Models (e.g., MobileNet, EfficientNet variants)
    • Transfer learning models like MobileNet, when adapted for edge deployment, are typically pruned and quantized to reduce their size. These models may contain anywhere from 1 million to 3 million parameters in their full form, but when trimmed for edge deployment, they could be reduced to 100,000 to 1 million parameters.
  4. Classical Machine Learning Models (e.g., Decision Trees, KNN)
    • These models usually don’t have parameters in the same way as deep learning models, but they are lightweight and typically optimized for embedded use. They might only use a few kilobytes of memory or less, depending on the dataset size and number of features.

Overall, the model sizes in Edge Impulse are kept small to run efficiently on microcontrollers with limited memory (usually in the range of 256 KB to 1 MB of RAM and a few megabytes of flash). The number of parameters is carefully balanced to ensure good accuracy while maintaining small memory footprints for edge deployment."

Disclaimer: I can't guarantee that all of the above is not a hallucination.

So, not even 1 million parameters. The YOLOv3-tiny model back in the day had 6 million parameters. It's my understanding that the idea behind Edge Impulse is that the model is retrained up at the edge itself. Training requires much more memory than inference, so by its nature the models would be further constrained by this.

It's important to be aware of what your model is (how advanced it is), what size it is, and how many parameters it contains when you consider your use case, so you can manage your expectations. Later I will post a few of the terrible false positives that even the very best and largest models have made, to give you an idea of just what AI is actually learning and, by extension, not learning. So if the best of the best can make terrible mistakes, what will small early models do?
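If you want to check this for whatever model you are handed, counting parameters is a one-liner in PyTorch. A minimal sketch, using torchvision's MobileNetV2 purely as a stand-in for your own model:

from torchvision.models import mobilenet_v2

model = mobilenet_v2()  # stand-in; load your own model here

# Total weights in the network
n_params = sum(p.numel() for p in model.parameters())

# float32 weights take 4 bytes each; int8-quantized weights take roughly 1 byte
print(f"{n_params:,} parameters, ~{n_params * 4 / 1e6:.1f} MB fp32, ~{n_params / 1e6:.1f} MB int8")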

In general though, a large advanced model can do much better at inference with a relatively small number of annotated training images than a really small model can, even when the small model is trained on potentially thousands more images.

For example, I decided way back in 2019 that the YOLOv3-tiny model was absolutely useless for security because of the boy-who-cried-wolf effect.

The biggest use case I see for very small models would be as a (hopefully) better replacement for a PIR sensor. I say hopefully because both PIR sensors and very small models can hit degenerate environmental conditions that cause large numbers of false positives. Past a certain threshold, any perceived gain in power usage may disappear.

I welcome Edge Impulse to correct anything above that may be untrue, as I personally have never used Edge Impulse, but I was involved in a project in which YOLOv6 was pitted against and compared with a system using Edge Impulse.

Very impressive Kim! It actually took me a minute to find them in the first picture! Are they baboons (if they aren't, then this is definitely proof that I shouldn't create any training datasets!)? Can't wait for the thermal tests!!

Cheers,

Rob

Yes, baboons. But this area has a few different primate species. Over the weekend three species were present.

Yes, the thermal ones will be spectacular. I'm wondering what other species will jump into the picture when we start with that. The 640x512 thermal camera I'm testing is really amazing. Now I'm on to testing a 1280x1024 resolution thermal camera. That's going to be super interesting! As soon as I have some nice images I'll post them.

See full post
discussion

Recycled & DIY Remote Monitoring Buoy

Hello everybody, My name is Brett Smith, and I wanted to share an open source remote monitoring buoy we have been working on in Seychelles as part of our company named "...

5 5

Hi Brett, it's great to hear from you and your ocean lab in Seychelles! I love your remote monitoring buoy project, especially the recycling of DFAD buoys and the design of custom-built ones that are full of interesting features. The expandability of the navigation buoys, enabling them to be configured as hydrophones, cameras, or water quality monitors, is a great feature. The technical achievements, such as recycling DFADs' circuits, making your own hydrophone, and designing a super-efficient system, are terrific. Moreover, it is encouraging to see the deployment of your early system, particularly the live video and water quality data you have collected. You will definitely make a positive impact on the WILDLABS community with your knowledge of electrical engineering and software development. I look forward to hearing more about your discoveries and any news you share. Keep doing what you excel at!

Thank you for sharing! 

Hi Brett,

I am interested in developing really low-cost, multi-channel underwater acoustic recorders. Can you tell me a bit more about the board and stuff you were using for yours? You can get me at chris@wilcoanalytics.org.

Thanks,

Chris

See full post
discussion

Deepfaune v1.2!

We have just released Deepfaune v1.2!! Deepfaune is freely available software that automatically classifies species in camera-trap pictures or videos collected in...

2 4

Edit: SOLVED Thanks!

Thank you so much for this awesome work! I was trying to load the v2 model the same way as in classifTools.py:

model = timm.create_model("vit_large_patch14_dinov2.lvd142m", pretrained=False, num_classes=len(class_names))

ckpt = torch.load('deepfaune-vit_large_patch14_dinov2.lvd142m.v2.pt', map_location=device)

state_dict = ckpt['state_dict']

new_state_dict = {k.replace('base_model.', ''): v for k, v in state_dict.items()}

model.load_state_dict(new_state_dict)

but it fails with this error:
Error(s) in loading state_dict for VisionTransformer:
       size mismatch for head.weight: copying a param with shape torch.Size([30, 1024]) from checkpoint, the shape in current model is torch.Size([26, 1024]).
       size mismatch for head.bias: copying a param with shape torch.Size([30]) from checkpoint, the shape in current model is torch.Size([26]).
 

Are you using a different backbone for v2? I tried BACKBONE = "vit_large_patch14_dinov2.lvd142m.v2" but that also doesn't work.

For the record now that this is here:

This error typically occurs when the wrong number of classes is given to timm.create_model. 
You should try to specify num_classes=30 manually (for this v1.2; the number can change in future versions as we add new species).
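For reference, a minimal sketch of the corrected loading code (same checkpoint and names as above, with num_classes set to match the v1.2 head):

import timm
import torch

# Deepfaune v1.2 ships a 30-class head, so the model must be created to match the checkpoint
model = timm.create_model("vit_large_patch14_dinov2.lvd142m", pretrained=False, num_classes=30)

ckpt = torch.load('deepfaune-vit_large_patch14_dinov2.lvd142m.v2.pt', map_location='cpu')
state_dict = ckpt['state_dict']
new_state_dict = {k.replace('base_model.', ''): v for k, v in state_dict.items()}
model.load_state_dict(new_state_dict)
model.eval()  # inference mode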

Also, for issues please do reach out (as Jennifer did) by email, we will be much more responsive. More general questions/discussions can be asked here, I will reply asap. 

See full post
discussion

Your thoughts on this camera

Dear camera traps community, I hope this is the right place to share, and I believe this topic will be of interest to you. I'd like to introduce you to Tikee (edit...

7 1
See full post
discussion

Integrating Satellite Data with on-the-ground conservation tech

Hi All - I'm new to WILDLABS. I'm Seamus, I work at Planet on our impact and sustainability team, focusing on applications of our satellite data to biodiversity. We've seen...

2 5

Hi, Seamus—welcome to WILDLABS! 

I work on building ground and sea-truthing technologies for satellite remote sensing related to conservation tech. For example, we have used data collected from Smartfins (thermometers on surfboard fins) to validate Landsat TIRS thermal data and other low-cost tech such as mini- and sensing Secchi disks to evaluate satellite ocean color data. I don't work in conservation tech in the strictest sense (more accurate to say coastal water resources/resilience), but of course it's related, so thought I'd chime in. 

https://aslopubs.onlinelibrary.wiley.com/doi/full/10.1002/lom3.10624
https://www.sciencedirect.com/science/article/pii/S0272771421004996

Sounds like a great position at Planet, and an important pursuit—good luck, and I hope our paths cross!

Hi Phil, 

Thanks for sharing - this is really interesting work! 

Seems like a cool integration of ground-based (or ocean-based) data and satellite data, with a community science aspect as well. I look forward to giving this paper a full read. 

Thanks, 
Seamus

 

See full post
discussion

setting up a network of cameras connected to a server via WIFI

We need to set up a wildlife monitoring network based on camera traps in Doñana National Park, Spain (see also wildlifeobservatory.org). We are interested in setting...

13 0

Cool thread!

I will be testing Reolink Wi-Fi cameras in combination with solar-powered TP-Link long-range Wi-Fi antennas/repeaters later this field season, for monitoring arctic fox dens at our remote off-grid site in Greenland. The long-range Wi-Fi antennas are rather power hungry, but with sufficient solar panel and battery capacity I am hopeful it will work.
I am looking forward to exploring the links and hints above after the field season.
Cheers,

Dear team,

I am new to WildLabs. It's great to find all the interesting and informative discussions.

We are exploring deploying "connected" camera traps in wetlands (2 hectares for the pilot, 300 hectares for the next phase) and our goals are:
- Set up the camera traps to upload images (via FTP or HTTP POST) to cloud storage every 5 minutes (see the sketch after this list)
- Analyze the images using AI to remove all false positives (images without animals)
- Deploy off-the-shelf camera traps that support WiFi (preferred) or 4G cellular
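
For goal one, the kind of uploader we have in mind is roughly the following Python sketch (the endpoint URL and folder path are placeholders, not a real service):

import time
import requests
from pathlib import Path

WATCH_DIR = Path("/mnt/camera_ftp")          # hypothetical folder the cameras FTP into
ENDPOINT = "https://example.com/api/upload"  # placeholder cloud storage endpoint

uploaded = set()
while True:
    for img in WATCH_DIR.glob("*.jpg"):
        if img.name not in uploaded:
            with open(img, "rb") as f:
                # HTTP POST one image; authentication omitted for brevity
                requests.post(ENDPOINT, files={"file": (img.name, f, "image/jpeg")})
            uploaded.add(img.name)
    time.sleep(300)  # poll every 5 minutes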

Questions:
a. Any suggested camera traps that support WiFi (preferred) or 4G cellular?
b. What WiFi infrastructure is recommended to connect camera traps at 200 m to 500 m distances?
c. Does anyone have experience using MANET (mobile ad-hoc network)?

Cheers,
Andre

 

See full post
discussion

Ideas for easy/fast maintenance of arboreal camera traps 

Hi, A section of my upcoming project will include the deployment of arboreal camera traps up large fruiting trees in primary rainforest of PNG. It would be ideal if these camera...

11 1

Hi Ben,

I would be interested to see if the Instant Detect 2.0 camera system might be useful for this.

The cameras can transmit thumbnails of the captured images over LoRa radio to a Base Station. You can then see all the captured images at this Base Station, as well as each camera's battery and memory information (device health). In addition, you can also change camera settings from the Base Station, so you would not need to reclimb the trees to change the PIR sensitivity from high to medium, for instance.

The Instant Detect 2.0 cameras also have an external power port so a cable could be run to the ground to a DC 12V battery for long term power.

If you wanted to, you could also connect the Base Station to the Cloud using satellite connectivity, so that you can monitor the whole system remotely, but this will add cost and increase the Base Station's power usage.

I'd be keen to hear your thoughts,

Thanks,

Sam 

I'm curious to know if anyone has reliably connected to one of these wifi-enabled cameras set in the rainforest canopy. In some areas, the understory or epiphytes growing on the tree are dense enough to block the signal. This is one reason that we've stuck to climbing into the canopy to check cameras.

I've deployed and trained others to climb and deploy arboreal cameras in several countries and I agree with Lucy's approach. Leaving a thin cord (like a throw line) to quicken setting a rope in the future is extremely helpful. In rainforest areas that have a lot of curious animals and arboreal insects, visually inspecting your camera trap is important. I agree that climbing into the canopy takes effort, but if you are in the tree yourself, you can control the camera placement, the direction it is facing, its field of view, and prune small branches that may trigger the camera. It would be interesting to have a method to lower a camera from a tree, but you would need a lot of extra gear to install this system for surveys with 15+ camera traps.

See full post
discussion

Camera trap function in heat waves

Camera trap functionality during heat waves: Hello my fellow camera trappers! Our cameras have suffered this summer during southern Oregon's heat waves. We have had a couple south-...

5 0

Hi Karen. 

It might not necessarily be the high heat that is causing problems with the batteries. Lithium should be quite durable at the temperatures experienced inside the enclosure. Instead, the problem could well be condensation. If the sun shines directly on a sealed enclosure, it can heat up internally, evaporating any moisture inside and raising the dew point. If the external temperature then cools, the internal air that comes into contact with the cooler enclosure walls can form condensation droplets. If you've ever left a glass jar in the sun, you've seen this phenomenon.

The problem is that if any condensation gets on the internal camera circuit board, it's possible for it to conduct enough current to increase the discharge rate of the batteries or, in some cases, take the whole device down. We've seen this happen in some of the field devices we deploy, and we have since protected critical circuitry against condensation.

To avoid this situation, I'd recommend trying to keep your camera traps out of direct sunlight and putting a lot of desiccant inside before sealing them. Not sure if this solves your problem, but hopefully it helps.
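To put rough numbers on the dew point argument, here is a small Python sketch using the Magnus approximation (the temperature and humidity values are invented for illustration):

import math

def dew_point_c(temp_c, rh_percent):
    # Magnus approximation for dew point in degrees Celsius
    a, b = 17.62, 243.12
    gamma = (a * temp_c) / (b + temp_c) + math.log(rh_percent / 100.0)
    return (b * gamma) / (a - gamma)

# Air inside a sun-heated enclosure at 45 C, even at a modest 25% RH:
print(round(dew_point_c(45, 25), 1))  # ~20.4

# Any internal surface that cools below ~20 C overnight will collect condensation.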

This is good advice. Most lithium batteries should operate fine within a -10°C to 60°C temperature range.

Personally I've only ever encountered issues from cold weather conditions.

Non-rechargeable batteries tend to have even wider operating temperatures.

LFP Rechargeable batteries also have better temp ranges.

A word of caution regarding desiccant packs: make sure they aren't touching PCBs. We've seen cases where the packs absorbed moisture but then became damp enough to short out components.

Could also be a bad batch of batteries. Make sure to pre-charge them if using rechargeable batteries, and measure them while they are connected to a load. Bad batteries will often measure fine voltage-wise when not connected to anything, but once a load is introduced their voltages drop significantly more than good batteries'.

 

Hi Karen,

Yeah the discharge curves of lithium cells tend to be very stable for a long time and then drop sharply at the end. Akiba and Brett's advice below re: condensation prevention is another great recommendation and could well be the root of your problem. Let us know if you have any questions or if you do any testing etc., as this sort of information is gold!

All the best,

Rob

See full post
discussion

Funding for Camera Trap Projects

Hello everyone; I'm a current Peace Corps volunteer serving in South America and wanted to start a camera trap program. I am working with a local nonprofit. This idea would use...

12 0

Given that I am a Peace Corps volunteer and research is the secondary, though still guiding, objective of my work, I was able to apply for grants from private companies such as the Mitsubishi Corporation - the only grant I applied to directly, though others may be applicable.

I was lucky enough to leverage my relationships with nonprofits that I work with to obtain camera traps. I have 7 traps in total - 4 of which were donated by the national park agency of the country I'm working in, and 3 that I bought with my own money. I was in the process of obtaining another 6 from an environmental nonprofit, but due to politics within the national park agency that was not a viable option. For the study area I had planned (20,000 hectares), I can cover the area I want, but over a longer period. The more camera traps I had, the faster the study would be completed, but there was a rush on my end.

I have posted grants that I found in my process of searching below: 

GRANTS

https://ptes.org/grants/apply-grant/worldwide-grant-criteria/2024: PTES will be accepting new applications for Conservation Insight Grants in 2024. The deadline for applications is Sunday 25th August at 23.59 (UK time). Please ensure that all reference letters are sent by the referees by the same deadline. The applications will be assessed at a meeting in mid-October and applicants will be notified whether they have been successful or not by the end of October. Projects should start no earlier than January 2025.
 

https://ideawild.org/application/ - IDEA WILD awards small equipment grants.

We receive equipment requests daily, 50 -70 requests/month, and prioritize them based on conservation impact, recipient need, and project location. Additional consideration is given to projects in areas identified as critical biodiversity hotspots. Our grants serve areas where support is needed most and advance biological research, conservation education, community outreach, conservation management, field training and professional development.

https://www.neotropicalbirdclub.org/conservation/conservation-fund/ - Grants of $1,500 and $3,000 (US Dollars) are available subject to certain conditions. Grants are occasionally awarded for higher amounts (up to $5000). Please contact the Club (see below) to confirm the availability and criteria for higher amount awards.
 

https://swbg-conservationfund.org/en/grant-seekers/ 

https://www.speciesconservation.org/grants/ - DEADLINE FEB 29TH  - The Mohamed bin Zayed Species Conservation Fund provides financial support in the form of small grants of less than $25,000 to conservation projects globally. These small grants are as much for the species as they are for the conservationists and organizations working so passionately to protect them.

https://whitleyaward.org/ - You might be able to apply for this and connect it to our project  - The Whitley Awards provide funding, training and profile to mid-career conservationists who lead grassroots projects in the Global South, benefitting wildlife, habitats and people. Whitley Award winners receive £40,000 and join an active network of 200 alumni across 80 countries and successful projects can go on to receive long-term Continuation Funding, as well as WFN’s top prize – the Whitley Gold Award – worth £100,000.

 https://www.wildlifeacoustics.com/grant-program - Wildlife Acoustics may award up to $12,000 worth of eligible products to selected grant recipients worldwide every quarter. Individual applicants may request products and software licenses totaling up to $4,000 quarterly. We will highlight grant recipients and their projects on our website and social media platforms.

https://zaa.org/conservation_grants - The Zoological Association of America (ZAA) offers grants up to $5,000 to support the conservation of endangered species and their habitats globally. Proposals may include field programs, studies, and multidisciplinary approaches to species conservation, habitat preservation, and biodiversity enhancement. The grants are available to principal investigators associated with recognized institutions like accredited zoos, academic institutions, conservation, or non-profit organizations. Proposals can be submitted electronically from January 1, 2024, to April 15, 2024, with decisions made by June 14, 2024.

Hej Vasilios, thank you so much for this list with notes! Much appreciated. Thank you also for sharing your experiences with the Mitsubishi Corporation, your nonprofit, and other nonprofits. They are very enlightening! Hopefully also for other grant seekers!

See full post
discussion

Camera traps for nesting birds

Hi All, We have a team that wants to monitor bird nests with camera traps. They have piloted the Blazevideo 24MP*1296 pixel video cameras with no night vision and a maximum...

2 2

Hi @Chelsea_Smith 

I am not familiar with the Blazevideo cameras - any chance you have a model number? All I can find are models with night vision (e.g. the A323). I personally like the Solaris 'Weapon', although the trigger time might be slower than the Blazevideo (at least going by what was mentioned in the A323 spec). I'd be interested to hear whether the trigger time actually is that fast for the model that was tested. Also, how close are the cameras going to be to the nests? Some trail cameras aren't so great at focal distances closer than a metre or two...

Re: security, the most common approach is to use a steel security box (e.g. choose from the drop-down menu on this page for the A323 security box). For the Solaris, email alex@solaristrailcameras.com to see if they have something suitable. Most of these types of boxes have cutouts for a python security cable to pass through so you can secure the box (and camera) to something like a pole. You could also consider camouflaging the camera.

More than happy to discuss further.

Cheers,

Rob

 

See full post
discussion

Camera traps for bird bands

Hi all, What a wonderful forum I've come across during my research!  I am a zookeeper currently working on a project involving remote weights of some of our bird...

16 5

Hello Steven, I read your comment and I am interested in knowing more about your device that allows measuring the weight of a bird while it sits on a perch. I would like to use a similar approach to record weights and corresponding pictures of ringed birds (waders). Is it possible to have details or suggestions for a DIY project? Thank you

I'm a biologist up in Alaska and have a couple of options that might work for you. We have a time-lapse camera we have developed for 'peering' into the nest cup of shorebirds and passerines (focus is set at about 6 inches with a 150 deg lens), and a small mammal camera designed to sit on a 3 gallon bucket and look downward (about an 11 in focus). I'm curious how these might perform in this scenario, so am happy to send a couple for you to try out. Please shoot me an email at my work email address if you are interested: christopher_latty@fws.gov

See full post
Link

AI Based Animal Detection Demo App for iPhone

In this post, I describe an animal recognition demonstration app I developed for the iPhone. The “MegaDetector-Demo” app uses the latest “MegaDetector” animal detector model from PyTorch-Wildlife to identify animals, people and vehicles in a live video feed from the iPhone...

0
discussion

Camera traps

Just thinking out loud. Is there a camera trap that can be set up in the tree in such a way that it takes pictures in front and behind the tree, like it has two lenses; one in...

6 1

Support for two cameras (or more) can be achieved by using two USB-powered cameras if you have the right platform. Then, in principle, you can place each camera up to 2 m from the processor.

I'll be experimenting with such a setup as soon as my USB global shutter camera arrives. Global shutter cameras work really well with the AI computer vision that my system uses. Quite a different power category than microcontrollers, though.


Currently I'm testing a Raspberry Pi based camera system that captures images and video, scanning 3x cameras with AI triggering.
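
For anyone curious what the multi-camera side looks like, here is a minimal OpenCV sketch that polls several USB cameras from one small computer (the device indices are assumptions; check your own /dev/video* numbering):

import cv2

# Open two USB cameras; add more indices for extra cameras
cams = [cv2.VideoCapture(0), cv2.VideoCapture(1)]

for i, cam in enumerate(cams):
    ok, frame = cam.read()
    if ok:
        # Hand each frame to your detector / AI trigger here
        cv2.imwrite(f"camera_{i}.jpg", frame)

for cam in cams:
    cam.release()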

See full post
discussion

Rapid Camera Trap labeling with ChatGPT

Hi all, I've published a blog post on rapidly labeling camera trap data using ChatGPT for species identification and a simple object detection model to get the bounding boxes....

3 6

super interesting, thank you for sharing! definitely will watch!

Really interesting, I will take a look

See full post
discussion

Integrating AI models with camera trap management applications

Hi All, As part of extending the work we are doing at the BearID Project, we are thinking about integrating the models we are developing into open source camera trap projects. This...

33 5

Hi All,

For an update on this topic, check out my article on LinkedIn about bringing BearID to Ecuador. We already have some researchers there using EcoAssist (thank you @pvlun for adding the Peru Amazon models!) and are working on a "biodiversity server" running TRAPPER on an Ampere server at the Universidad San Francisco de Quito (USFQ)!

See full post
discussion

Effectiveness of Camera Traps on Estimating Small Mammals Home Range. 

Hi Everyone, I would like to know how effective camera traps can be in studying the home range of small mammals in fragmented habitats. I would appreciate getting some...

2 1

 

Hello Adventina,

I think camera traps may not be very useful for studying the home range of small mammals, as the animals might appear too small on camera to identify distinct individuals. However, I'm not entirely sure. Below is a link that I believe will be helpful.

 

See full post
discussion

Move BON Development: Follow up discussion

Hey Biologging Community! We just launched a new initiative to mobilize animal tracking data in support of national and global scale conservation goals (learn more here!). If you...

6 6

Hi Talia! 

I feel like the topic is so broad that it might help to put some constraints around things, see what works, and then broaden those out. I have a lot of ideas regarding the data monitoring and collection side based on the other sensor and observation networks we've set up in the past. 

There may also be some potential scope to incorporate things like data collection and integrated monitoring to the Build Your Own Datalogger series where the system is updated to feed data into the observation network. 

It'd probably take a bit of discussion and coordination. Let me know if interested. I'm fine to jump on a call or discuss via email too.

@cmwainaina please take a look

See full post
Link

Rapid Camera Trap labeling with ChatGPT

Hi all, I've published a blog post on rapidly labeling camera trap data using ChatGPT for species identification and a simple object detection model to get the bounding boxes. While presented in Edge Impulse, this approach can be replicated outside the platform using Python...

1
discussion

Rat detection, review advice

Hi everyone, I'm new to this site! A bit of info about my camera trapping project: I deployed a combo of Reconyx Hyperfire and Browning cameras on ~300 Brown Booby nests in...

4 1

I would separate your question into two, re: (a) platforms and (b) AI.

Re: platforms... 

One of the advantages (the main advantage?) of working with a cloud-based system is that the storage and data management issues are handled for you.  So if a cloud-based system is on the menu for you, check out any of the cloud-based systems I list here (Wildlife Insights, Agouti, TrapTagger, TRAPPER, WildTrax, Animl, etc.).

But cloud-based systems are not a panacea, and there are lots of reasons folks choose to work locally. Often working locally is preferred due to bandwidth, data provenance, or being absolutely sure that your images are archived somewhere you can still access them in five years. Also, when it comes to pushing lots of images in front of your eyes quickly, nothing will beat a local tool, and among local tools, Timelapse is still the most widely used AFAIK and the fastest for most users. I'll get to this below, but AI might just not help you in this case, so my best recommendation is to invest in really getting to know all of Timelapse's bells and whistles, especially the "quick paste" tool, and all of the keyboard shortcuts. If you're using the mouse to navigate, you've lost the battle already, and 30 minutes learning keyboard shortcuts is worth a week of tinkering with AI.

FWIW in general, no matter what system you use, you'll have to do something with .csv files in R, and overall I wouldn't expect that to be more or less difficult depending on the system you choose.

Re: AI...

@carlybatist is correct, I loooooove difficult MegaDetector cases.  Based on your description, it may be the case that small rodents are too difficult, but don't give up hope quite yet; I have a bag of tricks I try in these more difficult cases.  I will reach out by email to ask you for some images, and if any of the tricks work, I will post back to this thread.

It also couldn't hurt to try a couple other systems in parallel, just to see what happens, but I'm not optimistic that any other systems will work if "vanilla MegaDetector" doesn't work, in part because the systems that I think are most relevant themselves depend on MegaDetector.  But FWIW, the two systems I would try - even if cloud-based systems are not on the menu for you - are Wildlife Insights and Animl.  There are other systems out there as per above; I mention Animl specifically because it has grown up around small mammal monitoring on islands, so, maybe a better-than-average shot of doing something reasonable in your case.

IMO the best path to pursue is "Dan's bag of MegaDetector tricks", but I want to set expectations: small nocturnal rodents remain difficult for AI and the odds are not in our favor.

I'm being quite daring by making statements on a forum that is frequented by Dan Morris (obviously in deference to his extensive knowledge on the subject :-) ), but here goes. Likely the version of MegaDetector that you used was based on YOLOv5. YOLOv5 came out around the same time as YOLOv4, which from memory was around the end of 2019. Again from memory, I think the performance of YOLOv5 was similar to that of YOLOv4. The versions that came out after this (somewhat in order: Scaled-YOLOv4, YOLOR, then YOLOv7 and YOLOv6, with YOLOv8 somewhere in there) are much better at detecting smaller objects and in darker conditions. Dramatically so.

PyTorch-Wildlife will be releasing a beta of a version that uses YOLOv9. This could possibly help with your problem.

Coincidentally, I'm playing with MegaDetector right now for detecting and recording mice. This was from this morning:

Mouse with PyTorch Wildlife V5
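
In the meantime, if you want to try a newer YOLO generation on a few of your own images, here is a minimal sketch using the ultralytics package (the weights file and image path are placeholders; swap in whichever variant you want to test):

from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # placeholder weights; other YOLO variants load the same way

# Run detection on one camera trap image with a low confidence threshold
results = model.predict("night_rodent_image.jpg", conf=0.2)

for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)  # class id, confidence, bounding box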

See full post
discussion

Background subtraction to improve camera trap AI

Hello All, In my work I use a lot of camera traps to record videos of wildlife. It's time-consuming to select the videos with wildlife; I tend to have hundreds of videos where nothing...

4 1

Great paper. Have you tried filtering with a neural network as an alternative to a threshold on the minimum size in step (5)? My thought is that in many cases there is foreground vegetation that will appear relatively large. If, though, one trained a DNN on a sufficient number of labeled masks it’s reasonable to expect it to be at least a decent discriminator. 

With ML the problem is always getting the labeled data, but you could potentially bootstrap a DNN with the current output of your (5). That would surely share the same size bias as your current solution, but it would be interesting as a first step just to see what happens to your false positive rate. The second step would, I think, probably be creating synthetic data with scaled-up vegetation and/or scaled-down wildlife to fine-tune the NN to respond to features of all scales.
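
For anyone following along, here is a minimal OpenCV sketch of the pipeline being discussed: background subtraction followed by a minimum-area filter on the foreground blobs (the thresholds and file name are illustrative, not the paper's values):

import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=False)

cap = cv2.VideoCapture("camera_trap_clip.mp4")  # placeholder input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Remove speckle noise before measuring blob sizes
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if any(cv2.contourArea(c) > 2000 for c in contours):
        pass  # candidate wildlife frame: save it or pass it to a classifier
cap.release()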

We evaluated background subtraction as part of our pipeline for Zamba Cloud a few years ago. It's going to depend a lot on your backgrounds, but we found that for jungle/forest scenes in videos there was too much movement/noise in the backgrounds for the usual methods to add substantial benefit to the accuracy of our classifications.

To classify frames, we run a very fast distilled version of megadetector to select frames from videos that are likely to have animals and then classify using a CNN. We found this to be much more computationally efficient with respect to accuracy gain versus RCNNs or some dual stream approaches that try to account for movement with something like optical flow. 

See full post
discussion

Camera trap model help

Hello, I am working on a project to monitor passerines on a remote uninhabited island using camera traps. The cameras would be visited once per year. I've been using Bushnell and...

2 0

I’m familiar with at least three cameras that may fit your requirements.  The list below also contains links to my overview and teardowns of these cameras.  

Browning Elite HP5 series 

Bushnell Core DS 4K (low glow IR flash).  

GardePro T5WF.  

Per your requirements: 

-programmable specific time to operate. For example, I would like to only trigger videos between 6-11am and 4-6pm: All three cameras almost support this. They have a single “operation window”, i.e. you could set the camera to operate 6 am to 6 pm (or 4 pm to 11 am).

-solar enabled (either internal or with external panels): Each of these cameras has an external power supply input which can be used with a 12V solar panel. Given the humid conditions in the sub-tropics, you will want to keep moisture out of this connector when the solar panel is attached. The connectors on the Browning and GardePro are not watertight, but you could use a dab of silicone adhesive around the connector to seal them. The Bushnell has a genuinely waterproof connector, but you'll need to get the Bushnell solar adapter. In any case, I would do something to keep heavy rain from falling directly on the camera. A steel security box will do the trick, as will a little “diverter roof”.

-lower number of batteries required (12 like a Reconyx is too many): The Browning and GardePro use 8xAA batteries; the Bushnell 6xAA.

-support higher capacity SD cards (i.e. 512 GB): They all support 512 GB cards.

-screen or bluetooth connection to phone to see image easily while setting up: Oops – none of these cameras supports this feature. The Browning (2") and Bushnell (1.5") models have screens that you can preview the scene on as placed, but these are awkward to view when the camera is set low, and without getting your head in the way. The GardePro also has a preview screen, but to view it you have to swing open the whole case, which is not the way it's deployed. A hack we use to aim cameras is to bring along a point-and-shoot camera (or smartphone), place it over the lens of the trail camera, snap a picture, view, and repeat till the set is perfect.

-ability to replace batteries and SD card without removing camera from a tripod: You can service the SD cards in all of these cameras with the tripod attached. The Bushnell and GardePro batteries can be serviced while still attached to the tripod. The Browning requires that you remove the camera from the tripod to get at the batteries.

Some additional factors: 

In our experience, the Brownings have the best image quality. If you use the Browning Elite HP5, check out my firmware feature additions, which include a fix for a bug that can sometimes corrupt high speed/high capacity SD cards. The bug is rare, but it does happen on some camera/SD card pairs, and would be especially painful to encounter in a long-term deployment.

It doesn’t look like you are interested in any night-time captures, so I guess the flash type doesn’t matter.  Also, there is effectively no advantage of the second (night time) image sensor in the Bushnell.  

For such a long term deployment, I'd recommend narrowing your choices to a couple/few, and buying a full set of each to test in something like your full deployment. 

Hope this helps. 

 

I was going to suggest the GardePro too. The model available in Australia is good. I have found that the download from the SD card to the linked phone can take a while, so be prepared to sit around while it happens.

See full post