Camera traps have been a key part of the conservation toolkit for decades. Remotely triggered video or still cameras allow researchers and managers to monitor cryptic species, survey populations, and support enforcement responses by documenting illegal activities. Increasingly, machine learning is being implemented to automate the processing of data generated by camera traps.
A recent study showed that, despite camera traps being well-established and widely used tools in conservation, progress in their development has plateaued since the emergence of the modern model in the mid-2000s, leaving users struggling with many of the same issues they faced a decade ago. That manufacturer ratings have not improved over time, despite technological advancements, demonstrates the need for a new generation of innovative conservation camera traps. Join this group and explore existing efforts, established needs, and what next-generation camera traps might look like - including the integration of AI for data processing through initiatives like Wildlife Insights and Wild Me.
Group Highlights:
Our past Tech Tutors seasons featured multiple episodes for experienced and new camera trappers. How Do I Repair My Camera Traps? featured WILDLABS members Laure Joanny, Alistair Stewart, and Rob Appleby, and shared many troubleshooting and DIY resources for common issues.
For camera trap users looking to incorporate machine learning into the data analysis process, Sara Beery's How do I get started using machine learning for my camera traps? is an incredible resource discussing the user-friendly tool MegaDetector.
And for those who are new to camera trapping, Marcella Kelly's How do I choose the right camera trap(s) based on interests, goals, and species? will help you make important decisions based on factors like species, environment, power, durability, and more.
Finally, for an in-depth conversation on camera trap hardware and software, check out the Camera Traps Virtual Meetup featuring Sara Beery, Roland Kays, and Sam Seccombe.
And while you're here, be sure to stop by the camera trap community's collaborative troubleshooting data bank, where we're compiling common problems with the goal of creating a consistent place to exchange tips and tricks!
Header photo: Stephanie O'Donnell
- @Freaklabs | He/Him
Freaklabs
I'm an engineer and product designer working in conservation technology. I specialize in technology for landscape restoration and wildlife behavioral ecology.
- @amatsika | She/her
A biodiversity conservation researcher with immense interest in agroecology, alien and invasive species, human-wildlife conflict, species conservation, and ecosystem sustenance and restoration.
African Parks
I am a PhD student in Korea, and I am interested in self-adaptive software, human-in-the-loop systems, and AI modeling.
Ecological and Spatial Data Scientist
Coach, consultant, volunteer, and donor
Soil science, droughts' effects on poverty, human rights advocacy, wage equality in Europe, drones in digital education
I'm a young agronomist/expert in the planning and management of protected areas at the start of my career in the conservation of terrestrial flora and fauna. I have good experience in plant management and production. My passion for biodiversity conservation has led me to acquire
- @KC | she/her
- @Cteodorski | He/Him
Tech guy turned conservation nerd. Prototyping LoRa + solar sensor systems to help monitor wildlife where Wi-Fi and roads don’t reach.
Worked as a mechanical engineer for a defence company, then as a software engineer, then for a research lab specialising in underwater robotics.
Conservation biologist in the Daintree World Heritage lowland rainforest. We operate a small but well equipped field research station at Cape Tribulation, Far North Queensland. So many projects, so few people!
Read about the advice AI specialists provided in AI for Conservation Office Hours 2025 earlier this year, and reflect on how it has helped projects so far.
6 August 2025
If you're a Post-Doctoral Fellow, a PhD student, or a member of the research staff interested in applying your computational skills to support active research publications, please read on to learn about the Cross-...
5 August 2025
New opportunity to work with moths, camera light traps and citizen science in Germany.
5 August 2025
Proud to share our paper introducing our underwater camera trap, a solution for automating the production of underwater images and videos of amphibians, reptiles, invertebrates, mammals...
2 July 2025
I put together some initial experiences deploying the new SpeciesNet classifier on 37,000 images from a Namibian camera trap dataset and hope that sharing initial impressions might be helpful to others.
23 April 2025
A nice resource from GBIF that addresses the data interoperability challenge.
4 April 2025
Conservation International is proud to announce the launch of the Nature Tech for Biodiversity Sector Map, developed in partnership with the Nature Tech Collective!
1 April 2025
The FLIR ONE thermal camera is a compact and portable thermal imaging device capable of detecting heat signatures in diverse environments. This report explores its application in locating wild animals across large areas...
27 March 2025
WWF's Arctic Community Wildlife Grants program supports conservation, stewardship, and research initiatives that focus on coastal Arctic ecology, community sustainability, and priority Arctic wildlife, including polar...
7 March 2025
The Smithsonian’s National Zoo and Conservation Biology Institute (SNZCBI) is seeking an intern to assist with multiple projects related to conservation technology for wildlife monitoring. SNZCBI scientists collect data...
3 March 2025
NewtCAM is an underwater camera trap. Devices are being deployed worldwide within the framework of the CAMPHIBIAN project and thanks to the support of our kind early users. Here is an outcome from the UK.
24 February 2025
Osa Conservation is launching our inaugural cohort of the ‘Susan Wojcicki Research Fellowship’ for 2025, worth up to $15,000 per awardee (award value dependent on project length and number of awards given each year)....
10 February 2025
Canopy access or tree climbing resources for arboreal research
12 September 2024 8:51pm
9 January 2025 10:50pm
Hi Dominique,
Thanks for your responses and congratulations on getting trained!
I can see that speaking directly with a climbing professional could be the most beneficial because what climbing methods and equipment you may need will depend very much on individual project goals and budgets. Did you end up speaking with your trainers about your field research goals and what climbing methods may be best for you?
9 January 2025 11:27pm
Hi Mark, thanks for responding. I think you've identified one of the most difficult parts of research climbing: maintaining your climbing skills and knowledge between field sessions.
My husband is an experienced arborist and practices his skills almost daily. I am not an arborist, so I schedule climbing time to keep my abilities fresh and my husband is there to assist. But I know it's difficult for individual researchers to practice on their own and they should only be climbing alone if experienced and not in a remote area.
However, it's possible to train researchers to safely climb in the field for multiple field sessions. My husband and I trained a group of climbers in Cameroon in January 2024. The goal was to train four climbers who would go into the remote rainforest for several weeks and set up camera traps. They would deploy and retrieve arboreal cameras at different survey locations over two years. We needed to train the teams to operate safely and independently (without an instructor present) in very remote areas.
To train them sufficiently, my husband and I spent 1 month in Cameroon with the field team. We did a few days of basic training at a location near the city and then went with them on their initial camera deployment where we continued field training for 2.5 - 3 weeks. Before going to Cameroon, we had several discussions to determine what climbing method and equipment would best meet their data collection goals and were appropriate for their field site and conditions. We taught them basic rescue scenarios. We also set a climbing practice schedule for the team to maintain their climbing and rescue skills. We strongly emphasized to their management that the field team needed access to the climbing gear and needed to follow the practice schedule. Since the training, the team successfully finished two other camera trap surveys and is planning their third.
This was a lot of up-front training and cost. However, these climbers are now operating on their own and can continue operating for the foreseeable future. I think a big reason is that they received extensive training tailored to their needs. General tree-climbing courses are great for learning the basics, but they'll never be a substitute for in-field, tailored training.
What the mice get up to at night
6 January 2025 8:06am
7 January 2025 1:09pm
And I see now they can walk vertically up walls like Spider-Man.
Joint ecoacoustic & camera-trapping project in Indonesia!
1 August 2024 5:29pm
9 December 2024 3:41am
Awesome Carly, thanks. Yes helps a lot. Those all sound like big improvements over the hardware we're currently working with.
13 December 2024 7:42pm
Hi Carly,
That would be great! Send me a message and we'll put something together after the holidays.
4 January 2025 5:27pm
Hello Carly,
Congratulations for this project!
I am studying right now a second MA in Environment Management. I would like to do my MA thesis project about these technologies (bioacoustics and camera traps). I wonder if you would be interested in a collaboration with me on that ?
I already have a professional experience so I think that my contribution could be interesting for you.
Thank you in advance for answering,
Simon
Pytorch Wildlife v5 detection in thermal
28 December 2024 10:35am
29 December 2024 8:08pm
I added plain old motion detection because megadetector v5 was not working well with the smaller rat images and in thermal.
This works really well:
Also, I can see really easily where the rat comes into the shed, see this series:
Just visible
Clearly visible.
So now I have a way to build up a great thermal dataset for training :-)
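For anyone wanting to try the same trick, here is a minimal background-subtraction sketch with OpenCV (the file name, thresholds, and minimum blob area are placeholders to tune for your own footage):

import cv2

cap = cv2.VideoCapture("thermal_clip.mp4")  # placeholder input file
bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                                  # foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 50:                         # tune to rat-sized blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 1)
    cv2.imshow("motion", frame)
    if cv2.waitKey(1) == 27:                                # Esc quits
        break

cap.release()
cv2.destroyAllWindows()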
Solar powered Raspberry Pi 5
19 December 2024 11:40am
20 December 2024 2:37pm
Actually, you can use any BMS as long as it supports 1S, and that will be sufficient. However, I recommend connecting the batteries in parallel to increase the capacity and using just one BMS. Everything should run through the power output of the BMS.
That said, don’t rely too much on BMS units from China, as they might not perform well in the long run. I suggest getting a high-quality one or keeping a spare on hand.
20 December 2024 4:45pm
Don't they all come from China?
The one in the picture I can find on aliexpress for 2.53 euros.
I'm not sure how you would get one that doesn't come from China.
In any case, I know what to search for now, that's very helpful. Thank you.
What is the 1S thing you mention above?
If you have any links as to what you consider a high quality one that would be great!
21 December 2024 1:20am
Actually, you can source the product from anywhere, but I’m not very confident in the quality of products from China. That doesn’t mean products from China are bad—it might just be my bad luck. However, trust me, they can work just as well; you just need to ensure proper protection.
For 1S, it refers to the number of battery cells supported. A 1S configuration means a single battery cell (nominal 3.2 V for LiFePO4). Of course, you must not confuse LiFePO4 and Li-Ion BMSs, as the two types are not interchangeable and cannot be used together.
***However, you can use a power bank board designed for Li-Ion to draw power from the battery for use. But you must not charge the LiFePO4 battery directly without passing through a BMS; otherwise, your battery could be damaged.***
Announcement of Project SPARROW
18 December 2024 8:01pm
3 January 2025 6:48pm
Postdoc on camera trapping, remote sensing, and AI for wildlife studies
18 December 2024 2:14pm
Christmas wish list
16 December 2024 11:22am
16 December 2024 1:25pm
Oh there we go, you are selling this setup? Really cool!!!
Great idea, just posted my requirements here:
let's see if I can find my dream solution.
16 December 2024 1:33pm
Yes, I have these. I'm pretty sure that my company is the first company in the world to sell products running on Raspberry Pi 5 in secure boot mode as well :-)
I responded to your wish list. From your description a modified camera like a HIKvision turret might be able to do it for you.
16 December 2024 2:24pm
Great, these security cameras might be interesting for monitoring crop development and maybe other bigger pests like boars or other herbivorous animals that could eventually go into the fields, but this is not what I'm trying to solve now. What I'm also interested in is on-the-go weed species monitoring, like on a robot, to create a high-resolution map of weed infestation in a field (10-20 hectares). There such a modified camera could eventually make sense. Thanks for the ideas!!
Postdoc on camera trapping, remote sensing, and AI for wildlife studies
16 December 2024 9:52am
Mirror images - to annotate or not?
5 December 2024 8:32pm
7 December 2024 3:18pm
I will send you a DM on LinkedIn and try to find a time to chat
8 December 2024 12:36pm
I made a few rotation experiments with MD5b.
Here is the original image (1152x2048) :
When saving this as copy in photoshop, the confidence on the mirror image changes slightly:
and when just cropping to a (1152x1152) square it changes quite a bit:
The mirror image confidence drops below my chosen threshold of 0.2 but the non-mirrored image now gets a confidence boost.
Something must be going on with overall scaling under the hood in MD as the targets here have the exact same number of pixels.
I tried resizing to 640x640:
This bumped the mirror image confidence back over 0.2... but lowered the non-mirrored confidence a bit... huh!?
My original hypothesis was that the confidence could be somewhat swapped just by turning the image upside down (180 degree rotation):
Here is the 1152x1152 crop rotated 180 degrees:
The mirror part now got a higher confidence but it is interpreted as sub-part of a larger organism. The non-mirrored polar bear had a drop in confidence.
So my hypothesis was somewhat confirmed...
This leads me to believe that MD is not trained on many upside down animals ....
- and probably our PolarbearWatchdog! should not be either ... ;)
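If you want to reproduce these variant tests on your own images, a small Pillow sketch like the following generates them (the file name is hypothetical, and running each variant through MegaDetector is assumed to happen separately):

from PIL import Image

img = Image.open("polar_bear_mirror.jpg")        # placeholder file name

variants = {
    "square_crop_1152": img.crop((0, 0, 1152, 1152)),
    "resized_640": img.resize((640, 640)),
    "rotated_180": img.rotate(180),
    "mirrored": img.transpose(Image.Transpose.FLIP_LEFT_RIGHT),
}
for name, v in variants.items():
    v.save(f"variant_{name}.jpg")                # then run MegaDetector on each file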
9 December 2024 4:27pm
Seems like we should include some rotations in our image augmentations as the real world can be seen a bit tilted - as this cropped corner view from our fisheye at the zoo shows.

Recommended lora receiver capable microcontrollers
28 November 2024 8:40am
4 December 2024 4:21pm
@ioanF , you can do a little bit better than 10 mA. I have here an Adalogger RP2040 Feather with a DS3231 RTC wing and an I2S MEMS microphone. During "dormant" mode, running from xosc and waiting for DS3231 wakeup, I get 4.7 mA. This includes about 1 mA for the microphone and DS3231 together. OK, that is still 3 mA higher than the 0.7 mA the RP2040 documentation is said to claim. I guess there is some uncertainty with the AT35 USB tester I'm using. Putting the LDO enable pin (EN) to ground, the USB tester said 1 mA, which may be dominated by the offset of the tester, as the LiPo charger, the only remaining component, should only consume 0.2 mA (max).
Edit (for the record): After 'disabling' all GPIO but the RTC wakeup pin, the system (RP2040 + DS3231 RTC + I2S MEMS microphone) consumes only 1.8 mA. I guess I'm now close to the claimed dormant power consumption!
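For anyone attempting the same from MicroPython rather than the pico-sdk, a rough sketch of the wake-on-RTC-pin pattern (the pin number is a placeholder, the DS3231 alarm is assumed to be configured over I2C beforehand, and whether lightsleep gets anywhere near true dormant current depends on the port/firmware):

from machine import Pin, lightsleep

# DS3231 INT/SQW output, assumed wired to GPIO 3 (placeholder), active low.
rtc_int = Pin(3, Pin.IN, Pin.PULL_UP)

def on_wake(pin):
    # Handler body can stay empty; the IRQ existing is what wakes the chip.
    pass

rtc_int.irq(trigger=Pin.IRQ_FALLING, handler=on_wake)

while True:
    lightsleep()  # sleep until the RTC alarm pulls the pin low
    # ...wake up, take the reading/recording, clear the DS3231 alarm flag...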
4 December 2024 8:24pm
A good candidate for low power hibernation and processing power is the Teensy 4.1 from PJRC, which is an ARM Cortex-M7. Standard clock is 600 MHz and there is provision for 16 MB PSRAM. It hibernates at 0.1 mA. What is more important than the specs: there is a very active (the best, IMHO) community, Search | Teensy Forum, with a direct connection to the developer (Paul Stoffregen). For a simple recorder, consumption is 50% higher than the RP2040 (Teensy running at 24 MHz and RP2040 running at 48 MHz, but the RP2040 is an M0+ and not an M7).
5 December 2024 6:08am
Thanks! The Teensys are nice for processing power; if choosing an external LoRa board I'd say that's a good choice. I started with Teensys; there was a well-supported code base and they were certainly well priced.
My preference is for one with onboard LoRa. I've used one with an onboard Murata LoRa module before. It was good both for low power operation and for its LoRa operation.
For a transmitter, the Grasshopper (if it's still being made) is quite good for small distances because it has an onboard ceramic antenna, which is good for about three houses away, although one was also received > 20 km away.
Individual Identification of Snow Leopard
25 November 2024 5:39am
1 December 2024 3:00pm
Hi Raza,
I manage a lot of snow leopard camera trap data also (I work for the Snow Leopard Trust). I am not aware of any AI solutions for snow leopards that can do all the work for us, but there are 'human-in-the-loop' approaches, in which AI suggests possible matches and a human makes the actual decision. Among these are Hotspotter and the algorithm in WildMe.
You can make use of Hotspotter in the software TrapTagger. I have found this software to be very useful when identifying snow leopards. Anecdotally, I think it improves the accuracy of snow leopard identifications. But, like I said, you still have to manually make the decisions; the results from Hotspotter are just a helpful guide.
The other cutting edge solutions mentioned here (e.g. MegaDescriptor, linked above) will require a massive dataset of labelled individuals, and considerable expertise in Python to develop your own solution. I had a quick look at the paper and they were getting around 60-70% accuracy for leopards, which is a much easier species than snow leopard. So I don't think this approach is useful, at least for now. Unless I've misunderstood something (others who deeply understand this, please chime in and correct me!).
Incidentally, I did try to gain access to WildMe / Whiskerbook last year but wasn't successful gaining an account. @holmbergius can you help me out? That would be appreciated, thanks!
Best of luck Raza, let me know if I can help more,
Ollie
1 December 2024 4:38pm
An example of Hotspotter doing quite a good job, even with blurry images, to successfully draw attention to matching parts of the body. This is a screenshot from TrapTagger, which very helpfully runs Hotspotter across your images.
Note that I wouldn't necessarily recommend ID'ing snow leopards from such poor imagery, but this just demonstrates that Hotspotter can sometimes do quite well even on harder images.

3 December 2024 11:03am
Hi Raza,
As @ollie_wearn suggests, I think TrapTagger will be the easiest for you:
You just have to follow the tutorial there:
TrapTagger Tutorial - YouTube
The full TrapTagger tutorial. It is recommended that you start here, and watch through all the videos in order. You can then revisit topics as needs be.
The person in charge of TrapTagger is also very responsive.
best,
AmazonTEC: 4D Technology for Biodiversity Monitoring in the Amazon (English)
25 November 2024 8:33pm
GreenCrossingAI Project Update
22 November 2024 6:10pm
MegaDetector V6 and Pytorch-Wildlife V1.1.0 !
9 November 2024 3:34am
11 November 2024 7:57pm
Hello Patrick, thanks for asking! We are currently working on a bioacoustics module and will be releasing it sometime early next year. Maybe we can have some of your models in our initial bioacoustics model zoo, or if you don't have a model yet but have annotated datasets, we can probably train some models together? Let me know what you think!
11 November 2024 7:58pm
Thank you so much! We are also working on a bounding box based aerial animal detection model. Hopefully will release sometime early next year as well. It would be great to see how the model runs on your aerial datasets! We will keep you posted!
22 November 2024 5:13pm
Hi Zhongqi! We are finalizing our modelling work over the next couple of weeks and can make our work available for your team. Our objective is to create small (<500k parameters) quantized models that can run on low-power ARM processors. We have custom hardware that we built around them and will be deploying back in Patagonia in March 2025. Would be happy to chat further if interested!
We have an active submission to Nature Scientific Data with the annotated dataset. Once that gets approved (should be sometime soon), I can send you the figshare link to the repo.
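For readers unfamiliar with quantized models, here is a hedged sketch of one common route to small int8 models for ARM targets, post-training dynamic quantization in PyTorch (the tiny network is a stand-in, not the team's actual architecture or pipeline):

import torch
import torch.nn as nn

# Stand-in tiny CNN, well under 500k parameters.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 4),
)

# Dynamic quantization: weights of the Linear layers are stored as int8,
# shrinking the model and speeding up CPU/ARM inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)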
Instant Detect 2.0 and related cost
16 November 2023 12:50am
11 November 2024 9:16am
Sam any update on Instant Detect 2.0 - previously you mentioned that you hope to go into volume production by mid-2024?
I would love to also see a comparison between Instant Detect 2.0 and Conservationxlabs' Sentinel products if anyone has done comparisons.
Are there any other similar solutions currently on the market - specifically with the images over LoRa capability, and camera to satellite solution?
11 November 2024 6:41pm
Nightjar comes to mind but I am not too sure if it is actually “on the market”…
19 November 2024 6:44pm
There are quite a few DIY or prototype solutions described online and in the literature - but it seems none of these have made it to market yet as generally available, fully usable products. We can only hope.
Inquiry About e-con Systems/Arducam Cameras for Camera Trapping Projects
19 November 2024 1:35pm
19 November 2024 2:44pm
I think the big thing is power consumption. Commercial camera traps have a large power (current) dynamic range. That means they can often swing from ~0.1 mA to ~1000 mA of current within a few milliseconds. It's often difficult to replicate that in DIY systems which is why you don't see a lot of Raspberry Pi camera traps. The power consumption is often too high and the boot time is too long.
One of the big challenges is powering down the system so that it's essentially in sleep mode and having it wake up in less than a second. That said, if you're mainly doing time lapse or don't have the strict speed requirements to wake up that quickly, it may make sense to roll your own camera trap.
Anyways, hope I'm not being too discouraging. It never hurts to give it a shot and please feed back your experiences to the forum. I'd love to hear reviews about Arducam and it's my first time hearing about e-con Systems.
Akiba
AI Animal Identification Models
30 March 2023 5:01am
6 November 2024 6:50am
I trained the model tonight. Much better performance! mAP has gone from 73.8% to 82.0%, and running my own images through it, anecdotally it is behaving better.
After data augmentation (horizontal flip, 30% crop, 10° rotation) my image count went from 1194 total images to 2864 images. I trained for 80 epochs.
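For context, a run like that looks roughly as follows in, e.g., Ultralytics YOLO (an assumption - the post doesn't name the framework; the dataset path and augmentation values are placeholders):

from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # small pretrained checkpoint
model.train(
    data="moose_dataset.yaml",    # placeholder dataset config
    epochs=80,
    imgsz=640,
    fliplr=0.5,                   # horizontal flip probability
    degrees=10.0,                 # random rotation up to 10 degrees
    scale=0.3,                    # scale jitter, a crop-like effect
)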
6 November 2024 8:56am
Very nice!
I thought it would retain all the classes of YOLO (person, bike, etc) but it doesn't. This makes a lot of sense as to why it is seeing everything as a moose right now!
I had the same idea. ChatGPT says there should be possibilities though...
You may want to explore more "aggressive" augmentations like the ones possible with
Albumentations Transforms
A list of Albumentations transforms
to boost your sample size.
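For example, a hedged Albumentations sketch that keeps bounding boxes consistent with the augmented images (the specific transforms and probabilities are just illustrations):

import albumentations as A

transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.Rotate(limit=30, p=0.5),
        A.RandomBrightnessContrast(p=0.5),
        A.MotionBlur(p=0.2),
    ],
    # keep YOLO-format boxes in sync with the transformed image
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)
# augmented = transform(image=image, bboxes=bboxes, class_labels=class_labels)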
Or you could expand the sample size by combining with some of the annotated datasets available at LILA BC like:
Caltech Camera Traps - LILA BC
This data set contains 244,497 images from 140 camera locations in the Southwestern United States, with species-level labels for 22 species, and approximately 66,000 bounding box annotations.
North American Camera Trap Images - LILA BC
This data set contains 3.7M camera trap images from five locations across the United States, with species-level labels for 28 species.
Cheers,
Lars
15 November 2024 5:30pm
As others have said, pretty much all image models at least start with general-subject datasets ("car," "bird," "person", etc.) and have to be refined to work with more precision ("deer," "antelope"). A necessity for such refinement is a decent amount of labeled data ("a decent amount" has become smaller every year, but still has to cover the range of angles, lighting, and obstructions, etc.). The particular architecture (yolo, imagenet, etc.) has some effect on accuracy, amount of training data, etc., but is not crucial to choose early; you'll develop a pipeline that allows you to retrain with different parameters and (if you are careful) different architectures.
You can browse many available datasets and models at huggingface.co
You mention edge, so it's worth mentioning that different architectures have different memory requirements and then, within the architecture, there will generally be a series of models ranging from lower to higher memory requirements. You might want to use a larger model during initial development (since you will see success faster) but don't do it too long or you might misestimate performance on an edge-appropriate model.
In terms of edge devices, there is a very real capacity-to-cost trade-off. Arduinos are very inexpensive, but are very underpowered even relative to, e.g., Raspberry Pis. The next step up are small dedicated coprocessors such as the Coral line. Then you have the Jetson line from NVIDIA, which are extremely capable, but are priced more towards manufacturing/industrial use.
Products | Coral
Helping you bring local AI to applications from prototype to production
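To make the refinement step described above concrete, a minimal sketch with timm (the backbone choice and two-class head are illustrative assumptions, not a recommendation):

import timm
import torch

# Load a general pretrained backbone and swap in a head for your own classes.
model = timm.create_model("mobilenetv3_small_100", pretrained=True, num_classes=2)

x = torch.randn(1, 3, 224, 224)   # dummy batch to sanity-check shapes
print(model(x).shape)             # torch.Size([1, 2])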
Q&A - AI for Conservation Office Hours 2025

15 November 2024 11:20am
Apply Now: AI for Conservation Office Hours 2025

13 November 2024 11:31am
Looking for feedback & testers for Animal Detect platform
6 November 2024 9:39am
6 November 2024 9:57am
Hi Eugene!
Interesting project!
I already signed up to test it!
Cheers,
Lars
Discussing an Open Source Camera Trap Project
2 April 2019 2:49am
29 October 2024 8:15am
Regarding using the OpenMV as a basis for an all-rounder camera, we just published this:
30 October 2024 1:41pm
That looks like an amazing project. Congratulations!
4 November 2024 8:47pm
Thanks :)
We are working on making it smaller and simpler using the latest OpenMV board.
Southwest Florida - Trail Cam
3 November 2024 5:48pm
Automatic extraction of temperature/moon phase from camera trap video
29 November 2023 1:15pm
7 September 2024 9:44am
I just noticed that TrapTagger has integrated AI reading of timestamps for videos. I haven't had a chance to try it out yet, but it sounds promising.
28 October 2024 7:30pm
Small update. I uploaded >1000 videos from a spypoint flex camera and TrapTagger worked really well. Another program that I'm currently interested in is Timelapse which uses the file creation date/time. I haven't yet tried it, but it looks promising as well.
1 November 2024 12:29pm
Hi Lucy,
I now realised it is an old thread and you most likely have already found a solution long ago but this might be of interest to others.
As mentioned previously, it is definitely much better to take moon phase from the date and location. While moon phase in general is not a good proxy for illumination, that moon phase symbol on the video is even worse, as it generalises the moon cycle into a few discrete categories. For calculating moon phase you can use the suncalc package in R, but if you want a deeper look and a more detailed proxy for moonlight intensity, I wrote a paper on it
with accompanying R package called moonlit
GitHub - msmielak/moonlit: moonlit - R package to estimate moonlight intensity for any given place and time
moonlit - R package to estimate moonlight intensity for any given place and time - msmielak/moonlit
When it comes to temperature I also agree that what is recorded in the camera is often very inconsistent so unless you have multiple cameras to average your measurements you are probably better off using something like NCEP/NCAR Reanalysis (again, there is an R package for that) but if you insist on extracting temperature from the picture, I tried it using tesseract and wrote a description here:
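As a rough illustration of that tesseract idea (in Python; the crop coordinates and file name are placeholders you would measure against your own camera's overlay):

from PIL import Image
import pytesseract

frame = Image.open("frame_from_video.jpg")         # placeholder extracted frame
banner = frame.crop((0, 0, 400, 40)).convert("L")  # overlay region, grayscaled
text = pytesseract.image_to_string(banner, config="--psm 7")  # one text line
print(text)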
Good luck!
AI triggered camera trap - monkeys
13 October 2024 11:45am
16 October 2024 8:14am
This is chatgpt's answer:
"Here’s a rough idea of the number of parameters in different types of models commonly used in Edge Impulse:
- Neural Networks for Classification (e.g., MLP, CNN)
  - Small, shallow models: these can contain as few as 10,000 to 50,000 parameters when dealing with simple tasks like basic image recognition or sensor data classification.
  - Medium complexity models: for more advanced tasks, such as keyword spotting or more detailed image recognition, models may contain 50,000 to 200,000 parameters.
  - Larger models (edge-optimized CNNs): for tasks like more complex object detection, models can go up to 200,000 to 500,000 parameters while still being optimized for edge devices.
- Recurrent Neural Networks (RNN, LSTM)
  - These are often used for time-series data like accelerometer or audio signals. A typical RNN model for edge deployment might have 10,000 to 100,000 parameters, depending on the length and complexity of the sequences being analyzed.
- Transfer Learning Models (e.g., MobileNet, EfficientNet variants)
  - Transfer learning models like MobileNet, when adapted for edge deployment, are typically pruned and quantized to reduce their size. These models may contain anywhere from 1 million to 3 million parameters in their full form, but when trimmed for edge deployment, they could be reduced to 100,000 to 1 million parameters.
- Classical Machine Learning Models (e.g., Decision Trees, KNN)
  - These models usually don't have parameters in the same way as deep learning models, but they are lightweight and typically optimized for embedded use. They might only use a few kilobytes of memory or less, depending on the dataset size and number of features.
Overall, the model sizes in Edge Impulse are kept small to run efficiently on microcontrollers with limited memory (usually in the range of 256 KB to 1 MB of RAM and a few megabytes of flash). The number of parameters is carefully balanced to ensure good accuracy while maintaining small memory footprints for edge deployment."
Disclaimer: I can't guarantee that all of the above is not a hallucination.
So, not even 1 million parameters. The yolov3 tiny model back in the day was 6 million parameters. It's my understanding that the idea behind Edge Impulse was that models are retrained up at the edge itself. Training requires a lot more memory than inference, so by its nature the models would be further constrained by this.
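(As an aside, a quick way to check a model's parameter count in PyTorch before worrying about edge targets - the torchvision model below is just a placeholder:)

from torchvision.models import mobilenet_v3_small

model = mobilenet_v3_small()      # placeholder model
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.2f} M parameters")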
It's important to be aware of what your model is (how advanced it is), what size it is, and how many parameters it contains when you consider your use case, to manage your expectations. Later I will post a few of the terrible false positives that even the very best and largest models have made, to give you an idea of just what AI is actually learning and, by extension, not learning. So if the best of the best can make terrible mistakes, what will small early models do?
In general though, a large advanced model can do much better at inference with a relatively small number of annotated training images than a really small model with potentially thousands and many more images.
For example, I decided way back in 2019 that yolov3 tiny model was absolutely useless for security because of the boy who cried wolf effect.
The biggest use case I see for very small models would be as a hopefully better replacement to using a PIR sensor. I say hopefully, because both PIR sensors and very small models can have de-generate environmental conditions that cause large number of false positives. At a certain threshold, any perceived gain in power usage may disappear.
I welcome Edge Impulse to correct anything above that I may have said that was untrue, as I personally have never used Edge Impulse, but I was involved in a project in which yolov6 was pitted against a system using Edge Impulse.
21 October 2024 2:56pm
Very impressive Kim! It actually took me a minute to find them in the first picture! Are they baboons (if they aren't, then this is definitely proof that I shouldn't create any training datasets!)? Can't wait for the thermal tests!!
Cheers,
Rob
21 October 2024 3:08pm
Yes baboons. But this area has a few different primate species. In the weekend three species were present.
Yes, the thermal ones will be spectacular. I'm wondering what other species will jump into the picture when we start with that. The 640x512 thermal camera I'm testing is really amazing. Now I'm onto testing a 1280x1024 resolution thermal camera. That's going to be super interesting! As soon as I have some nice images I'll post them.
Recycled & DIY Remote Monitoring Buoy
15 January 2024 1:14am
3 October 2024 9:16am
Hi Brett, this is Ocean Lab Seychelles, it's great to hear from you! I love your remote monitoring buoy project, especially the recycling of DFAD buoys and the design of custom-built ones that are full of interesting features. The expandability of the navigation buoys, enabling them to be configured as hydrophones, cameras, or water quality monitors, is a great feature. The technical achievements, such as recycling the DFADs' circuits, making your own hydrophone, and designing a super-efficient system, are terrific. Moreover, it is encouraging to see the deployment of your early system, particularly the live video and water quality data you have collected. You will definitely make a positive impact on the WILDLABS community with your knowledge of electrical engineering and software development. I'd love to hear more about your discoveries and any news you'll be sharing. Keep doing what you excel at!
8 October 2024 8:46am
Thank you for sharing!
21 October 2024 12:45am
Hi Brett,
I am interested in developing really low cost multi channel underwater acoustic recorders. Can you tell me a bit more about the board and stuff you were using for yours? You can get me at chris@wilcoanalytics.org.
Thanks,
Chris
Deepfaune v1.2!
4 October 2024 4:12pm
9 October 2024 12:01am
Edit: SOLVED Thanks!
Thank you so much for this awesome work! I was trying to load the v2 model the same way as in classifTools.py:
model = timm.create_model("vit_large_patch14_dinov2.lvd142m", pretrained=False, num_classes=len(class_names))
ckpt = torch.load('deepfaune-vit_large_patch14_dinov2.lvd142m.v2.pt', map_location=device)
state_dict = ckpt['state_dict']
new_state_dict = {k.replace('base_model.', ''): v for k, v in state_dict.items()}
model.load_state_dict(new_state_dict)
but it fails with this error:
Error(s) in loading state_dict for VisionTransformer:
size mismatch for head.weight: copying a param with shape torch.Size([30, 1024]) from checkpoint, the shape in current model is torch.Size([26, 1024]).
size mismatch for head.bias: copying a param with shape torch.Size([30]) from checkpoint, the shape in current model is torch.Size([26]).
Are you using a different backbone for v2? I tried BACKBONE = "vit_large_patch14_dinov2.lvd142m.v2" but that also doesn't work.
15 October 2024 12:53pm
For the record now that this is here:
This error typically occurs when the wrong number of classes is given to timm.create_model.
You should try to specify num_classes=30 manually (for this v1.2; the number can change in future versions as we add new species).
Also, for issues please do reach out (as Jennifer did) by email, we will be much more responsive. More general questions/discussions can be asked here, I will reply asap.
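For reference, the corrected load then looks roughly like this (per the advice above, with num_classes=30 for v1.2):

import timm
import torch

model = timm.create_model("vit_large_patch14_dinov2.lvd142m",
                          pretrained=False, num_classes=30)
ckpt = torch.load("deepfaune-vit_large_patch14_dinov2.lvd142m.v2.pt",
                  map_location="cpu")
state_dict = {k.replace("base_model.", ""): v
              for k, v in ckpt["state_dict"].items()}
model.load_state_dict(state_dict)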
Your thoughts on this camera
8 October 2024 10:32am
10 October 2024 8:58am
Sure, I'd like to, but don't have the badge yet. ;)
10 October 2024 9:02am
Hi Carly,
I've got your point, and that's the reason why we want to make a new edition of our Planet Watch initiative to support the scientific community.
Best regards,
Adrien
14 October 2024 8:21am
Product added to the Inventory: Tikee
Support no-code custom AI for camera trap images by filling out this survey
2 October 2024 10:43pm
4 October 2024 10:04pm
Hi all! Folks may be interested in the Cornell Tree Climbing program that is a part of Cornell Outdoor Education. Not only does Cornell offer training, and have a bunch of online resources, but they have also facilitated groups of scientists to collect canopy samples and data.
Cornell Tree Climbing | Student & Campus Life | Cornell University
CTC promotes safe and environmentally responsible tree climbing techniques for