Camera traps have been a key part of the conservation toolkit for decades. Remotely triggered video or still cameras let researchers and managers monitor cryptic species, survey populations, and support enforcement by documenting illegal activities. Increasingly, machine learning is being used to automate the processing of the data camera traps generate.
A recent study showed that, despite camera traps being well-established and widely used tools in conservation, progress in their development has plateaued since the emergence of the modern model in the mid-2000s, leaving users struggling with many of the same issues they faced a decade ago. That manufacturer ratings have not improved over time, despite technological advancements, demonstrates the need for a new generation of innovative conservation camera traps. Join this group to explore existing efforts, established needs, and what next-generation camera traps might look like - including the integration of AI for data processing through initiatives like Wildlife Insights and Wild Me.
Group Highlights:
Our past Tech Tutors seasons featured multiple episodes for experienced and new camera trappers alike. How Do I Repair My Camera Traps? brought together WILDLABS members Laure Joanny, Alistair Stewart, and Rob Appleby, and shared many troubleshooting and DIY resources for common issues.
For camera trap users looking to incorporate machine learning into the data analysis process, Sara Beery's How do I get started using machine learning for my camera traps? is an incredible resource discussing the user-friendly tool MegaDetector.
And for those who are new to camera trapping, Marcella Kelly's How do I choose the right camera trap(s) based on interests, goals, and species? will help you make important decisions based on factors like species, environment, power, durability, and more.
Finally, for an in-depth conversation on camera trap hardware and software, check out the Camera Traps Virtual Meetup featuring Sara Beery, Roland Kays, and Sam Seccombe.
And while you're here, be sure to stop by the camera trap community's collaborative troubleshooting data bank, where we're compiling common problems with the goal of creating a consistent place to exchange tips and tricks!
Header photo: Stephanie O'Donnell
Group members:
- DIY electronics for behavioral field biology. (6 Resources, 78 Discussions, 4 Groups)
- @vfhawkinson (she/her), University of Washington. PhD student assessing multi-scalar livestock-wildlife interactions in the American West. (0 Resources, 1 Discussion, 8 Groups)
- Max Planck Institute of Animal Behavior. Coordinator for the Animove Workshop. (1 Resource, 1 Discussion, 4 Groups)
- Centre national de la recherche scientifique (CNRS). Behavioural ecologist @CNRS in France, working on large mammals in Europe and Africa. (0 Resources, 9 Discussions, 6 Groups)
- I'm a software developer. I have projects in practical object detection and alerting that are well suited to poacher detection, and a Raspberry Pi-based sound-localizing ARU project. (0 Resources, 406 Discussions, 7 Groups)
- @tkswanson (she/her), San Diego Zoo Wildlife Alliance. Research Coordinator II for the Conservation Technology Lab at SDZWA. (2 Resources, 2 Discussions, 7 Groups)
- @akshayanc (she/her). Hi, I'm Akshaya from India. I completed an MSc in Ecology with a specialization in Ecological Informatics at Digital University Kerala. (0 Resources, 0 Discussions, 2 Groups)
- WILDLABS & Fauna & Flora. I'm the Platform and Community Support Project Officer at WILDLABS! Speak to me if you have any inquiries about using the WILDLABS Platform or AI for Conservation: Office Hours. (26 Resources, 41 Discussions, 6 Groups)
- @Frank_van_der_Most (he/him), RubberBootsData. Field data app developer with an interest in funding and finance. (54 Resources, 177 Discussions, 9 Groups)
- @TaliaSpeaker (she/her), WILDLABS & World Wide Fund for Nature / World Wildlife Fund (WWF). I'm the WILDLABS Research Specialist at WWF-US. (23 Resources, 62 Discussions, 25 Groups)
- @davidhunter (he/him), University of Colorado Boulder. PhD student exploring design and technology to connect people with nature and the environment. (0 Resources, 22 Discussions, 5 Groups)
- WILDLABS & Wildlife Conservation Society (WCS). I'm the Bioacoustics Research Analyst at WILDLABS. I'm a marine biologist with particular interest in the acoustic behavior of cetaceans. I'm also a backend web developer, hoping to use technology to improve wildlife conservation efforts. (27 Resources, 34 Discussions, 34 Groups)
I put together some initial experiences deploying the new SpeciesNet classifier on 37,000 images from a Namibian camera trap dataset, in the hope that sharing first impressions might be helpful to others.
23 April 2025
A nice resource from GBIF that addresses the data interoperability challenge.
4 April 2025
Conservation International is proud to announce the launch of the Nature Tech for Biodiversity Sector Map, developed in partnership with the Nature Tech Collective!
1 April 2025
The FLIR ONE thermal camera is a compact and portable thermal imaging device capable of detecting heat signatures in diverse environments. This report explores its application in locating wild animals across large areas...
27 March 2025
WWF's Arctic Community Wildlife Grants program supports conservation, stewardship, and research initiatives that focus on coastal Arctic ecology, community sustainability, and priority Arctic wildlife, including polar...
7 March 2025
The Smithsonian’s National Zoo and Conservation Biology Institute (SNZCBI) is seeking an intern to assist with multiple projects related to conservation technology for wildlife monitoring. SNZCBI scientists collect data...
3 March 2025
NewtCAM is an underwater camera trap. Devices are being deployed worldwide as part of the CAMPHIBIAN project, thanks to the support of our kind early users. Here is an outcome from the UK.
24 February 2025
Osa Conservation is launching our inaugural cohort of the ‘Susan Wojcicki Research Fellowship’ for 2025, worth up to $15,000 per awardee (award value dependent on project length and number of awards given each year)....
10 February 2025
Has anyone read or heard of this book?
9 February 2025
The worst thing a new conservation technology can do is become another maintenance burden on already stretched field teams. This meant Instant Detect 2.0 had to work perfectly from day 1. In this update, Sam Seccombe...
28 January 2025
This leads to an exciting blog we did recently; it includes a spatial map of the movement tracks of an orphaned elephant that self-released into the wild (Kafue National Park). Cartography was...
28 January 2025
The Zoological Society of London's Instant Detect 2.0 is the world's first affordable satellite connected camera trap system designed by conservationists, for conservationists. In this update, Sam Seccombe describes the...
21 January 2025
Description | Activity | Replies | Groups | Updated
---|---|---|---|---
Hi Dan, Not right now but I can envision many uses. A key problem in RS is data streams for validation and training of ML models; it's really not yet a solved problem. Any... | | | Emerging Tech, AI for Conservation, Animal Movement, Build Your Own Data Logger Community, Camera Traps, Connectivity, Conservation Tech Training and Education, Data management and processing tools, Geospatial, Sensors | 3 months ago
Hello! I'm searching for a solution to tag EXIF metadata to images on an embedded device. I'm currently developing a camera trap that... | | | Camera Traps | 2 months 3 weeks ago
Thanks! Yes, we added electronics to power an external UV light only during periods when the camera is set to take pictures. | +23 | | Autonomous Camera Traps for Insects, Camera Traps | 2 months 3 weeks ago
I don't know any, but we have the same program idea (basically democratizing resources on conservation tech) that focuses on the Indonesia region. But we are progressing slowly... | | | Camera Traps | 2 months 3 weeks ago
Hey Bob, thanks for the kind words! Your articles on Winterberry Wildlife have really been a big inspiration for me! There are extremely limited numbers of articles on trial... | | | AI for Conservation, Camera Traps | 2 months 3 weeks ago
Short update: the latest version 13.0.9 of Firetail is now available from https://www.firetail.de | | | Animal Movement, Camera Traps, Software Development | 3 months 1 week ago
Thank you, Dan -- I should have known you would have something like this! The tesseract package in R was quite simple to tune for my case, so I'm going to run it in batch tomorrow... | | | Camera Traps | 3 months 1 week ago
Hi Lucie @luciegallegos, Great to see ecoSecrets and happy to collaborate in any way I can! All EcoAssist's models are open-source, and the inference code too. With regards to... | +10 | | Software Development, AI for Conservation, Camera Traps | 3 months 1 week ago
Another question. Right now pretty much all camera traps trigger on either PIR sensors or small AI models. Small AI models would tend to have a limitation that they would... | | | AI for Conservation, Camera Traps, Data management and processing tools, Open Source Solutions, Software Development | 3 months 1 week ago
Thanks so much!! | +10 | | Camera Traps, Latin America Community | 3 months 1 week ago
Some thoughts as I have experience working with some of the tech mentioned... Corrodible pin: @htarold did a great job explaining how that works. This pin is used in the pop-up... | +10 | | Camera Traps | 3 months 1 week ago
Hi, I'm collecting screenshots (and explanations if needed) of visualisations that you found useful. It could be charts, maps, tables... | | | Data management and processing tools, Animal Movement, Camera Traps | 3 months 1 week ago
Second-generation automated moth monitoring system
22 May 2025 2:34am
Prototype for exploring camera trap data
20 January 2025 10:23pm
7 May 2025 10:49am
@Jeremy_ For a Python implementation of basic occupancy models (as suggested by @ollie_wearn), please refer to these two projects:
I second @martijnB's suggestion to use spatially explicit occupancy models (as implemented in R, e.g., https://doserlab.com/files/spoccupancy-web/). However, this would need to be added to both of the aforementioned Python projects.
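For readers who want to see what the basic model looks like in code, here is a minimal sketch of the constant-probability single-season occupancy likelihood (MacKenzie et al. 2002), written from the standard formulation rather than taken from either of the projects mentioned above:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, y):
    # y: sites x visits matrix of 0/1 detections
    psi = 1 / (1 + np.exp(-params[0]))  # occupancy probability
    p = 1 / (1 + np.exp(-params[1]))    # per-visit detection probability
    d = y.sum(axis=1)                   # detections per site
    k = y.shape[1]                      # visits per site
    # Sites with detections must be occupied; sites without detections are
    # either occupied-but-missed or truly unoccupied.
    site_lik = np.where(
        d > 0,
        psi * p**d * (1 - p)**(k - d),
        psi * (1 - p)**k + (1 - psi),
    )
    return -np.log(site_lik).sum()

y = np.random.binomial(1, 0.3, size=(50, 5))  # toy detection histories
fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(y,))
print(1 / (1 + np.exp(-fit.x)))  # estimated (psi, p)
```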
17 May 2025 10:53am
Lively and informative discussion; I would very much like to contribute if there is some active development work in this area.
I have recent experience using the Model Context Protocol (MCP) to integrate various tools and data repositories with LLMs like Claude. I believe this could be a good path whereby we can do the following (a rough sketch of step 2 follows below):
1. Use the images and labels, along with any metadata, and chunk/index/store them in a vector DB.
2. Integrate with existing data sources by exposing the data through an MCP server.
3. Use MCP-friendly LLM clients (like Claude) to query, visualize, and do other open-ended things, leveraging the power of LLMs and camera trap data from various sources.
Regards,
Ajay
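To make step 2 above concrete, here is a minimal sketch of an MCP server exposing camera trap detections, using the official `mcp` Python SDK; the data file name and record fields are hypothetical placeholders, not an existing dataset:

```python
import json
from pathlib import Path

from mcp.server.fastmcp import FastMCP  # official MCP Python SDK

mcp = FastMCP("camera-trap-data")

@mcp.tool()
def list_detections(species: str) -> str:
    """Return camera trap records whose label matches the given species."""
    # detections.json is a hypothetical export: [{"image": ..., "label": ...}, ...]
    records = json.loads(Path("detections.json").read_text())
    return json.dumps([r for r in records if r.get("label") == species])

if __name__ == "__main__":
    mcp.run()  # an MCP client such as Claude Desktop can now call list_detections
```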
Boombox Workshop at ICCB 2025
16 May 2025 8:18am
Technology in Wildlife Welfare Workshop (in-person, UK)
6 May 2025 7:46pm
'Boring Fund' Workshop: AI for Biodiversity Monitoring in the Andes
5 February 2025 5:55pm
8 February 2025 4:29pm
Hey @benweinstein , this is really great. I bet there are better ways to find bofedales (puna fens) currently than what existed back in 2010. I'll share this with the Audubon Americas team.
2 May 2025 2:59pm
Hi everyone, following up here with a summary of our workshop!
The AI for Biodiversity Monitoring workshop brought together twenty-five participants to explore uses of machine learning for ecological monitoring. Sponsored by the WILDLABS 'Boring Fund', we were able to support travel and lodging for a four-day workshop at the University of Antioquia in Medellín, Colombia. The goal was to bring together ecologists interested in AI tools and data scientists interested in working on AI applications from Colombia and Ecuador. Participants were selected based on potential impact on their community, their readiness to contribute to the topic, and a broad notion of representation, balancing geographic origin, business versus academic experience, and career stage.
Before the workshop began I developed a website on GitHub that laid out the aims of the workshop and provided a public focal point for uploading information. I made a number of technical videos, covering subjects like VS Code + Copilot, both to inform participants and to create an atmosphere of early and easy communication. The WhatsApp group, the YouTube channel (link) of video introductions, and a steady drumbeat of short tutorial videos were key in establishing expectations for the workshop.
The workshop material was structured around data collection methods: Day 1, introduction and project organization; Day 2, camera traps; Day 3, bioacoustics; Day 4, airborne data. Each day I asked participants to install packages using conda, download code from GitHub, and be active in supporting each other in solving small technical problems. The large range of technical experience was key in developing peer support. I toyed with the idea of creating a JupyterHub or joint cloud workspace, but I am glad that I resisted; it is important for participants to see how to solve package conflicts and the myriad other installation challenges on 25 different laptops.
We banked some early wins to help ease intimidation and create a good flow into the technical training. I started with GitHub and version control because it is broadly applicable, incredibly useful, and satisfying to learn. Using examples from my own work, I focused on GitHub as a way both to contribute to machine learning for biology and to receive help. Building from these command-line tools, we explored VS Code + Copilot for automated code completion, and had a lively discussion on how to balance the utility of these new features with transparency and comprehension.
Days two, three and four flew by, with a general theme of existing foundation models: BirdNET for bioacoustics, MegaDetector for camera traps, and DeepForest for airborne observation. A short presentation each morning was followed by a worked Python example making predictions on new data, annotation using Label Studio, and model development with PyTorch Lightning. There is a temptation to develop Jupyter notebooks that lay out perfect code step by step, but I prefer to let participants work through errors, so I use a live coding strategy. All materials are in Spanish and updated on the website. I was proud to see the level of mutual support among participants, and tried to highlight these contributions to promote autonomy and peer teaching.
Sprinkled amongst the technical sessions, I had each participant create a two-slide talk, and I would randomly select speakers from the group to break up sessions and help stir conversation. I took it as a good sign that participants often quietly pressured me to select their talk in the next random draw. While we had general technical goals and each day had one or two main lectures, I tried to be nimble, allowing space for suggestions. In response to feedback, we rerouted an afternoon to discuss biodiversity monitoring goals and data sources. Ironically, the biologists in the room later suggested that we needed to get back to code, while the data scientists said it was great. Weaving between technical and domain expertise requires an openness to change.
Boiling down my takeaways from this effort, I think there are three broad lessons for future workshops.
- The group dynamic is everything. Provide multiple avenues for participants to communicate with each other. We benefited from a smaller group of dedicated participants compared to inviting a larger number.
- Keep the objectives, number of packages, and size of sample datasets to a minimum.
- Foster peer learning and community development. Give time for everyone to speak. Step in aggressively as the arbiter of the schedule in order to allow all participants a space to contribute.
I am grateful to everyone who contributed to this effort both before and during the event to make it a success. Particular thanks goes to Dr. Juan Parra for hosting us at the University of Antioquia, UF staff for booking travel, Dr. Ethan White for his support and mentorship, and Emily Jack-Scott for her feedback on developing course materials. Credit for the ideas behind this workshop goes to Dr. Boris Tinoco, Dr. Sara Beery for her efforts at CV4Ecology and Dr. Juan Sebastian Ulloa. My co-instructors Dr. Jose Ruiz and Santiago Guzman were fantastic, and I’d like to thank ARM through the WILDLABS Boring fund for its generous support.
2 May 2025 2:59pm
AI Edge Compute Based Wildlife Detection
23 February 2025 5:24am
29 April 2025 3:20pm
Sorry, I meant ONE hundred million parameters.
The Jetson Orin NX has ~25 TOPS of FP16 performance, and the large YOLOv6 model processing 1280x1280 inputs requires about 673.4 GFLOPs per inference. You should therefore theoretically get ~37 fps; you're unlikely to get this exact number, but you should get around that...
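For anyone who wants to reproduce the arithmetic, a back-of-envelope sketch (this is a theoretical ceiling that ignores memory bandwidth, pre/post-processing, and scheduling overheads):

```python
# Theoretical throughput bound: accelerator ops/s divided by ops per inference.
tops_fp16 = 25e12               # Jetson Orin NX FP16 performance (from the post)
flops_per_inference = 673.4e9   # large YOLOv6 at 1280x1280 (from the post)
print(tops_fp16 / flops_per_inference)  # ~37 frames per second
```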
Also, later YOLO models (7+) are much more efficient (they use fewer FLOPs for the same mAP50-95) and run faster.
Most neural network inference-only accelerators (like Hailo's) use INT8 models, and depending on your use case, the resulting drop in accuracy may be acceptable.
29 April 2025 3:34pm
Ah I see, thanks for clarifying.
BTW, YOLOv7 actually came out earlier than YOLOv6. YOLOv6 has higher precision and recall figures, and I noticed that in practice it was slightly better.
My suspicion is that it's not trivial to translate the layer functions from YOLOv6 or YOLOv9 to Hailo-specific ones without affecting quality in unknown ways. If you manage to do it, do tell :)
The acceptability of a performance drop depends heavily on the use case. In security, if I get woken up twice a night versus once in six months, I don't care how fast it is; it's not acceptable for that use case for me.
I would imagine that for many traps in the wild, a false positive would mean having to travel out and reset the trap.
But as I haven't personally dropped quantization to 8 bits, I appreciate other people's insights on the subject. Thanks for your insights.
1 May 2025 7:32pm
@LukeD, I am looping in @Kamalama997 from the TRAPPER team who is working on porting MegaDetector and other models to RPi with the AI HAT+. Kamil will have more specific questions.
We are releasing SpeciesNet
3 March 2025 4:48pm
28 April 2025 12:30pm
This is great news!
I am using rather high resolution images and have just ordered some 4K (8MP) camera traps.
The standard MegaDetector, run via Addax AI, struggles a bit with detecting animals that are relatively small in the frame, even though they cover quite a number of pixels. This naturally follows from the image resizing in MegaDetector.
I have noticed MegaDetector/megadetector/detection/run_tiled_inference.py (at commit 472460d in the agentmorris/MegaDetector GitHub repository), but this seems not readily available in Addax AI. Is it somehow supported in SpeciesNet?
Cheers,
Lars
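For readers unfamiliar with the tiling approach referenced above, here is a minimal illustration of the idea (not MegaDetector's actual implementation): detection runs on overlapping crops so that small animals keep enough pixels after the model's input resize. The tile size and overlap below are arbitrary example values.

```python
from PIL import Image

def make_tiles(path, tile=1280, overlap=160):
    """Yield overlapping crops of a large image, with their top-left offsets."""
    img = Image.open(path)
    w, h = img.size
    step = tile - overlap
    for x in range(0, max(w - overlap, 1), step):
        for y in range(0, max(h - overlap, 1), step):
            yield (x, y), img.crop((x, y, min(x + tile, w), min(y + tile, h)))

# Each tile is passed to the detector separately; boxes are then shifted back
# by their (x, y) offsets and merged, e.g. with non-maximum suppression.
```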
30 April 2025 11:29am
This scenario is not supported by SpeciesNet, but if your species are well supported in its training data, maybe we can work out a custom setup. Can you share what species you're seeing/expecting as "small animals"?
30 April 2025 7:00pm
Hi Ștefan!
In my current case, I am trying to detect and count Arctic fox pups. Unfortunately, the Arctic fox does not seem to be included in the training data of SpeciesNet, but even if it were, pups look quite different from adults.
After a quick correspondence with Dan Morris and Peter van Lunteren on the Addax AI GitHub, I was made aware of the image size option of MegaDetector. It seems to help somewhat to run the detection at full resolution (in my case up to 1920x1080). I have the impression that I get more good detections, and also fewer false detections (even without repeat_detection_elimination), by using higher resolution.
Dan offered to have a look at my specific challenge so I sent him 10K+ images with fox pups.
No-code custom AI for camera trap images!
25 April 2025 8:33pm
28 April 2025 7:03am
When you process videos, do you not first break them down into a sequence of images and then process the images? I'm confused about the distinction between processing videos versus images here.
28 April 2025 3:57pm
We do, but the way the models handle the images differs depending on whether they're coming from videos or static images. A quick example: videos provide movement information, which can be a way of distinguishing between species. We use an implementation of SlowFast for one of our video models that attempts to extract temporal information at different frequencies. If the model has some concept of "these images are time sequenced", it can extract that movement information, whereas if it's a straight image model, that concept doesn't have a place to live. But a straight image model can use more of its capacity for learning e.g. fur patterns, so it can perform better on single images. We did some experimentation along these lines and found that models trained specifically for images outperformed video models run on single images.
Hope that helps clear up the confusion. Happy to go on and on (and on)...
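To illustrate the "temporal information at different frequencies" point, here is a toy sketch of SlowFast-style dual-rate frame sampling; the sampling rates and tensor shape are made-up examples, not the actual model configuration:

```python
import torch

clip = torch.randn(3, 64, 224, 224)   # channels, frames, height, width
fast_pathway = clip[:, ::2, :, :]     # 32 frames: fine-grained motion cues
slow_pathway = clip[:, ::16, :, :]    # 4 frames: per-frame spatial detail
# The two pathways are processed by separate branches and fused, so the model
# can learn both movement and appearance.
```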
28 April 2025 5:58pm
Interesting. Thanks for the explanation. Nice to hear your passion showing through.
First battery-only test of the SBTS thermal smart camera system
16 March 2025 6:25pm
19 April 2025 3:16pm
Nice thermal camera study with the product you mention. I think it's the first serious work with thermal AI models. Kudos to the developers.
You should be aware of the resolution difference between that unit and the one I present in this article. The one above is a FLIR Lepton with 160x120 resolution, whereas the one I'm presenting is 640x512, roughly 17x more pixels. That's a lot of extra resolution.
The DOC AI unit above is cheaper than the unit I present, which also includes AI object detection.
I can offer a range of resolutions, currently 640x512, 384x288, and 1280x1024, as well as a large range of lenses, including zoomable lenses if you like, such as a 30-180mm thermal lens together with a 1280x1024 thermal sensor.
28 April 2025 4:58pm
Thanks for the insight Kim! It's awesome what you are doing! I am excited for any updates.
My colleagues, who are looking into getting a customised 2040 DOC AI thermal camera, need something with the battery life to be left in the field for weeks, due to the remoteness of the survey sites.
28 April 2025 5:56pm
Weeks of continuous inference would require a pretty big battery. I expect you would need some kind of customisation, and maybe quite a bit of compromise, to last weeks on a single battery. Good luck with that; power management is challenging.
Overview of Image Analysis and Visualization from Camera traps
28 April 2025 8:09am
SpeciesNet: first impressions from 37k images in Namibia
23 April 2025 11:28pm
What software to use?
6 March 2025 4:39pm
19 April 2025 1:08pm
Apologies, as I haven't tried this option myself: AddaxAI (previously known as EcoAssist). It can work offline; however, it currently only works for species for which a specific project has already been developed.
22 April 2025 7:39am
Actually, detection of ~2000 "global" species is now supported in AddaxAI via support for SpeciesNet.
23 April 2025 3:28am
That's great!
Dual-/Multi-Use Technology Strategies
1 April 2025 11:46pm
15 April 2025 6:17pm
That is a great point, and the current international trade climate has been making supply chains even more difficult. This also deeply affects US companies, given how much US goods manufacturing and assembly happens in China. Over the last few years, I have been seeing US hardware companies (e.g. drone platform and component OEMs) sourcing their goods from India, Turkey, Canada, and more recently African and South American nations. Because of the last 3-to-5 years of increasingly restrictive and costly international hardware trade, there has been an emergence of specialized component manufacturers internationally. For European companies interested in providing hardware services to the US, I would suggest diversifying the supply chain beyond China. Given the current climate and trends, that added supply chain resilience may be a good idea regardless of work with the US.
15 April 2025 7:36pm
This is more than the supply chain, though. The point was that the company itself cannot use any tech from the five companies for anything. So in my case, my ISP is incompatible. Essentially, the only companies I see making that kind of sacrifice are ones that want to devote themselves to defence only.
Of course, that's US defence as a customer. European defence is fully on the table.
It's just sad that it's not restricted to defence. US government wildlife organisations cannot buy European tech unless that European company is pure in their eyes.
15 April 2025 8:37pm
True, the US ecosystem is a challenging space right now, for basically all sectors.
We should not let the US chaos prevent us from engaging with opportunities in other nations' multi-use markets. A company's ability and journey to tap into other markets is very specific to them (product, team, finances, infrastructure, agility), and some simply cannot adapt. There is no one-size-fits-all (or even most) solution when it comes to multi-use strategies. It is important that we are systematic about evaluating the cost of adapting our product-service to a different market, and the value of new opportunities in that market, without losing track of underlying conservation and social good needs.
Conservation Applications for Google Solar API
12 April 2025 11:35am
14 April 2025 12:54pm
Do you know if there are plans to expand this beyond the global North? Many folks in this community live and/or work outside the areas this dataset is limited to, and thus would not be able to make use of it.
14 April 2025 6:27pm
Yes, I know about this big limitation. As far as I know, they are working to increase the coverage available for this solution. For trusted developers, more regions are available.
More information about the coverage can be found here:
DeepFaune v.1.3 is out!
14 April 2025 3:50pm
Remote downloads of camera traps
10 April 2025 4:24pm
Auto-processing of Indian images with Camera traps
8 April 2025 7:30am
8 April 2025 6:29pm
[Full disclosure: this question was also posted to the AI for Conservation Slack, and I'm copying and pasting my answer from there to here.]
The first thing I would recommend is putting a very fine point on what you mean by "automated". There are close to zero cases where camera trap image processing is fully automated in the sense of every image being classified to species level without human intervention, and the cases where that happens tend to be pretty simple cases with a small number of very stable cameras, or a very small number of species (e.g. in semi-captive environments). That doesn't mean automation is impossible, but total automation with 100% accuracy is *probably* impossible, so you have to pick the compromises you're comfortable with. I recommend thinking about what you care about most and how much human time you can afford to get there... e.g., maybe you want to make sure you achieve 90% recall on species x/y/z, and you don't care what happens to the other species, and you can have humans spending k hours per month on image review.
Once you have goals that are as quantitative as possible and you're focused on efficiency, rather than total automation, it becomes easier to evaluate existing systems, whether or not they are even aware of your specific species (this is not always a requirement for an AI system to help you meet your goals, as long as that AI system has seen species that are visually similar to your species).
To help people get pointed in the right direction wrt choosing a system, I usually start with this series of questions:
http://lila.science/camera-trap-questions
Maybe those questions are useful just to get you thinking about your system, but if you want to talk through the implications of some of the answers, feel free to reply here or to email me at agentmorris@gmail.com .
Best Practices for Managing and Publishing Camera Trap Data
4 April 2025 3:10pm
Nature Tech for Biodiversity Sector Map launched!
1 April 2025 1:41pm
4 April 2025 1:57pm
Anyone used Starlink with camera traps before?
11 September 2024 4:58am
15 October 2024 2:41pm
Hopefully the extra batteries Graeme has ordered will help with a bit of redundancy for rainy/cloudy days and at night. As you and @tve suggested @kimhendrikse, something akin to a 'threshold' minimum amount of storage might be a good way to manage the system/uploads. Since Graeme has that solar controller app, he can actually see when there's enough charge or not (I think?), so perhaps there's an alert for a low level of battery charge @GraemeTonkin?
Cheers,
Rob
15 October 2024 7:15pm
Really nice to see you put so much effort in for the birds Graeme. Top!
29 March 2025 2:24pm
I'm using a cheap WiFi router (brand Comfast) that has a USB port, to which I connected a 1GB USB SSD drive. I flashed it with OpenWRT, rebuilt ffmpeg (as ffmpeg-custom) to support my cameras' proprietary codecs, and use some shell scripts of mine for the recording. The cameras are:
- Hamrol rtsp://$IP_ADDRESS:554/user=${LOGIN}_password=${PASSWORD}_channel
- Tesla Czech rtsp://admin:admin@$IP_ADDRESS:8554/Streaming/Channels/101
That is the part recording 8 cameras to the SSD drive (1GB fits 10 days). Then I have some code of mine that streams from the SSD drive over Starlink to my VPS server.
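For anyone wanting to replicate the recording side, a minimal sketch of what such a script can look like, driving ffmpeg from Python; the camera URL, segment length, and output path are placeholder assumptions, not the poster's actual setup:

```python
import subprocess

# Copy one RTSP stream to 10-minute MP4 segments on the SSD, no re-encoding.
url = "rtsp://admin:admin@192.168.1.10:8554/Streaming/Channels/101"  # placeholder
subprocess.run([
    "ffmpeg", "-rtsp_transport", "tcp", "-i", url,
    "-c", "copy",                    # keep the camera's codec as-is
    "-f", "segment", "-segment_time", "600", "-reset_timestamps", "1",
    "/mnt/ssd/cam1_%05d.mp4",
])
```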
It would work fine, but I had to stop the Starlink streaming part because my Starlink router gets very, very hot (I cannot hold my hand on it), and around noon Starlink restarts multiple times. It is a 2nd-generation Starlink. I don't know whether this is due to the router being hot or the antenna being hot; I see no "hot" message in the Starlink Android app, which some people say you can see. It is very hot here during summer in the Philippines, and the antenna is high up a tree so I cannot touch it (but it must be very hot even just from the sun).
I am investigating how to cool down at least the router. Then maybe I could stream at least during the night, when a burglary is more probable and they could steal even the SSD drive. Or maybe the 3rd-generation Starlink no longer overheats like this? Dunno.
Turn an old smartphone into an AI camera trap?
24 September 2021 1:08pm
3 December 2024 12:08pm
Hi @rcz133 do you have any news regarding your project? thanks for sharing :)
13 December 2024 8:00pm
Yes!
I actually have a prototype up. The short summary is that it does most of the things I wanted it to do, but consumes too much power in the process.
Longer story: The prototype demonstrates the following tech:
- Looping video which allows “negative trigger delay”
- Daytime trigger via camera sensor
- Nighttime trigger via built-in LIDAR sensor
- Trigger based on MegaDetector 6c
- Saves video into iPhone video library
Unfortunately, as it stands (and as was noted earlier in this thread), it consumes too much power while doing all this. The problem, BTW, is not the AI stuff, or even the active LIDAR sensor; it's keeping the basic video processing pipeline running, which is necessary for the "negative trigger delay" and the AI-based trigger mechanism.
This is definitely in the "proof of concept" and debug phase, and lacks such "niceties" as robustness, UI design, etc. It's very far away from the App Store.
I've taken a break from the project to contemplate options for power reduction. I've been meaning to clean up and make public the GitHub site with the prototype code, as well as some documentation. If there's interest in this group, especially from anyone with the time and interest to look more closely at the power problem, let me know and I'll move it up my priority list.
28 March 2025 9:29am
Hi @rcz133, thanks for the update; it looks promising! An external PIR motion sensor, like the one @EDsteve developed, could help reduce the power consumption related to the video pipeline, though it requires an additional part beyond the smartphone itself.
Once you share it, maybe someone will have a solution for the power problem; let's stay in touch for updates.
Thanks,
Best,
Graphic interface for wildlife species ID of camera trap data?
27 March 2025 4:02pm
27 March 2025 7:45pm
Training remains somewhat more of a hassle than inference, but thanks to a WILDLABS grant, our friendly neighborhood machine learning folks at DrivenData are working to narrow that gap:
https://wildlabs.net/discussion/wildlabs-awards-2024-no-code-custom-ai-camera-trap-species-classification
I know that's just a post about a thing they are going to build, but I wouldn't have posted that link if I weren't 100.0000% confident that it was not vaporware, so, stay tuned.
Also, I know I'm super-biased, so take this with a grain of salt, but in my experience the recent release of our SpeciesNet model changes the equation significantly re: when it's worth training your own model. SpeciesNet saw tons of PNW data in training, and I've run it on a gazillion images from the PNW since its release, and it works quite well. If it doesn't work as well as you'd like out of the box, consider doing some postprocessing as an alternative to training your own model. It can be tempting to compare a model that doesn't exist yet to one that does, and assume that the former will be perfect, but no model is perfect, and even if the custom model would be a little better, the time it takes to train and maintain a custom model may be more than the time you would save thanks to the delta in accuracy.
OK, taking off my super-biased hat now, YMMV.
Time-lapse cameras for monitoring nesting birds in the Arctic
11 January 2023 6:37am
20 January 2023 1:03am
Thanks Alasdair!
The Plotwatchers and Brinnos didn't require any solar for the 500K on 4 AA batteries. We place the cams near the nest (actually at the nest, peering into the nest bowl) with a new design I came up with in which the only things above ground are a ribbon cable and the camera board, attached to a metal rod we lag-bolt to the tundra; the batteries and main board are in a 1020 Pelican case and buried. See below for an image of the above-ground portion of the cam and an image of a nest from a cam (if you look closely you can see one of the eggs just hatched, and there are now 3 eggs and 1 chick in the bowl).
I'd be very interested in what you all are working on for the next design. How small would it be?
27 January 2023 11:28am
Thanks Chris,
Probably quite similar in size to your existing setup above, but we'd most likely use two rechargeable Li-ion batteries (could be an 18500). I'll be sure to share more information later this year.
Cheers,
Alasdair
27 March 2025 4:03pm
Hi all,
I'm interested in this post for another context: tropical rainforest. I want to monitor forest clearance in the Congo basin rainforest through time-lapse images. Unfortunately, the Bushnell camera I used got stolen, so I'm looking for a tiny time-lapse camera that will be more difficult to detect.
@chris_latty The picture you shared in the comments looks promising. How did you do it? From a Brinno camera? Thanks.
Technical Report: Using the FLIR ONE Thermal Camera to detect wild animals
27 March 2025 11:57am
Nature Tech Unconference - Anyone attending?
8 March 2025 12:11pm
15 March 2025 8:28am
Definitely!
21 March 2025 12:07pm
The Futures Wild team will be there :)
26 March 2025 7:54pm
Yep, see you on Friday.
ICCB 2025 – Let’s Connect!
9 March 2025 5:06am
12 March 2025 1:55pm
Hi everyone, I'm excited to become a member of WILDLABS! I'm currently working on my master's thesis, focusing on dormouse conservation. My research explores the behavioral responses of dormice to temperature and habitat patterns using camera trap data.
Additionally, I’d like to incorporate agent-based modeling to simulate species behavior. However, I’m a bit unsure about how to effectively apply modeling for predictions. If anyone here has experience with modeling, I’d love to connect and discuss!
Looking forward to learning from you all.
Best regards,
Cellular Camera Traps in Europe
6 March 2025 12:17pm
10 March 2025 3:46am
We manufacture the DOC AI Cam. It's a thermal camera, so particularly good for nocturnal animals. It comes with a 4G modem. It might meet your needs?
Blurring of humans in camera trap data
20 February 2025 9:31am
27 February 2025 11:26am
Out of curiosity, as we have some projects collecting videos, what would be the best solution for blurring video content?
27 February 2025 11:26am
Thank you for sharing!
7 March 2025 5:36pm
The magic of the WILDLABS community! @Chelsea_Smith @dmorris @pvlun thanks for showing everyone how it's done :) Chelsea, would be great to hear how this works out for you whenever you can share an update!
Arctic Community Wildlife Grants Program
7 March 2025 3:59pm
Waterproof wildlife cam options for island
5 March 2025 8:37pm
6 May 2025 4:53pm
I think the app sort of covers this, using the pie chart overlay on the Leaflet map in the activity tab. However, it would be nice to have a more direct way of visualizing species richness (i.e. scale the radii of the circle markers with the number of species detected). In addition, you may want to think about visualizing simple diversity indices (alpha diversity captures the species richness at each trap, gamma diversity summarizes the total richness in the area, and beta diversity assesses how different species compositions are between traps). Note: I do not use diversity indices often enough to provide more specific guidance. @ollie_wearn is this what you were referring to?
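As a quick illustration of those three indices, a minimal sketch on a toy detection table (column names and data are made up; Whittaker's multiplicative beta is used here, one of several definitions):

```python
import pandas as pd

# One row per detection event; "trap" and "species" are assumed column names.
obs = pd.DataFrame({
    "trap":    ["A", "A", "A", "B", "B", "C"],
    "species": ["fox", "hare", "fox", "hare", "lynx", "fox"],
})

alpha = obs.groupby("trap")["species"].nunique()  # richness at each trap
gamma = obs["species"].nunique()                  # total richness in the area
beta = gamma / alpha.mean()                       # turnover between traps
print(alpha.to_dict(), gamma, round(beta, 2))
```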
I can help you with setting up the models in R, Python, or Stan/C++. But I have no idea how to overcome the technical challenge you are referring to. I agree with @ollie_wearn that allowing variable selection and model building would take this too far. One thing I would suggest is to allow users to fit a spatially explicit version of the basic occupancy model (mind that these can be slow). This type of model leverages the correlations in species detections between trap locations to estimate differences in occupancy across the study area, rather than just estimating a single occupancy for the entire area.
Yes, I would give users the following options: a UI switch that lets them pick either metric to visualize on the Leaflet map, with a bar chart on the side in the activity tab.
Regarding density: you could add a tab that calculates density using the Random Encounter Model (REM), which is often used to estimate the density of unmarked animals without information on recaptures (see the sketch below).
Regarding activity patterns: I would also add a tab where users can visualize diel or annual activity cycles (often called activity patterns) computed through the activity R package (or integrate this into an existing tab). And maybe even allow computing overlap in daily activity cycles among selected species.
If you manage to include all of these, then I think your app covers 90% of use cases.
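A minimal sketch of the REM point estimate (Rowcliffe et al. 2008), with made-up parameter values; a real tab would also need uncertainty estimates, e.g. via bootstrapping:

```python
import math

def rem_density(n_encounters, camera_days, speed_km_per_day, radius_km, angle_rad):
    """Random Encounter Model: D = (y/t) * pi / (v * r * (2 + theta))."""
    trap_rate = n_encounters / camera_days
    return trap_rate * math.pi / (speed_km_per_day * radius_km * (2 + angle_rad))

# e.g. 40 encounters over 1000 camera-days, 2 km/day travel speed,
# 10 m detection radius, 40-degree detection zone
print(rem_density(40, 1000, 2.0, 0.01, math.radians(40)))  # animals per km^2
```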
Some other features worth discussing: