Group

Camera Traps / Feed

Looking for a place to discuss camera trap troubleshooting, compare models, collaborate with members working with other technologies like machine learning and bioacoustics, or share and exchange data from your camera trap research? Get involved in our Camera Traps group! All are welcome whether you are new to camera trapping, have expertise from the field to share, or are curious about how your skill sets can help those working with camera traps. 

discussion

Prototype for exploring camera trap data

Hi, I would like to start a discussion around a prototype that aims to improve how camera trap data is consumed. How is it different (in theory) from existing tools? I...

36 4
  • Regarding species richness, isn't it covered by the activity tab where we can see the entire list of detected species? What else do you think would be helpful? I can imagine a better UI focused on species, with more information for each species and easier search, but the raw info would be roughly the same.

I think the app sort of covers this, using the pie chart overlay on the Leaflet map in the activity tab. However, it would be nice to have a more direct way of visualizing species richness (e.g. scale the radii of the circle markers with the number of species detected). In addition to this, you may want to think about visualizing simple diversity indices (there's alpha diversity, which captures the species richness at each trap; gamma diversity, to summarize the total richness in the area; and beta diversity, to assess how different species compositions are between traps). Note: I do not use diversity indices often enough to provide more specific guidance. @ollie_wearn is this what you were referring to?
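For concreteness, here is a minimal sketch of those three indices in Python, assuming you can derive a site-by-species detection matrix from the app's data (the matrix and values below are invented):

import numpy as np

# rows = camera traps, columns = species; 1 = species detected at that trap
detections = np.array([
    [1, 1, 0, 1],   # trap A
    [1, 0, 0, 0],   # trap B
    [0, 1, 1, 1],   # trap C
])

alpha = detections.sum(axis=1)                   # richness per trap
gamma = int((detections.sum(axis=0) > 0).sum())  # total richness in the area
beta = gamma / alpha.mean()                      # Whittaker's multiplicative beta

print(f"alpha per trap: {alpha}, gamma: {gamma}, beta: {beta:.2f}")

Scaling the circle-marker radii by alpha would then be a small change on the Leaflet side.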

 

  • Regarding occupancy, that makes sense to me. The only challenge is that I'm not using Python or R at the moment, because we use JavaScript for easier user interfaces with web technologies. It's still possible to add Python support, but I'd delay that as long as possible to keep things simple. Anyway, it's a technical challenge that I'm pretty sure I can solve.

I can help you with setting up the models in R, Python or Stan/C++. But I have no idea how to overcome the technical challenge that you are referring to. I agree with @ollie_wearn that allowing variable selection and model building would take this too far. One thing I would suggest is to allow users to fit a spatially-explicit version of the basic occupancy model (mind that these can be slow). This type of model leverages the correlations in species detections between trap locations to estimate differences in occupancy across the study area, rather than just estimating a single occupancy for the entire area.
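For reference, a minimal sketch of the basic (non-spatial) single-season occupancy model, fit by maximum likelihood in Python; this assumes constant occupancy and detection probabilities, and the detection histories below are invented:

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # inverse logit

# rows = sites (traps), columns = repeat visits; 1 = species detected
y = np.array([
    [0, 1, 0],
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
    [1, 0, 1],
])

def neg_log_lik(params):
    psi, p = expit(params)   # occupancy and detection probabilities
    d = y.sum(axis=1)        # detections per site
    J = y.shape[1]           # visits per site
    lik = psi * p**d * (1 - p)**(J - d)  # occupied, with this detection history
    lik = lik + (d == 0) * (1 - psi)     # all-zero sites may simply be unoccupied
    return -np.log(lik).sum()

fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
psi_hat, p_hat = expit(fit.x)
print(f"occupancy = {psi_hat:.2f}, detection probability = {p_hat:.2f}")

The spatially-explicit versions replace the single psi with a spatially correlated surface, which is where the extra computational cost (and slowness) comes from.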

 

  • What about relative abundance? My readings always mention occupancy, species richness, abundance and density. I've read and asked enough about density to know that it's hard for unmarked animals and might not be necessary. If I understood correctly, relative abundance can give similar insights, and as a conservationist you probably want to know the trend of relative abundance over time.

Yes, I would leave users with the following options: a UI switch that lets them pick one of:

  1. the number of observations
  2. the number of individuals detected
  3. the relative abundance index or RAI (based on 1.)
  4. the RAI (based on 2.)

to visualize on the Leaflet map, and on the bar chart on the side in the activity tab.
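For illustration, options 1 and 3 above might look like this, with RAI computed as detections per 100 trap-nights (the column names and effort values are assumptions, not part of the app):

import pandas as pd

obs = pd.DataFrame({
    "trap":    ["A", "A", "B", "B", "B"],
    "species": ["fox", "deer", "fox", "fox", "deer"],
})
effort = pd.Series({"A": 120, "B": 90}, name="trap_nights")  # nights per trap

counts = obs.groupby(["trap", "species"]).size().unstack(fill_value=0)
rai = counts.div(effort, axis=0) * 100   # detections per 100 trap-nights
print(rai)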

 

Regarding density: you could add a tab that calculates density using the Random Encounter Model (REM), which is often used when estimating density of unmarked animals without info on recaptures.
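For reference, the REM point estimate itself is a one-liner (Rowcliffe et al. 2008); all parameter values below are illustrative, not field estimates:

import math

y = 42          # independent detections
t = 1000.0      # camera effort (trap-days)
v = 1.2         # day range: distance travelled per day (km/day)
r = 0.01        # detection radius (km)
theta = 0.175   # detection arc (radians)

density = (y / t) * math.pi / (v * r * (2 + theta))
print(f"~{density:.1f} animals per km^2")

The hard part in practice is estimating v, r and theta, not evaluating the formula.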

 

Regarding activity patterns: I would also add a tab where users can visualize either diel or annual activity cycles (often called activity patterns), computed with the 'activity' R package (or integrate this into an existing tab). And maybe even allow computing overlap in daily activity cycles among selected species.
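To make the overlap idea concrete, here is a crude Python sketch of the overlap coefficient; the 'activity' and 'overlap' R packages do this properly with circular kernels, and the detection times below are simulated:

import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import gaussian_kde

def circular_density(times_rad, grid):
    # cheap trick: replicate points one period to each side so the KDE
    # behaves roughly periodically on [0, 2*pi)
    wrapped = np.concatenate([times_rad - 2*np.pi, times_rad, times_rad + 2*np.pi])
    dens = gaussian_kde(wrapped)(grid) * 3   # mass was split across 3 copies
    return dens / trapezoid(dens, grid)      # renormalise on the grid

rng = np.random.default_rng(0)
hours_a = rng.normal(6, 1.5, 200) % 24    # dawn-active species
hours_b = rng.normal(20, 2.0, 200) % 24   # dusk-active species

grid = np.linspace(0, 2*np.pi, 512)
fa = circular_density(hours_a / 24 * 2*np.pi, grid)
fb = circular_density(hours_b / 24 * 2*np.pi, grid)
delta = trapezoid(np.minimum(fa, fb), grid)   # Ridout & Linkie's coefficient
print(f"activity overlap (0 = none, 1 = identical): {delta:.2f}")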

 

If you manage to include all of these, then I think your app covers 90% of use cases.

Some other features worth discussing:

  1. merging/comparing multiple projects
  2. in line with calculating overlap between activity cycles, allow computing a spatial overlap between two species, or even a spatio-temporal overlap

 

@Jeremy_ For the Python implementation of basic occupancy models (as suggested by @ollie_wearn ), please refer to these two projects:

I second @martijnB's suggestion to use spatially explicit occupancy models (as implemented in R, e.g., https://doserlab.com/files/spoccupancy-web/). However, this would need to be added to both of the aforementioned Python projects.

Lively and informative discussion, I would very much like to contribute if there is some active development work with regard to this.
I have recent experience with using Model Context Protocol (MCP) to integrate various tools & data repositories with LLMs like Claude. I believe this could be a good path whereby we can do the following:
1. use the images & labels, along with any metadata, and chunk/index/store them in a vector DB
2. integrate with existing data sources by exposing the data through an MCP server
3. use MCP-friendly LLM clients (like Claude) to query, visualize and do other open-ended things, leveraging the power of LLMs and camera trap data from various sources.
 

Regards,

Ajay

See full post
discussion

'Boring Fund' Workshop: AI for Biodiversity Monitoring in the Andes

Thanks to WILDLABS 'Boring Fund' support, we are hosting a workshop on AI for biodiversity monitoring in Medellín, Colombia, April 21st to 24th. This is a follow-up discussion to...

4 14

Hey @benweinstein , this is really great. I bet there are better ways to find bofedales (puna fens) currently than what existed back in 2010. I'll share this with the Audubon Americas team.  

Hi everyone, following up here with a summary of our workshop!

The AI for Biodiversity Monitoring workshop brought together twenty-five participants to explore uses of machine learning for ecological monitoring. Sponsored by the WILDLABS ‘Boring Fund’, we were able to support travel and lodging for a four-day workshop at the University of Antioquia in Medellín, Colombia. The goal was to bring together ecologists interested in AI tools and data scientists interested in working on AI applications from Colombia and Ecuador. Participants were selected based on potential impact on their community, their readiness to contribute to the topic, and a broad category of representation, which balanced geographic origin, business versus academic experience, and career progression.

Before the workshop began I developed a website on GitHub that laid out the aims of the workshop and provided a public focal point for uploading information. I made a number of technical videos, covering subjects like VS Code + Copilot, both to inform participants and to create an atmosphere of early and easy communication. The WhatsApp group, the YouTube channel (link) of video introductions, and a steady drumbeat of short tutorial videos were key in establishing expectations for the workshop.

The workshop material was structured around data collection methods: Day 1) Introduction and Project Organization, Day 2) Camera Traps, Day 3) Bioacoustics, and Day 4) Airborne Data. Each day I asked participants to install packages using conda, download code from GitHub, and be active in supporting each other in solving small technical problems. The large range of technical experience was key in developing peer support. I toyed with the idea of creating a JupyterHub or joint cloud working space, but I am glad that I resisted; it is important for participants to see how to solve package conflicts and the myriad other installation challenges on 25 different laptops.

We banked some early wins to help ease intimidation and create a good flow to the technical training. I started with GitHub and version control because it is broadly applicable, incredibly useful, and satisfying to learn. Using examples from my own work, I focused on GitHub as a way both to contribute to machine learning for biology and to receive help. Building from these command line tools, we explored VS Code + Copilot for automated code completion, and had a lively discussion on how to balance the utility of these new features with transparency and comprehension.

Days two, three and four flew by, with a general theme of existing foundational models, such as BirdNET for bioacoustics, MegaDetector for camera traps, and DeepForest for airborne observation. A short presentation each morning was followed by a worked Python example making predictions using new data, annotation using Label Studio, and model development with PyTorch Lightning. There is a temptation to develop Jupyter notebooks that outline perfect code step by step, but I prefer to let participants work through errors and keep a live coding strategy. All materials are in Spanish and updated on the website. I was proud to see the level of joint support among participants, and tried to highlight these contributions to promote autonomy and peer teaching.

Sprinkled amongst the technical sessions, I had each participant create a two-slide talk, and I would randomly select from the group to break up sessions and help stir conversation. I took it as a good sign that I was often quietly pressured by participants to select their talk in our next random draw. While we had general technical goals and each day had one or two main lectures, I tried to be nimble, allowing space for suggestions. In response to feedback, we rerouted an afternoon to discuss biodiversity monitoring goals and data sources. Ironically, the biologists in the room later suggested that we needed to get back to code, and the data scientists said it was great. Weaving between technical and domain expertise requires an openness to change.

Boiling down my takeaways from this effort, I think there are three broad lessons for future workshops.

  • The group dynamic is everything. Provide multiple avenues for participants to communicate with each other. We benefited from a smaller group of dedicated participants compared to inviting a larger number.
  • Keep the objectives, number of packages, and size of sample datasets to a minimum.
  • Foster peer learning and community development. Give time for everyone to speak. Step in aggressively as the arbiter of the schedule in order to allow all participants a space to contribute.

I am grateful to everyone who contributed to this effort both before and during the event to make it a success. Particular thanks goes to Dr. Juan Parra for hosting us at the University of Antioquia, UF staff for booking travel, Dr. Ethan White for his support and mentorship, and Emily Jack-Scott for her feedback on developing course materials. Credit for the ideas behind this workshop goes to Dr. Boris Tinoco, Dr. Sara Beery for her efforts at CV4Ecology and Dr. Juan Sebastian Ulloa. My co-instructors Dr. Jose Ruiz and Santiago Guzman were fantastic, and I’d like to thank ARM through the WILDLABS Boring fund for its generous support.    

See full post
discussion

AI Edge Compute Based Wildlife Detection

Hi all! I've just come across this site and these forums and it's exactly what I've been looking for! I'm based in Melbourne, Australia and since finishing my PhD in ML I've been...

17 1

Sorry, I meant ONE hundred million parameters.

The Jetson Orin NX has ~25 TOPS FP16 performance, and the large YOLOv6 model processing 1280x1280 input requires about 673.4 GFLOPs per inference. You should therefore theoretically get ~37 fps; you're unlikely to get this exact number, but you should get around that...
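For anyone wanting to reproduce the arithmetic, it is just compute divided by per-inference cost (a theoretical ceiling that ignores memory bandwidth, pre/post-processing and imperfect utilisation):

tops_fp16 = 25e12              # Jetson Orin NX, ~25 TOPS FP16
flops_per_inference = 673.4e9  # large YOLOv6 at 1280x1280
print(f"~{tops_fp16 / flops_per_inference:.0f} fps upper bound")  # ~37 fps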

Also, later YOLO models (7+) are much more efficient (they use fewer FLOPs for the same mAP50-95) and run faster.

Most neural-network inference-only accelerators (like Hailo's) use INT8 models and, depending on your use case, the resulting drop in accuracy may be acceptable.

Ah I see, thanks for clarifying.

BTW, YOLOv7 actually came out earlier than YOLOv6. YOLOv6 has higher precision and recall figures, and I noticed that in practice it was slightly better.

My suspicion is that it's not trivial to translate the layer functions from YOLOv6 or YOLOv9 to Hailo-specific ones without affecting quality in unknown ways. If you manage to do it, do tell :)

The acceptability of a drop in performance depends heavily on the use case. In security, if I get woken up twice a night versus once in six months, I don't care how fast it is; it's not acceptable for that use case for me.

I would imagine that for many wildlife traps as well, a false positive would mean having to travel out and reset the trap.

But as I haven't personally dropped quantization to 8 bits, I appreciate other people's insights on the subject. Thanks for your insights.

@LukeD, I am looping in @Kamalama997 from the TRAPPER team who is working on porting MegaDetector and other models to RPi with the AI HAT+. Kamil will have more specific questions.

See full post
discussion

We are releasing SpeciesNet

We're extremely excited to announce (and open source) SpeciesNet today, our AI model for species recognition in camera trap images, capable of recognising more than 2000 animals...

21 20

This is great news!



I am using rather high resolution images and have just ordered some 4K (8MP) camera traps.

The standard MegaDetector run via Addax AI is struggling a bit with detecting relatively small animals (frame-wise), although they have quite a number of pixels. This naturally follows from the resizing in MegaDetector.

I have noticed:

but this seems not readily available in Addax AI. Is it somehow supported in SpeciesNet?

Cheers,

Lars

Hi Ștefan

In my current case, I am trying to detect and count Arctic fox pups. Unfortunately, the Arctic fox does not seem to be included in the training data of SpeciesNet, but even if it were, pups look quite different from adults.

After a quick correspondence with Dan Morris and Peter van Lunteren on the Addax AI GitHub, I was made aware of the image size option of MegaDetector. It seems to help somewhat to run the detection at full resolution (in my case up to 1920x1080). I have the impression that I get more good detections, and also fewer false detections (even without repeat_detection_elimination), by using higher resolution.

Dan offered to have a look at my specific challenge so I sent him 10K+ images with fox pups.

See full post
discussion

No-code custom AI for camera trap images!

Thanks to our WILDLABS award, we're excited to announce that Zamba Cloud has expanded beyond video to now support camera trap images! This new functionality...

3 3

When you process videos, do you not first break them down into a sequence of images and then process the images? I'm confused as to the distinction between processing videos versus processing images here.

We do, but the way the models handle the images differs depending on whether they're coming from videos or static images. A quick example: videos provide movement information, which can be a way of distinguishing between species. We use an implementation of SlowFast for one of our video models that attempts to extract temporal information at different frequencies. If the model has some concept of "these images are time sequenced", it can extract that movement information, whereas if it's a straight image model, that concept doesn't have a place to live. But a straight image model can use more of its capacity for learning e.g. fur patterns, so it can perform better on single images. We did some experimentation along these lines and did find that models trained specifically for images outperformed video models run on single images.

Hope that helps clear up the confusion. Happy to go on and on (and on)...

See full post
discussion

First battery-only test of the SBTS thermal smart camera system

Last night I connected my SBTS local AI, thermal-enabled smart camera to a battery and placed it outside overlooking a field. I hoped to get some video of deer, which I've seen...

5 0

Nice thermal camera study with the product you mention. I think it’s the first serious work with thermal AI models. Kudos to the developers.

You should be aware of the resolution difference between this one and the one I present in this article. The one above is a FLIR Lepton with 160x120 pixel resolution, whereas the one I’m presenting is 640x512 resolution, about 17x more pixels. That’s a lot of extra resolution.

The DOC AI unit above is cheaper than the unit I present, which also includes AI object detection.

I can offer a range of resolutions, currently 640x512, 384x288 and 1280x1024, as well as a large range of lenses, including zoomable lenses if you like, such as a 30-180mm thermal lens together with a 1280x1024 resolution thermal sensor.

Thanks for the insight Kim! It's awesome what you are doing! I am excited for any updates. 

My colleagues, who are looking into getting a customised 2040 DOC AI Thermal camera, need something that has the battery life to be left in the field for weeks, due to the remoteness of the survey sites.

Weeks with continuous inference would require a pretty big battery. I expect you would need some kind of customisation, and maybe quite a bit of compromise, to last weeks on a single battery. Good luck with that. Power management is challenging.

See full post
Link

SpeciesNet: first impressions from 37k images in Namibia

I put together some initial experiences deploying the new SpeciesNet classifier on 37,000 images from a Namibian camera trap dataset and hope that sharing initial impressions might be helpful to others.

3
discussion

What software to use?

I am looking for reliable camera trap software to process images efficiently. Key considerations: ✅ Handles large datasets smoothly ✅ Allows for multiple people...

7 0
See full post
discussion

Dual-/Multi-Use Technology Strategies

Hi Everyone, I am new to the WildLabs community and relatively new to conservation technology. I have been working in this space since 2018 (marine and coral focused with NOAA),...

9 1

That is a great point, and the current international trade climate has been making supply chains even more difficult. This also deeply affects US companies, given how much US goods manufacturing and assembly happens in China. Over the last few years, I have been seeing US hardware companies (e.g. drone platform and component OEMs) sourcing their goods from India, Turkey, Canada, and more recently African and South American nations. Because of the last 3-to-5 years of increasingly restrictive and costly international hardware trade, there has been an emergence of specialized component manufacturers internationally. For European companies interested in providing hardware services to the US, I would suggest diversifying the supply chain beyond China. Given the current climate and trends, that added supply chain resilience may be a good idea, regardless of work with the US.

This is more than the supply chain though. The point was that the company itself cannot use any tech from the 5x companies for anything. So in my case, my ISP is incompatible. Essentially, the only companies I see making that kind of sacrifice are ones that want to devote themselves to defence only.


Of course. That’s US defense as a customer. European defence is fully on the table.


It’s just sad that it’s not restricted to defence. US government wildlife organisations cannot buy European tech unless that European company was pure in their eyes.

True, the US ecosystem is a challenging space right now, for basically all sectors. 

We should not let the US chaos prevent us from engaging with opportunities in other nations' multi-use markets. A company's ability and journey to tap into other markets is very unique to them (product, team, finances, infrastructure, agility), and some simply cannot adapt. There is no one-size-fits-all (or even most) solution when it comes to multi-use strategies. It is important that we are systematic about evaluating the cost to adapt our product-service to a different market, and the value of new opportunities in that new market, without losing track of underlying conservation and social good needs.

See full post
discussion

Conservation Applications for Google Solar API

We are developing a tool to research possible conservation applications of the recent Google Solar API. Solar API is a Google service to provide high...

2 1

Do you know if there are plans to expand this beyond the global North? Many folks in this community are living and/or working outside the areas this dataset is limited to, and thus would not be able to make use of it.

Yes, I know about this big limitation.

As far as I know they are working to increase the coverage available for this solution.

For trusted developers, there are more regions available.

More information about the coverage can be found here:

See full post
discussion

DeepFaune v.1.3 is out!

Hi, just wanted to let whoever is interested know that v1.3 of DeepFaune is out! DeepFaune is software that runs on any standard computer and allows you to identify species in...

3
See full post
discussion

Auto-processing of Indian images with Camera traps

My name is Vinay (LinkedIn profile) and I have just started a company in biodiversity conservation in Bangalore, India. Brief about Urvara.Life ...

1 0

[Full disclosure: this question was also posted to the AI for Conservation Slack, and I'm copying and pasting my answer from there to here.]

The first thing I would recommend is putting a very fine point on what you mean by "automated".  There are close to zero cases where camera trap image processing is fully automated in the sense of every image being classified to species level without human intervention, and the cases where that happens tend to be pretty simple cases with a small number of very stable cameras, or a very small number of species (e.g. in semi-captive environments).  That doesn't mean automation is impossible, but total automation with 100% accuracy is *probably* impossible, so you have to pick the compromises you're comfortable with.  I recommend thinking about what you care about most and how much human time you can afford to get there... e.g., maybe you want to make sure you achieve 90% recall on species x/y/z, and you don't care what happens to the other species, and you can have humans spending k hours per month on image review.

Once you have goals that are as quantitative as possible and you're focused on efficiency, rather than total automation, it becomes easier to evaluate existing systems, whether or not they are even aware of your specific species (this is not always a requirement for an AI system to help you meet your goals, as long as that AI system has seen species that are visually similar to your species).
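For illustration, here is a minimal sketch of what a quantitative goal like that can look like in code: picking the highest confidence threshold that still hits a recall target on a labelled validation set (all numbers below are invented):

import numpy as np

# model confidences and ground truth (1 = focal species present) per image
conf  = np.array([0.95, 0.90, 0.80, 0.70, 0.60, 0.40, 0.30, 0.20])
truth = np.array([1,    1,    0,    1,    1,    0,    1,    0])

target_recall = 0.9
for t in np.sort(np.unique(conf))[::-1]:    # try the highest threshold first
    recall = (conf[truth == 1] >= t).mean()
    if recall >= target_recall:
        print(f"threshold {t:.2f} gives recall {recall:.2f}")
        break

Everything below that threshold then goes to your k hours per month of human review.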

To help people get pointed in the right direction wrt choosing a system, I usually start with this series of questions:

http://lila.science/camera-trap-questions

Maybe those questions are useful just to get you thinking about your system, but if you want to talk through the implications of some of the answers, feel free to reply here or to email me at agentmorris@gmail.com .

See full post
article

Nature Tech for Biodiversity Sector Map launched!

Carly Batist and 1 more
Conservation International is proud to announce the launch of the Nature Tech for Biodiversity Sector Map, developed in partnership with the Nature Tech Collective! 

1 0
Thanks for sharing @carlybatist and @aliburchard! About the first point, lack of data integration and interpretation will be a bottleneck, if not a death blow, to the whole...
See full post
discussion

Anyone used Starlink with camera traps before?

Hi everyone, a quick question. Does anyone have experience using Starlink with camera traps before? If so, which brand/model of camera did you use, and how did you go setting it all up...

31 3

Hopefully the extra batteries Graeme has ordered will help with a bit of redundancy for rainy/cloudy days and at night. As you and @tve suggested @kimhendrikse, something akin to a 'threshold' minimum amount of storage might be a good way to manage the system/uploads. Since Graeme has that solar controller app, he can actually see when there's enough charge/not enough (I think?), so perhaps there's an alert for a low level of battery charge @GraemeTonkin?

Cheers,

Rob

I'm using a cheap WiFi router (brand Comfast) which has a USB port, where I connected a USB SSD drive (1GB). I have flashed it with OpenWrt, rebuilt ffmpeg (as ffmpeg-custom) to support the proprietary codecs of my cameras, and use some shell scripts of mine for the recording. The cameras are:

  • Hamrol rtsp://$IP_ADDRESS:554/user=${LOGIN}_password=${PASSWORD}_channel
  • Tesla Czech rtsp://admin:admin@$IP_ADDRESS:8554/Streaming/Channels/101

That is the part that records 8 cameras to the SSD drive (1GB fits 10 days). Then I have some code of mine that streams it from the SSD drive over Starlink to my VPS server.
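For anyone wanting to replicate the recording part, a minimal sketch of the same idea as a Python wrapper around stock ffmpeg segmenting options; the URL and paths are placeholders, and this assumes your ffmpeg build can demux your cameras' streams:

import subprocess

url = "rtsp://admin:admin@192.168.1.10:8554/Streaming/Channels/101"
subprocess.run([
    "ffmpeg", "-rtsp_transport", "tcp", "-i", url,
    "-c", "copy",                              # no re-encode: cheap on a router CPU
    "-f", "segment", "-segment_time", "3600",  # one file per hour
    "-strftime", "1", "/mnt/ssd/cam1_%Y%m%d_%H%M%S.mp4",
])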

It would work fine, but I had to stop the Starlink streaming part, as my Starlink router gets very, very hot (I cannot hold my hand on it) and around noon Starlink restarts multiple times. It is Starlink 2nd generation. I am unaware whether it is due to the router being hot or the antenna being hot; I see no "hot" message in the Starlink Android app, which some people say one could see. It is very hot here during summer in the Philippines, but the antenna is high on a tree so I cannot touch it (though it must be very hot even just from the sun).

I am investigating how to cool down at least the router. Then maybe I could stream at least during the nights, when a burglary is more probable and they could steal even the SSD drive. Or maybe the Starlink 3rd gen no longer overheats like this? Dunno.

See full post
discussion

Turn old smartphone into AI camera trap?

I know that there are several AI camera trap developments ongoing, from the PoacherCam to TrailGuard, etc... I also know that it is possible to turn an old phone into a security...

15 2

iPhone trail camera? Yes!

I actually have a prototype up.  Short summary is that it does most of the things I wanted it to do, but consumes too much power in the process.  

Longer story: The prototype demonstrates the following tech:

  • Looping video which allows “negative trigger delay” 
  • Daytime trigger via camera sensor
  • Nighttime trigger via built-in LIDAR sensor
  • Trigger based on MegaDetector 6c
  • Saves video into iPhone video library

Unfortunately, as it stands (and as was noted earlier in this thread), it consumes too much power while doing all this. The problem, BTW, is not the AI stuff, or even the active LIDAR sensor – it’s keeping the basic video processing pipeline running, which is necessary for both the "negative trigger delay" and the AI-based trigger mechanism.

This is definitely in the “proof of concept” and debug phase, and lacks such “niceties” as robustness, UI design, etc. It’s very far away from the App Store.

I’ve taken a break from the project to contemplate options for power reduction. I’ve been meaning to clean up and make public the GitHub site with the prototype code, as well as some documentation. If there’s interest in this group, especially from anyone with time/interest to look more closely at the power problem, let me know, and I’ll move it up my priority list.

 

Hi @rcz133, thanks for the update, it looks promising! An external PIR motion sensor, as @EDsteve developed, could help reduce the power consumption related to the video pipeline, but it would require an additional part beyond the smartphone itself.

Once you share it, maybe someone will have a solution; let's stay in touch for potential updates.

 

Thanks,

Best,

See full post
discussion

Graphic interface for wildlife species ID of camera trap data?

There seems to be lots of machine learning options to identify objects and wildlife species in R or Python.  However, if you are R or Python "challenged" like me, it...

1 1

Training remains somewhat more of a hassle than inference, but thanks to a WILDLABS grant, our friendly neighborhood machine learning folks at DrivenData are working to narrow that gap:

https://wildlabs.net/discussion/wildlabs-awards-2024-no-code-custom-ai-camera-trap-species-classification

I know that's just a post about a thing they are going to build, but I wouldn't have posted that link if I weren't 100.0000% confident that it was not vaporware, so, stay tuned.

Also, I know I'm super-biased, so take this with a grain of salt, but in my experience the recent release of our SpeciesNet model changes the equation significantly re: when it's worth training your own model.  SpeciesNet saw tons of PNW data in training, and I've run it on a gazillion images from the PNW since its release, and it works quite well.  If it doesn't work as well as you'd like out of the box, consider doing some postprocessing as an alternative to training your own model.  It can be tempting to compare a model that doesn't exist yet to one that does, and assume that the former will be perfect, but no model is perfect, and even if the custom model would be a little better, the time it takes to train and maintain a custom model may be more than the time you would save thanks to the delta in accuracy.

OK, taking off my super-biased hat now, YMMV.

 

 

See full post
discussion

Time-lapse cameras for monitoring nesting birds in the Arctic

Hi all, I'm a biologist at Arctic NWR and have been using time-lapse cameras for about a decade now to monitor nesting birds. We have used Plotwatcher Pros and Brinno TLC200s with...

8 0

Thanks Alasdair! 

The Plotwatchers and Brinnos didn't require any solar for the 500K on 4 AA batteries. We place the cams near the nest (actually at the nest, peering into the nest bowl, with a new design I came up with where the only thing above ground is a ribbon cable and the camera board attached to a metal rod we lag bolt to the tundra; the batteries and main board are in a 1020 Pelican case and buried; see below for an image of the above-ground portion of the cam and an image of a nest from a cam [if you look closely you can see one of the eggs just hatched and there are now 3 eggs and 1 chick in the bowl]).

[images: above-ground portion of the camera, and a nest view from a camera]

I'd be very interested in what you all are working on for the next design. How small would it be?

Thanks Chris,

Probably quite similar in size to your existing setup above, but we'd most likely use two Li-ion rechargeable batteries (could be an 18500). I'll be sure to share more information later this year.

Cheers,

Alasdair

Hi all,

I'm interested in this post for another context: tropical rainforest. I want to monitor forest clearance in the Congo basin rainforest through time-lapse images. Unfortunately, the Bushnell camera I used got stolen, so I'm looking for a tiny time-lapse camera that will be more difficult to detect.

@chris_latty The picture you shared in the comments looks promising. How did you do it? From a Brinno camera? Thanks.

See full post
discussion

Nature Tech Unconference - Anyone attending?

Hi all, anyone planning to attend the Nature Tech Unconference on 28th March at the London School of Economics Campus in London, UK? (the event is free to attend but...

8 1

The Futures Wild team will be there :)

See full post
discussion

ICCB 2025 – Let’s Connect!

Hi Everyone, I’m excited to be attending my first ICCB 2025 as a student presenter and early-career researcher! My work sits at the intersection of computational epidemiology and...

1 1

Hi everyone, I’m excited to become a member of WILDLABS! I’m currently working on my master’s thesis, focusing on dormouse conservation. My research explores the behavioral responses of dormice to temperature and habitat patterns using camera trap data.

Additionally, I’d like to incorporate agent-based modeling to simulate species behavior. However, I’m a bit unsure about how to effectively apply modeling for predictions. If anyone here has experience with modeling, I’d love to connect and discuss!

Looking forward to learning from you all.

Best regards,

See full post
discussion

Cellular Camera Traps in Europe

Hi All, looking for a camera trap that will work on a European 4G network (Greece). I've identified a few, but they have questionable reviews/experiences. Has anybody here deployed...

1 0

We manufacture the DOC AI Cam. It's a thermal camera, so particularly good for nocturnal animals. It comes with a 4G modem. It might meet your needs?

See full post
discussion

Blurring of humans in camera trap data

Hi All, I'm looking into tools that can be used to blur images containing humans in camera trap data. So far I've found people using standard photo editing software and come...

15 2

Out of curiosity, as we have some projects collecting videos, what would be the best solution for blurring video content?

The magic of the WILDLABS community! @Chelsea_Smith @dmorris @pvlun  thanks for showing everyone how it's done :) Chelsea, would be great to hear how this works out for you whenever you can share an update!

See full post