
Camera Traps / Feed

Looking for a place to discuss camera trap troubleshooting, compare models, collaborate with members working with other technologies like machine learning and bioacoustics, or share and exchange data from your camera trap research? Get involved in our Camera Traps group! All are welcome whether you are new to camera trapping, have expertise from the field to share, or are curious about how your skill sets can help those working with camera traps. 

discussion

Canopy access or tree climbing resources for arboreal research

Hi everyone. I am a professional tree climbing/canopy access trainer and work in conservation tech. I've climbed or trained climbers in several countries to deploy arboreal...


Hi all! Folks may be interested in the Cornell Tree Climbing program, part of Cornell Outdoor Education. Not only does Cornell offer training and a bunch of online resources, but they have also facilitated groups of scientists collecting canopy samples and data.

Hi Dominique,

Thanks for your responses and congratulations on getting trained! 

I can see that speaking directly with a climbing professional would be the most beneficial approach, because the climbing methods and equipment you may need depend very much on individual project goals and budgets. Did you end up speaking with your trainers about your field research goals and which climbing methods may be best for you?

Hi Mark, thanks for responding. I think you've identified one of the most difficult parts of research climbing: maintaining your climbing skills and knowledge between field sessions. 

My husband is an experienced arborist and practices his skills almost daily. I am not an arborist, so I schedule climbing time to keep my abilities fresh and my husband is there to assist. But I know it's difficult for individual researchers to practice on their own and they should only be climbing alone if experienced and not in a remote area.

However, it's possible to train researchers to safely climb in the field for multiple field sessions. My husband and I trained a group of climbers in Cameroon in January 2024. The goal was to train four climbers who would go into the remote rainforest for several weeks and set up camera traps. They would deploy and retrieve arboreal cameras at different survey locations over two years. We needed to train the teams to operate safely and independently (without an instructor present) in very remote areas.

To train them sufficiently, my husband and I spent one month in Cameroon with the field team. We did a few days of basic training at a location near the city and then went with them on their initial camera deployment, where we continued field training for 2.5-3 weeks. Before going to Cameroon, we had several discussions to determine which climbing method and equipment would best meet their data collection goals and be appropriate for their field site and conditions. We taught them basic rescue scenarios. We also set a climbing practice schedule for the team to maintain their climbing and rescue skills. We strongly emphasized to their management that the field team needed access to the climbing gear and needed to follow the practice schedule. Since the training, the team has successfully finished two more camera trap surveys and is planning their third.

This was a lot of up-front training and cost. However, these climbers are now operating on their own and can continue to do so for the foreseeable future. I think a big reason is that they received extensive training tailored to their needs. General tree-climbing courses are great for learning the basics, but they'll never be a substitute for in-field, tailored training.

discussion

What the mice get up to at night

Last night I set up the following rig to see if there were any mice in the kitchen at night. It consists of my StalkedByTheState smart camera trap system and a 640x512 resolution...


And I see now they can walk vertically up walls like Spider-Man.


Walking up walls

discussion

Joint ecoacoustic & camera-trapping project in Indonesia!

WildMon is super excited to be partnering with Planet Indonesia on a wildlife monitoring project combining bioacoustics and camera trapping in Kalimantan, Indonesia! In combining...


Hello Carly, 

Congratulations on this project!

I am currently studying for a second MA in Environmental Management. I would like to do my MA thesis project on these technologies (bioacoustics and camera traps). I wonder if you would be interested in collaborating with me on that?

I already have professional experience, so I think my contribution could be valuable to you.

Thank you in advance for answering, 

Simon

discussion

Pytorch Wildlife v5 detection in thermal

Hope everyone had a nice Christmas. I'd like to share my experiment with Pytorch Wildlife V5 and a high res (640x512) thermal core, connected to a Raspberry Pi 5. The thermal...


I added plain old motion detection because MegaDetector v5 was not working well with the smaller rat images in thermal.
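For anyone curious what "plain old motion detection" can look like, here is a minimal frame-differencing sketch with OpenCV. This is my reconstruction, not the StalkedByTheState code; the capture device index, threshold, and blob-size cutoff are assumptions to tune:

import cv2

cap = cv2.VideoCapture(0)  # assumes the thermal core shows up as a V4L2 device
ret, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, gray)          # pixel-wise change since the last frame
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)  # merge neighbouring blobs
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if any(cv2.contourArea(c) > 50 for c in contours):  # ignore sensor noise; 50 px is a guess for rat-sized blobs
        print("motion detected")
    prev_gray = gray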

This works really well:
 

Rat in thermal
 

Also, I can see really easily where the rat comes into the shed, see this series:

Before

Just visible

Clearly visible

So now I have a way to build up a thermal dataset for training :-)
 

discussion

Solar powered Raspberry Pi 5

Hi, I would like to run a Raspberry Pi 5 off a battery in a mode where it will boot up and run for just 1.5 minutes every hour (to take some snapshots). Does anyone have any...
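One software-side way to get that duty cycle is to arm the Pi 5's onboard RTC wake alarm, take the snapshots, and halt. A minimal sketch, assuming Raspberry Pi OS with the RTC exposed at /sys/class/rtc/rtc0 and wake-on-alarm enabled in the bootloader EEPROM (POWER_OFF_ON_HALT=1, WAKE_ON_GPIO=0); the rpicam-still call and paths are illustrative, and the script would need to run as root at boot:

import subprocess
import time

WAKE_INTERVAL_S = 3600  # wake once an hour

# take a few snapshots during the ~90 seconds of uptime
for i in range(3):
    subprocess.run(["rpicam-still", "-o", f"/home/pi/snap_{int(time.time())}_{i}.jpg"])
    time.sleep(5)

# arm the RTC alarm for the next wake-up, then power down
now = int(time.time())
with open("/sys/class/rtc/rtc0/wakealarm", "w") as f:
    f.write("0")  # clear any previously set alarm
with open("/sys/class/rtc/rtc0/wakealarm", "w") as f:
    f.write(str(now + WAKE_INTERVAL_S))
subprocess.run(["shutdown", "-h", "now"])

Even halted, the Pi 5 still draws a few milliamps, which is where the battery and BMS questions below come in.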


LiFePO4 battery BMS 1S 3.2V 12A, charge/short-circuit protection PCM with self-recovery, for electric motors/LED lights - AliExpress

Actually, you can use any BMS as long as it supports 1S, and that will be sufficient. However, I recommend connecting the batteries in parallel to increase the capacity and using just one BMS. Everything should run through the power output of the BMS.
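For example (illustrative numbers, not from the posts above): three 3.2 V / 6 Ah LiFePO4 cells in parallel form a 3.2 V / 18 Ah pack that is still electrically 1S, so a single 1S BMS rated above your peak load current can protect the whole pack.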

That said, don’t rely too much on BMS units from China, as they might not perform well in the long run. I suggest getting a high-quality one or keeping a spare on hand.

 

Don't they all come from China?

The one in the picture I can find on aliexpress for 2.53 euros.

I'm not sure how you would get one that doesn't come from China.

In any case, I know what to search for now, that's very helpful. Thank you.

What is the 1S thing you mention above?

If you have any links as to what you consider a high quality one that would be great!

Actually, you can source the product from anywhere, but I’m not very confident in the quality of products from China. That doesn’t mean products from China are bad—it might just be my bad luck. However, trust me, they can work just as well; you just need to ensure proper protection.

For 1S, it refers to the number of series-connected cells the BMS supports. A 1S configuration for LiFePO4 means a single cell with a nominal voltage of 3.2V (about 3.65V fully charged). Of course, you must not confuse LiFePO4 and Li-Ion BMSs, as the two types are not interchangeable and cannot be used together.

***However, you can use a power bank board designed for Li-Ion to draw power from the battery for use. But you must not charge the LiFePO4 battery directly without passing through a BMS; otherwise, your battery could be damaged.***

BMS 

Battery 

powerbank board

discussion

Christmas wish list

If anyone has something like this on their Christmas wish list let me know. It's my new camera product, a Raspberry Pi 5 running in secure boot mode with an encrypted nvme...


Yes, I have these. I'm pretty sure that my company is the first company in the world to sell products running on Raspberry Pi 5 in secure boot mode as well :-)

I responded to your wish list. From your description a modified camera like a HIKvision turret might be able to do it for you.

Great, these security cameras might be interesting for monitoring crop development and maybe other bigger pests like boars or other herbivorous animals that could eventually go into the fields, but this is not what I'm trying to solve now. What I'm also interested in is on-the-go weed species monitoring, for example on a robot, to create a high-resolution map of weed infestation in a field (10-20 hectares). There, such a modified camera could eventually make sense. Thanks for the ideas!!

discussion

Mirror images - to annotate or not?

We are pushing hard to start training our custom model for the PolarBearWatchdog! soon.This includes lots of dataset curation and annotation work.A question that has come up is...


I made a few rotation experiments with MDv5b.

Here is the original image (1152x2048):

When saving this as a copy in Photoshop, the confidence on the mirror image changes slightly:

and when just cropping to a 1152x1152 square it changes quite a bit:

The mirror image confidence drops below my chosen threshold of 0.2 but the non-mirrored image now gets a confidence boost.

Something must be going on with overall scaling under the hood in MD as the targets here have the exact same number of pixels. 

I tried resizing to 640x640:

640x640

This bumped the mirror image confidence back over 0.2... but lowered the non-mirrored confidence a bit... huh!?

My original hypothesis was that the confidence could be somewhat swapped just by turning the image upside down (180 degree rotation):

Here is the 1152x1152 crop rotated 180 degrees:

The mirrored part now got a higher confidence, but it is interpreted as a sub-part of a larger organism. The non-mirrored polar bear had a drop in confidence.

So my hypothesis was somewhat confirmed...

This leads me to believe that MD is not trained on many upside-down animals...

 

Seems like we should include some rotations in our image augmentations as the real world can be seen a bit tilted - as this cropped corner view from our fisheye at the zoo shows.
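For example, with the albumentations library such flip/rotation augmentations could look like the sketch below (a sketch only, not our actual pipeline; the probabilities, limits, and dummy inputs are placeholders):

import albumentations as A
import numpy as np

# stand-ins for a real frame and its YOLO-format annotation
image = np.zeros((640, 640, 3), dtype=np.uint8)
bboxes = [(0.5, 0.5, 0.2, 0.3)]
class_labels = ["polar_bear"]

transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.Rotate(limit=15, p=0.5),          # up to +/-15 degrees of tilt
        A.RandomBrightnessContrast(p=0.3),
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)
augmented = transform(image=image, bboxes=bboxes, class_labels=class_labels)

A nice property of this approach is that the bounding boxes are rotated and flipped together with the pixels, so the annotations stay valid.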

discussion

Recommended LoRa-receiver-capable microcontrollers

If one wanted to build a microcontroller system for receiving LoRa packets, to go hand in hand with a LoRa point-to-point transmitter, does anyone have any they can recommend? Any for the...


@ioanF, you can do a little bit better than 10 mA. I have here an Adalogger RP2040 Feather with a DS3231 RTC wing and an I2S MEMS microphone. During "dormant" mode, running from the XOSC and waiting for DS3231 wakeup, I get 4.7 mA. This includes about 1 mA for the microphone and DS3231 together. OK, that is still 3 mA higher than the 0.7 mA the RP2040 documentation is said to claim. I guess there is some uncertainty with the AT35 USB tester I'm using. Putting the LDO enable pin (EN) to ground, the USB tester said 1 mA, which may be dominated by the offset of the tester, as the LiPo charger (the remaining component) should only consume 0.2 mA (max).

Edit (for the record): After 'disabling' all GPIO but the RTC wakeup pin, the system (RP2040 + DS3231 RTC + I2S MEMS microphone) consumes only 1.8 mA. I guess I'm now close to the claimed dormant power consumption!
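For scale (illustrative numbers, not from the posts above): at a 1.8 mA average draw, a 2000 mAh LiPo would last roughly 2000 / 1.8 ≈ 1100 hours, i.e. about 46 days, ignoring self-discharge and regulator losses.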

A good candidate for low-power hibernation and processing power is the Teensy 4.1 from PJRC, which is an ARM Cortex-M7. The standard clock is 600 MHz and there is provision for 16 MB PSRAM. It hibernates at 0.1 mA. What is more important than the specs: there is a very active community (the best, IMHO) at Search | Teensy Forum, with a direct connection to the developer (Paul Stoffregen). For a simple recorder, consumption is 50% higher than the RP2040 (Teensy running at 24 MHz and RP2040 running at 48 MHz, but the RP2040 is an M0+ and not an M7).

Thanks! The Teensys are nice for processing power; if choosing an external LoRa board, I'd say that's a good choice. I started with Teensys; there was a well-supported code base and they were certainly well priced.

My preference is for one with onboard LoRa. I've used one with an onboard Murata LoRa module before. It was good both for low-power operation and for its LoRa performance.

For a transmitter, the Grasshopper (if it's still being made) is quite good for small distances because it has an onboard ceramic antenna, which is good for about three houses away, although one transmission was also received >20 km away.

discussion

Individual Identification of Snow Leopard

Hello all, my name is Raza Muhammad. I am currently working on the conservation of the snow leopard in Pakistan, and I have a lot of camera trap data. Please guide me on how to use AI to...


Hi Raza,

I manage a lot of snow leopard camera trap data also (I work for the Snow Leopard Trust). I am not aware of any AI solutions for snow leopards that can do all the work for us, but there are 'human-in-the-loop' approaches, in which AI suggests possible matches and a human makes the actual decision. Among these are Hotspotter and the algorithm in WildMe.

You can make use of Hotspotter in the software TrapTagger. I have found this software to be very useful when identifying snow leopards. Anecdotally, I think it improves the accuracy of snow leopard identifications. But, like I said, you still have to manually make the decisions; the results from Hotspotter are just a helpful guide.

The other cutting-edge solutions mentioned here (e.g. MegaDescriptor, linked above) will require a massive dataset of labelled individuals, and considerable expertise in Python to develop your own solution. I had a quick look at the paper and they were getting around 60-70% accuracy for leopards, which is a much easier species than snow leopard. So I don't think this approach is useful, at least for now. Unless I've misunderstood something (others who deeply understand this, please chime in and correct me!).

Incidentally, I did try to gain access to WildMe / Whiskerbook last year but wasn't successful in getting an account. @holmbergius, can you help me out? That would be appreciated, thanks!

Best of luck Raza, let me know if I can help more,

Ollie

 

An example of Hotspotter doing quite a good job, even with blurry images, of successfully drawing attention to matching parts of the body. This is a screenshot from TrapTagger, which very helpfully runs Hotspotter across your images.

Note that I wouldn't necessarily recommend ID'ing snow leopards from such poor imagery, but this just demonstrates that Hotspotter can sometimes do quite well even on harder images.

Side-by-side comparison of two snow leopard camera trap images, showing coloured areas that the Hotspotter algorithm thinks are matching. The images are in black and white and the patterns are quite blurred due to the movement of the snow leopards.

 

 

Hi Raza,

As @ollie_wearn suggests, I think TrapTagger will be the easiest for you:

You just have to follow the tutorial there:

The person in charge of TrapTagger is also very responsive.

best,

discussion

GreenCrossingAI Project Update

Hello Wildlabs community! My name is Shawn Johnson and I am a research assistant for Dr. Karen Mager and Dr. Bernie Boscoe here at Southern Oregon University located in Ashland,...

discussion

MegaDetector V6 and Pytorch-Wildlife V1.1.0 !

Pytorch-Wildlife V1.1.0 - Hello everyone, we are happy to announce our release of Pytorch-Wildlife V1.1.0. In this release we have many new features including MegaDetectorV6, HerdNet...


Hello Patrick, thanks for asking! We are currently working on a bioacoustics module and will be releasing it some time early next year. Maybe we can have some of your models in our initial bioacoustics model zoo, or if you don't have a model yet but have annotated datasets, we can probably train some models together? Let me know what you think!

Thank you so much! We are also working on a bounding-box-based aerial animal detection model. Hopefully it will release sometime early next year as well. It would be great to see how the model runs on your aerial datasets! We will keep you posted!

Hi Zhongqi! We are finalizing our modelling work over the next couple of weeks and can make our work available for your team. Our objective is to create small (<500k parameters) quantized models that can run on low-power ARM processors. We have custom hardware that we built around them and will be deploying back in Patagonia in March 2025. Would be happy to chat further if interested!

We have an active submission to Nature Scientific Data with the annotated dataset. Once that gets approved (should be sometime soon), I can send you the figshare link to the repo. 

discussion

Instant Detect 2.0 and related cost

I am doing a research project on rhino poaching at Kruger National Park. I was impressed with the idea of Instant Detect 2.0. I do not know the cost involved with installing that...


Sam, any update on Instant Detect 2.0? Previously you mentioned that you hoped to go into volume production by mid-2024.

I would love to also see a comparison between Instant Detect 2.0 and Conservationxlabs' Sentinel products if anyone has done comparisons.

Are there any other similar solutions currently on the market - specifically with images-over-LoRa capability and a camera-to-satellite solution?

There are quite a few DIY or prototype solutions described online and in the literature - but it seems none of these have made it to market yet as generally available, fully usable products. We can only hope.

discussion

Inquiry About e-con Systems/arducam Cameras for Camera Trapping Projects

Hi all, I'm looking into using e-con Systems / Arducam cameras for my camera trapping research and wanted to get feedback from anyone who has experience with these models....


I think the big thing is power consumption. Commercial camera traps have a large power (current) dynamic range. That means they can often swing from ~0.1 mA to ~1000 mA of current within a few milliseconds. It's often difficult to replicate that in DIY systems which is why you don't see a lot of Raspberry Pi camera traps. The power consumption is often too high and the boot time is too long. 

One of the big challenges is powering down the system so that it's essentially in sleep mode and having it wake up in less than a second. That said, if you're mainly doing time lapse or don't have the strict speed requirements to wake up that quickly, it may make sense to roll your own camera trap.
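As a rough illustration of why the duty cycle matters (numbers assumed, not measured): a DIY board drawing 600 mA for 90 s each hour plus 3 mA while halted averages about 600 × 90/3600 + 3 ≈ 18 mA, whereas a commercial trap idling at ~0.1 mA between triggers averages close to that 0.1 mA, roughly two orders of magnitude longer standby battery life.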

Anyways, hope I'm not being too discouraging. It never hurts to give it a shot and please feed back your experiences to the forum. I'd love to hear reviews about Arducam and it's my first time hearing about e-con Systems. 

Akiba

discussion

AI Animal Identification Models

Hi everyone, I've recently joined WILDLABS and I'm getting to know the different groups. I hope I've selected the right ones for this discussion... I am interested in AI to identify...


I trained the model tonight. Much better performance! mAP has gone from 73.8% to 82.0%, and running my own images through it, anecdotally it is behaving better.

After data augmentation (horizontal flip, 30% crop, 10° rotation) my image count went from 1194 total images to 2864 images. I trained for 80 epochs.

inference results
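For reference, a training run with those augmentations might look roughly like this with the Ultralytics API (a sketch only; the yolov8n starting checkpoint and dataset YAML name are assumptions, not the poster's exact setup):

from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small pretrained checkpoint to fine-tune
model.train(
    data="moose_dataset.yaml",  # hypothetical dataset config
    epochs=80,
    imgsz=640,
    fliplr=0.5,    # horizontal flip probability
    degrees=10.0,  # random rotation up to +/-10 degrees
)
metrics = model.val()  # reports mAP50 / mAP50-95 on the validation split

Note that fine-tuning like this replaces the detection head, so the original COCO classes (person, bike, etc.) are lost unless you include them in your own label set.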

Very nice!

I thought it would retain all the classes of YOLO (person, bike, etc) but it doesn't. This makes a lot of sense as to why it is seeing everything as a moose right now!

I had the same idea. ChatGPT says there should be possibilities though... 

You may want to explore more "aggressive" augmentations like the ones possible with 

to boost your sample size. 

Or you could expand the sample size by combining with some of the annotated datasets available at LILA BC like:

Cheers,

Lars

As others have said, pretty much all image models at least start with general-subject datasets ("car," "bird," "person", etc.) and have to be refined to work with more precision ("deer," "antelope"). A necessity for such refinement is a decent amount of labeled data ("a decent amount" has become smaller every year, but still has to cover the range of angles, lighting, and obstructions, etc.). The particular architecture (yolo, imagenet, etc.) has some effect on accuracy, amount of training data, etc., but is not crucial to choose early; you'll develop a pipeline that allows you to retrain with different parameters and (if you are careful) different architectures.

You can browse many available datasets and models at huggingface.co 

You mention edge, so it's worth mentioning that different architectures have different memory requirements and then, within the architecture, there will generally be a series of models ranging from lower to higher memory requirements. You might want to use a larger model during initial development (since you will see success faster) but don't do it too long or you might misestimate performance on an edge-appropriate model.
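For a rough sense of the memory side (illustrative arithmetic): weights dominate, at about one byte per parameter for an int8-quantized model and four bytes for float32, so a 500k-parameter model needs roughly 0.5 MB quantized or 2 MB in float32, before activations and framework overhead.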

In terms of edge devices, there is a very real capacity-to-cost trade-off. Arduinos are very inexpensive, but are very underpowered even relative to, e.g., Raspberry Pis. The next step up is small dedicated coprocessors such as the Coral line (). Then you have the Jetson line from NVIDIA, which is extremely capable but priced more towards manufacturing/industrial use.

discussion

Discussing an Open Source Camera Trap Project

Hi everyone.  This conversation took place in the Sensors thread and I'm moving it over here since it's more relevant in the camera trap thread.  [Alasdair...

discussion

Southwest Florida - Trail Cam

Hi everyone. My first post on this site - not sure if this is the proper group. I've placed a few trail cams in Florida. After 3 months and several birds, I got this one....

discussion

Automatic extraction of temperature/moon phase from camera trap video

Hey everyone, I'm currently trying to automate the annotation process for some camera trap videos by extracting metadata from the files (mp4 format). I've been tasked to try...


I just noticed that TrapTagger has integrated AI reading of timestamps for videos. I haven't had a chance to try it out yet, but it sounds promising. 

Small update. I uploaded >1000 videos from a spypoint flex camera and TrapTagger worked really well. Another program that I'm currently interested in is Timelapse which uses the file creation date/time. I haven't yet tried it, but it looks promising as well. 

Hi Lucy,



I now realise this is an old thread and you most likely found a solution long ago, but this might be of interest to others.

As mentioned previously, it is definitely much better to derive moon phase from the date and location. While moon phase in general is not a good proxy for illumination, the moon phase symbol on the video is even worse, as it generalises the moon cycle into a few discrete categories. For calculating moon phase you can use the suncalc package in R, but if you want a deeper look and a more detailed proxy for moonlight intensity, I wrote a paper on it:

https://link.springer.com/article/10.1007/s00265-022-03287-2

with an accompanying R package called moonlit.
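If you work in Python rather than R, basic moon phase (though not the moonlit illuminance proxy) can be computed with the astral package; a minimal sketch:

import datetime
from astral import moon

# scale runs 0 to 27.99: 0 = new moon, ~14 = full moon
phase = moon.phase(datetime.date(2024, 1, 25))
print(phase)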

When it comes to temperature, I also agree that what is recorded in the camera is often very inconsistent, so unless you have multiple cameras to average your measurements you are probably better off using something like the NCEP/NCAR Reanalysis (again, there is an R package for that). But if you insist on extracting temperature from the picture, I tried it using tesseract and wrote a description here:
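In outline, that tesseract approach looks something like the sketch below (my paraphrase, not the linked write-up; the crop coordinates and preprocessing are assumptions to adjust per camera model):

import cv2
import pytesseract

frame = cv2.imread("frame_0001.jpg")
strip = frame[-40:, :300]  # assumed bottom-left overlay region with the temperature stamp
gray = cv2.cvtColor(strip, cv2.COLOR_BGR2GRAY)
# Otsu threshold gives clean high-contrast glyphs for the OCR engine
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
text = pytesseract.image_to_string(bw, config="--psm 7")  # psm 7 = treat as a single text line
print(text)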

 

Good luck!

discussion

AI triggered camera trap - monkeys

I'd like to share the first monkeys captured by AI from far away in Lajuma research center. In co-operation with Nico Lubcker, the curator at Lajuma Research center South Africa....


This is chatgpt's answer:

"Here’s a rough idea of the number of parameters in different types of models commonly used in Edge Impulse:

  1. Neural Networks for Classification (e.g., MLP, CNN)
    • Small, shallow models: These can contain as few as 10,000 to 50,000 parameters when dealing with simple tasks like basic image recognition or sensor data classification.
    • Medium complexity models: For more advanced tasks, such as keyword spotting or more detailed image recognition, models may contain 50,000 to 200,000 parameters.
    • Larger models (Edge Optimized CNNs): For tasks like more complex object detection, models can go up to 200,000 to 500,000 parameters while still being optimized for edge devices.
  2. Recurrent Neural Networks (RNN, LSTM)
    • These are often used for time-series data like accelerometer or audio signals. A typical RNN model for edge deployment might have 10,000 to 100,000 parameters, depending on the length and complexity of the sequences being analyzed.
  3. Transfer Learning Models (e.g., MobileNet, EfficientNet variants)
    • Transfer learning models like MobileNet, when adapted for edge deployment, are typically pruned and quantized to reduce their size. These models may contain anywhere from 1 million to 3 million parameters in their full form, but when trimmed for edge deployment, they could be reduced to 100,000 to 1 million parameters.
  4. Classical Machine Learning Models (e.g., Decision Trees, KNN)
    • These models usually don’t have parameters in the same way as deep learning models, but they are lightweight and typically optimized for embedded use. They might only use a few kilobytes of memory or less, depending on the dataset size and number of features.

Overall, the model sizes in Edge Impulse are kept small to run efficiently on microcontrollers with limited memory (usually in the range of 256 KB to 1 MB of RAM and a few megabytes of flash). The number of parameters is carefully balanced to ensure good accuracy while maintaining small memory footprints for edge deployment."

Disclaimer: I can't guarantee that all of the above is not a hallucination.

So, not even 1 million parameters. The yolov3 tiny model back in the day was 6 million parameters. It's my understanding that the idea behind Edge Impulse is that models are retrained at the edge itself. Training requires a lot more memory than inference, so by its nature the models would be further constrained by this.

It's important to be aware of what your model is (how advanced it is), what size it is, and how many parameters it contains when you consider your use case, to manage your expectations. Later I will post a few of the terrible false positives that even the very best and largest models have made, to give you an idea of just what AI is actually learning and, by extension, not learning. So if the best of the best can make terrible mistakes, what will small early models do?

In general though, a large advanced model can do much better at inference with a relatively small number of annotated training images than a really small model can, even one trained on potentially thousands more images.

For example, I decided way back in 2019 that yolov3 tiny model was absolutely useless for security because of the boy who cried wolf effect. 
 

The biggest use case I see for very small models would be as a (hopefully) better replacement for a PIR sensor. I say hopefully, because both PIR sensors and very small models can have degenerate environmental conditions that cause large numbers of false positives. At a certain threshold, any perceived gain in power usage may disappear.

I welcome Edge Impulse to correct anything above that I may have said that was untrue, as I personally have never used Edge Impulse, but I was involved in a project whereby yolov6 was pitted against a system using Edge Impulse.

Very impressive Kim! It actually took me a minute to find them in the first picture! Are they baboons (if they aren't, then this is definitely proof that I shouldn't create any training datasets!)? Can't wait for the thermal tests!!

Cheers,

Rob

Yes, baboons. But this area has a few different primate species. Over the weekend three species were present.

Yes, the thermal ones will be spectacular. I'm wondering what other species will jump into the picture when we start with that. The 640x512 thermal camera I'm testing is really amazing. Now I'm on to testing a 1280x1024 resolution thermal camera. That's going to be super interesting! As soon as I have some nice images I'll post them.

discussion

Recycled & DIY Remote Monitoring Buoy

Hello everybody, My name is Brett Smith, and I wanted to share an open source remote monitoring buoy we have been working on in Seychelles as part of our company named "...


Hi Brett, it's great to hear from you! I love your remote monitoring buoy project, especially the recycling of DFAD buoys and the design of custom-built ones full of interesting features. The expandability of the navigation buoys, enabling them to be configured with hydrophones, cameras, or water quality monitors, is a great feature. The technical work, such as recycling the DFADs' circuits, making your own hydrophone, and designing a power-efficient system, is terrific. Moreover, it is encouraging to see the deployment of your early system, particularly the live video and water quality data you have collected. You will definitely make a positive impact on the WILDLABS community with your knowledge of electrical engineering and software development. I look forward to hearing more about your discoveries and any news you share. Keep doing what you excel at!

Thank you for sharing! 

Hi Brett,

 

I am interested in developing really low-cost, multi-channel underwater acoustic recorders. Can you tell me a bit more about the board and other components you were using for yours? You can reach me at chris@wilcoanalytics.org.

 

Thanks,

 

Chris

discussion

Deepfaune v1.2!

We have just released Deepfaune v1.2!! Deepfaune is freely available software that automatically classifies species in camera-trap pictures or videos collected in...


Edit: SOLVED Thanks!

Thank you so much for this awesome work! I was trying to load the v2 model the same way as in classifTools.py:

import timm
import torch

# class_names and device are defined as in classifTools.py
model = timm.create_model("vit_large_patch14_dinov2.lvd142m", pretrained=False, num_classes=len(class_names))
ckpt = torch.load("deepfaune-vit_large_patch14_dinov2.lvd142m.v2.pt", map_location=device)
state_dict = ckpt['state_dict']
new_state_dict = {k.replace('base_model.', ''): v for k, v in state_dict.items()}
model.load_state_dict(new_state_dict)

but it fails with this error:
Error(s) in loading state_dict for VisionTransformer:
       size mismatch for head.weight: copying a param with shape torch.Size([30, 1024]) from checkpoint, the shape in current model is torch.Size([26, 1024]).
       size mismatch for head.bias: copying a param with shape torch.Size([30]) from checkpoint, the shape in current model is torch.Size([26]).
 

Are you using a different backbone for v2? I tried BACKBONE = "vit_large_patch14_dinov2.lvd142m.v2" but that also doesn't work.

For the record now that this is here:

This error typically occurs when the wrong number of classes is given to timm.create_model.
You should try specifying num_classes=30 manually (for v1.2; the number can change in future versions as we add new species).
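A minimal illustration of the fix:

model = timm.create_model("vit_large_patch14_dinov2.lvd142m", pretrained=False, num_classes=30)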

Also, for issues please do reach out (as Jennifer did) by email; we will be much more responsive. More general questions/discussions can be asked here, and I will reply asap.

discussion

Your thoughts on this camera

Dear camera traps community, I hope this is the right place to share, and I believe this topic will be of interest to you. I'd like to introduce you to Tikee (edit...
