discussion / Camera Traps  / 4 October 2024

Deepfaune v1.2!

🥳  We have just released Deepfaune v1.2!! Deepfaune is freely available software that automatically classifies species in camera-trap pictures or videos collected in Europe. It runs on a standard computer - no need to upload your pictures to a cloud service! On Windows, it's a simple .exe file to install, like any other software. Linux/Mac users need to install a few dependencies (see the website). You can download the software here: https://deepfaune.cnrs.fr

 

This v1.2 brings the following improvements:

  • Species added: beaver, fallow deer, otter and raccoon - so now over 30 taxa can be classified.
  • Improved classification performance.
  • Improved interface: play videos and picture sequences, adjust contrast for visual inspection, get media info and more. It now looks like this:

Image

It also now exports the Top-1 predicted class even for media ultimately classified as 'undefined' because their confidence score falls below the threshold. This lets you work with fully automated (AI-only) results with no human intervention, in case that works for you.
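For illustration, here is a minimal Python sketch of how such a full-AI export could be post-processed: every record keeps its Top-1 label, and media are simply split by a confidence threshold. The field layout `(filename, top1_label, score)` is an assumption for this sketch, not the actual Deepfaune export schema.

```python
def split_by_confidence(rows, threshold=0.8):
    """Split (filename, top1_label, score) records into confident vs.
    low-confidence ('undefined') groups, keeping the Top-1 label for both.
    NOTE: these field names/positions are hypothetical, not Deepfaune's."""
    confident = [r for r in rows if r[2] >= threshold]
    undefined = [r for r in rows if r[2] < threshold]
    return confident, undefined

rows = [("img1.jpg", "fox", 0.95), ("img2.jpg", "otter", 0.40)]
confident, undefined = split_by_confidence(rows)
# img2.jpg lands in 'undefined' but still carries its Top-1 label ('otter')
```

This mirrors the behaviour described above: low-confidence media are flagged rather than stripped of their prediction.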

We need your feedback! (Contact details are on the website.) Let us know if you use it and what works or doesn't work for you - we will listen, we are making it for you! And in case you have pictures/videos to share with us for training, please reach out - we are always on the lookout!

Jennifer Zhuge
@jzhuge
Wildlife Protection Solutions (WPS)

Edit: SOLVED Thanks!

Thank you so much for this awesome work! I was trying to load the v2 model the same way as in classifTools.py:
import timm
import torch

model = timm.create_model("vit_large_patch14_dinov2.lvd142m", pretrained=False, num_classes=len(class_names))
ckpt = torch.load("deepfaune-vit_large_patch14_dinov2.lvd142m.v2.pt", map_location=device)
state_dict = ckpt['state_dict']
new_state_dict = {k.replace('base_model.', ''): v for k, v in state_dict.items()}
model.load_state_dict(new_state_dict)

but it fails with this error:
Error(s) in loading state_dict for VisionTransformer:
       size mismatch for head.weight: copying a param with shape torch.Size([30, 1024]) from checkpoint, the shape in current model is torch.Size([26, 1024]).
       size mismatch for head.bias: copying a param with shape torch.Size([30]) from checkpoint, the shape in current model is torch.Size([26]).
 

Are you using a different backbone for v2? I tried BACKBONE = "vit_large_patch14_dinov2.lvd142m.v2" but that also doesn't work.
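For anyone hitting the same error: the size mismatch suggests the v2 checkpoint has a 30-way classifier head while the model was built with the 26-entry v1 class list. A hedged workaround (a sketch, not official Deepfaune code) is to size the head from the checkpoint itself rather than from `class_names`:

```python
import torch

def head_num_classes(state_dict):
    """The classifier weight has shape (num_classes, embed_dim), so its
    first dimension tells us how many taxa the checkpoint predicts."""
    return state_dict["head.weight"].shape[0]

# Usage against the real checkpoint (same path and key-renaming as in the
# snippet above):
#   ckpt = torch.load("deepfaune-vit_large_patch14_dinov2.lvd142m.v2.pt",
#                     map_location="cpu")
#   state_dict = {k.replace("base_model.", ""): v
#                 for k, v in ckpt["state_dict"].items()}
#   model = timm.create_model("vit_large_patch14_dinov2.lvd142m",
#                             pretrained=False,
#                             num_classes=head_num_classes(state_dict))
#   model.load_state_dict(state_dict)
```

The class-name list used to label predictions would then also need the v1.2 taxa list (30 entries) to stay aligned with the head.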