Global Feed

There's always something new happening on WILDLABS. Keep up with the latest from across the community through the Global view, or toggle to My Feed to see curated content from groups you've joined. 

Header image: Laura Kloepper, Ph.D.

Link

SpeciesNet: first impressions from 37k images in Namibia

I put together some initial impressions from deploying the new SpeciesNet classifier on 37,000 images from a Namibian camera trap dataset, in the hope that sharing them might be helpful to others.
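For anyone who wants to try something similar, here is a minimal sketch of a batch run plus a quick tally of the top-1 labels. The run_model entry point comes from the google/cameratrapai README; the exact flag names and the structure of predictions.json are assumptions to verify against the current repo.

# Hedged sketch: run SpeciesNet over an image folder, then tally labels.
# The CLI below is from the google/cameratrapai README (flags may change):
#
#   python -m speciesnet.scripts.run_model \
#       --folders /data/namibia_traps \
#       --predictions_json predictions.json

import json
from collections import Counter

with open("predictions.json") as f:
    results = json.load(f)

# "predictions" / "prediction" are assumed field names; inspect your own
# output file to confirm the schema.
counts = Counter(p["prediction"] for p in results["predictions"])
for label, n in counts.most_common(10):
    print(f"{n:6d}  {label}")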

discussion

What software to use?

I am looking for reliable camera trap software to process images efficiently. Key considerations:
✅ Handles large datasets smoothly
✅ Allows for multiple people...

discussion

Drone & AI use for uncovering illegal logging camps

Hi all, I am working with WCS in Cambodia, and am curious whether anyone has used combinations of drone / AI / radar tech to uncover illegal logging camps in forest...


Hi Adam! 

Sounds like you have your work cut out for you. I have not used radar or AI systems for this sort of detection, but there are methods that use change detection models, built from drone photogrammetry and GIS software, to visualise changes in forests where logging may be occurring between different dates. I have found these methods very effective when monitoring deforestation: not only can you quickly visualise where deforestation has happened, you can also quantify the damage at the same time. Let me know if you would like to learn more.

 

Kind regards

Sean Hill
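
To illustrate the change-detection approach Sean describes, here is a minimal sketch that differences two co-registered drone-derived canopy height models from different survey dates and quantifies the cleared area. The file names and the 5 m threshold are illustrative; it assumes both rasters share the same grid and a projected CRS in metres, and it skips nodata handling for brevity.

import rasterio

# Canopy height models (or NDVI rasters) from two photogrammetry surveys
with rasterio.open("chm_2023.tif") as before, rasterio.open("chm_2024.tif") as after:
    loss = before.read(1) - after.read(1)                      # positive = canopy height lost
    pixel_area = abs(before.transform.a * before.transform.e)  # m^2 per pixel

cleared = loss > 5.0                                           # flag drops of more than 5 m
print(f"Suspected clearing: {cleared.sum() * pixel_area / 1e4:.2f} ha")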

discussion

WILDLABS AWARDS 2024 - Enhancing Pollinator Conservation through Deep Neural Network Development

Greetings everyone! We are so excited to share details of our WILDLABS AWARDS project "Enhancing Pollinator Conservation through Deep Neural Network Development" and...


Great work! Do you think the nighttime models also worked better due to a lack of interference from shadows being counted? Or maybe due to issues around a non-standard background?

If it helps, I believe the creators of InsectDetect, which is open source, did a lot of work training their model to differentiate insect shadows from insects. Also, after testing their smart trap on flowers, they went with a standardised, non-lethal attractive background.

InsectDetect: Build your own insect-detecting camera trap!

A .gif of a smart insect trap interface, using object detection to identify and track insects in real time. The background is a stylized image with large flower-like shapes simulating a floral or crop environment.
careers

WILDLABS-WWF Graduate Intern

WILDLABS Team and 1 more
Come work with WILDLABS at WWF-US! We are seeking a graduate intern to support our State of Conservation Technology research. The role will be focused on delivering a 5-year trends report capturing the evolution of the...

discussion

¡Bienvenidos a la comunidad Latinoamericana! // Welcome to the Latin American community!

Hello everyone! We are very excited to start this space where we can connect, share ideas and knowledge, and collaborate on projects that help protect...


Hi, I like the idea! I prefer Spanish, above all for this whole computing topic, which is already difficult enough. Here in Argentina I am working with amphibians and automated digital recorders, and I am now starting to explore automatic or semi-automatic detection.

Link

Overview of terrestrial audio recorders with associated metadata info

This paper includes, in its supplementary materials, a helpful table comparing several acoustic recorders used in terrestrial environments, along with associated audio and metadata information.

event

Geospatial Group Café (April)

Are you ready to dive into the world of geospatial technology for conservation? Join our vibrant community for an exciting virtual event where we explore cutting-edge geospatial datasets, tools and innovative projects....

Hi folks, don't forget to register for the Geospatial Café!
Best,
Vance

Agenda:
Vance Russell (3point.xyz)
Leanne Tough (Research Officer, Wetland Landscapes and Processes - WWT)
Dr Kuria Thiongo (Director, Centre for Spatial...
discussion

Conservation Applications for Google Solar API

We are developing a tool to research possible conservation applications of the recent Google Solar API. Solar API is a Google service to provide high...


Do you know if there are plans to expand this beyond the global North? Many folks in this community live and/or work outside the areas this dataset covers, and thus would not be able to make use of it.

Yes, I know about this big limitation. As far as I know, they are working to increase the coverage available for this solution, and more regions are already available for trusted developers. More information about the coverage can be found in Google's Solar API documentation.
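For anyone curious what a request looks like, here is a minimal sketch of querying the buildingInsights endpoint described in the Solar API docs. The coordinates and API key are placeholders, and the response field at the end is an assumption to check against the official reference.

import requests

resp = requests.get(
    "https://solar.googleapis.com/v1/buildingInsights:findClosest",
    params={
        "location.latitude": 37.4450,    # placeholder coordinates
        "location.longitude": -122.1390,
        "key": "YOUR_API_KEY",           # placeholder key
    },
    timeout=30,
)
resp.raise_for_status()
insights = resp.json()

# solarPotential holds roof segment stats and panel estimates (field names
# per the public docs; verify against the current API reference).
print(insights["solarPotential"]["maxArrayPanelsCount"])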
discussion

DeepFaune v.1.3 is out!

Hi, just wanted to let anyone interested know that v1.3 of DeepFaune is out! DeepFaune is software that runs on any standard computer and allows you to identify species in...

discussion

scikit-maad community

 We, @jsulloa and @Sylvain_H, are the main contributors of scikit-maad, an open source Python package dedicated to the quantitative analysis of environmental audio recordings...


Hello!

I could be wrong, but from looking at the source code of the ACI calculation in scikit-maad, it appears that it automatically sets j to the length of the signal. If you want to break it down to e.g. 5 windows, you can break the spectrogram into 5 chunks along the time axis and then sum the ACI in each window. This gives a result more similar to those you got using the other methods. What did you set as nbwindows for seewave?

 

import numpy as np
import maad.sound
import maad.features

# Load the recording and compute an amplitude spectrogram
s, fs = maad.sound.load('test_data_ACI/LINE_2003-10-30_20_00_34.wav')
Sxx, tn, fn, ext = maad.sound.spectrogram(s, fs, mode='amplitude')

full_time = Sxx.shape[1]  # number of time frames in the spectrogram
j_count = 5               # number of chunks we want to break it into
window_size = np.floor(full_time / j_count)  # time frames per chunk

ACI_tot = 0
for i in range(j_count):
    # ACI of one chunk; the third return value is the summed ACI
    chunk = Sxx[:, int(i * window_size):int(i * window_size + window_size)]
    _, _, ACI = maad.features.acoustic_complexity_index(chunk)
    ACI_tot += int(ACI)

This gives ACI_tot = 1516.

Hi all, 

I have recently been utilising the ROI module of scikit-maad to locate non-biophonic sounds across low-sample-rate Arctic hydrophone datasets, and have a query about how ROI centroids are calculated...

Looking at the source code for the function "centroid_features" in .\maad\features\shape.py, I can see that the function calls "center_of_mass" from .\scipy\ndimage\_measurements.py. This to me suggests that the centroid should be placed where energy is focussed, i.e. the middle of the acoustic signature captured by the masking stage of ROI computations.

I'm a bit confused as to why the centroids I have appear to be more or less placed in the centre of the computed ROIs, regardless of the energy distribution within the ROI. The sounds I'm capturing have energy focussed towards lower frequencies of the ROI bands, so I would have expected the centroid to shift downwards as well.

Has anyone modified how ROI centroids are defined in their work? I'd be quite interested to set up centroids to signify where the peak energy level lies in the ROI, but I'm not quite sure how to do this cleanly.

Any advice would be greatly appreciated, thanks!

Kind regards,

Jonathan
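
Not a definitive answer, but one way to place a centroid at the peak-energy cell instead: slice each ROI's bounding box out of the spectrogram and take the argmax. This sketch assumes ROIs are expressed as row/column bounds, like the min_y/min_x/max_y/max_x columns in scikit-maad's rois dataframes.

import numpy as np

def peak_energy_centroid(Sxx, min_y, max_y, min_x, max_x):
    """Return the (row, col) of the maximum-energy cell inside one ROI."""
    patch = Sxx[min_y:max_y + 1, min_x:max_x + 1]     # ROI bounding box
    iy, ix = np.unravel_index(np.argmax(patch), patch.shape)
    return min_y + iy, min_x + ix                     # full-spectrogram indices

Mapping those indices back to time and frequency is then just a lookup into the tn and fn vectors returned by maad.sound.spectrogram.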

 

New stable release: v1.5.1

We are pleased to announce the latest release, with several important enhancements, fixes, and documentation improvements to ensure compatibility with the latest versions of SciPy and scikit-image, as well as with Xeno-Canto.

In this new version, two new alpha indices are implemented, aROI and nROI, the latter being a good proxy for the average species richness per 1-minute soundscape.
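
For a feel of the idea behind nROI, here is a rough sketch that counts regions of interest in a one-minute spectrogram using maad's existing create_mask/select_rois workflow. The file name and thresholds are illustrative, and this is only an approximation of the concept; the actual aROI/nROI implementations ship with v1.5.1.

from maad import sound, rois, util

s, fs = sound.load('one_minute_soundscape.wav')  # hypothetical file
Sxx, tn, fn, ext = sound.spectrogram(s, fs)
Sxx_db = util.power2dB(Sxx) + 96                 # shift to positive dB values

# Binarise the spectrogram and label connected regions of interest
im_mask = rois.create_mask(Sxx_db, mode_bin='relative', bin_std=6, bin_per=0.5)
im_rois, df_rois = rois.select_rois(im_mask, min_roi=25)

print(f"ROI count for this minute (nROI-like proxy): {len(df_rois)}")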



 
