AI for Conservation / Feed

Artificial intelligence is increasingly being used in the field to analyse information collected by wildlife conservationists, from camera trap and satellite images to audio recordings. AI can learn to identify which photos, out of thousands, contain rare species, or to pinpoint an animal call within hours of field recordings, hugely reducing the manual labour required to collect vital conservation data.

discussion

PhD Advice

Hey Everyone! First post here. I'll be graduating soon with a Master's degree in Fisheries, Wildlife, and Conservation Biology from NC State and am looking to continue my education...

2 0

Hi @ethanmarburger, I am probably not the best to give advice here given that it took me nearly two decades to actually finish my PhD, but I'd aim for something that you are really interested in so that you can keep up a high momentum. If you love your project you are more likely to cruise through the 'grind' periods.

In terms of networking, WILDLABS is definitely a great place to start! You may well find some connections just looking across the threads here, and reaching out to people that are doing work you are interested in. More broadly, and depending on where you are in the world, you might be able to volunteer or even get some work on projects in your area, which can be a good way to get a foot in the door to larger research projects. You could possibly look at helping out on some analysis of spatial/AI datasets etc., or reach out to not-for-profits and conservation charities and see what they need/you might be able to help with - but try and be as specific as possible so they know straight away what you are after.

Just a few quick ideas off the top of my head, and more than happy to discuss further. My best for your search!

Cheers,

Rob

Hi Ethan, It's indeed a competitive area. My advice for you (and anybody else seeking a PhD supervisor)...

  1. Do background research on each individual potential supervisor and always approach them demonstrating your alignment with their research focus.
  2. Show that you have read and understood one or two of their key (relevant) papers in your initial email to them.
  3. Have in mind something relevant to you AND to the potential supervisor, to propose as a topic in your initial email to them. But, remain open to their ideas - there's a good chance they have something that would align with your interests and that would (more) smoothly generate a successful PhD than you might have come up with ;-)
  4. Write clearly and succinctly.
  5. Demonstrate enthusiasm and highlight any relevant past experience and engagement in the relevant area (briefly).
  6. Attach a PDF CV.
  7. Apply for PhD positions in areas where you are qualified.
  8. Evidence that you have published a good paper, especially as first author, from your Masters thesis would be a bonus.

This is a time-consuming process. But you may end up spending 3+ years working with this supervisor, and vice versa. It's important for all concerned that you (and they) make a good, informed decision.

Good luck in your search!

Alan.

See full post
discussion

Documentary on Conservation

My name is Nick Rizzini, and I’m a London-based filmmaker currently working on a documentary focused on wildlife conservation. A section of the project aims to explore...

3 0

Hey Nick,

Sounds like an interesting film project. A small company I have always really liked is OpenForests.com. They do a great job of using remotely gathered imagery to monitor forest projects and their results, and a lot more, I guess.

Good luck!

Sven

Would definitely recommend reaching out to Nature Tech Collective - an industry coalition of nature-tech start-ups, organisations, and companies. It's an awesome community and you'll be spoiled for choice on entities to engage with!

Hi Nick,

At Wildlife.ai, from the other side of the world, we would be happy to chat with you. PM me if interested.

Victor

See full post
discussion

Software QA Topics

Hi everyone, What should we share or demo about Software Quality Assurance? Alex Saunders and I, the two Software QA people at Wildlife Protection Solutions (WPS), are going to...

1 3

Hi everyone,

What should we share or demo about Software Quality Assurance? 

Alex Saunders and I, the two Software QA people at Wildlife Protection Solutions (WPS), are going to do a community call to share knowledge on software testing and test automation in the third or fourth week of January.

We've listed a few QA topics that we could talk about in this 1-2 minute poll here and would like your feedback on topic priority.

Thanks for your feedback and we look forward to connecting! We'll also post when we have an exact date and time pinned down.

Sounds like a great initiative—looking forward to it! I’d love to hear more about your real-world test automation setup, especially any tools or frameworks you’ve found effective at WPS. It’d also be helpful to see how QA fits into your dev workflow and any challenges you’ve faced specific to conservation tech. I just filled out the poll and can’t wait to see what topics get chosen. Thanks, Alex and team, for organizing this!

See full post
discussion

Prospective NSF INTERN 

Hello all, My name is Frank Short and I am a PhD Candidate at Boston University in Biological Anthropology. I am currently doing fieldwork in Indonesia using machine-learning...

1 2

My name is Frank Short and I am a PhD Candidate at Boston University in Biological Anthropology. I am currently doing fieldwork in Indonesia using machine-learning powered passive acoustic monitoring focusing on wild Bornean orangutans (and other primates). I am reaching out because, as a student with a National Science Foundation Graduate Research Fellowship, I am eligible to take advantage of the NSF INTERN program, which supports students to engage in non-academic internships by covering a stipend and other expenses, with the only caveat being that the internship must be in person and not remote. I was wondering if any organizations in conservation technology would be interested in a full-time intern who would be coming in with their own funding?

In addition to experience with machine learning and acoustics through training a convolutional neural network for my research, I also have worked with GIS, remote sensing, and animal movement data through other projects. Further, I have experience in community outreach both in and outside of academic settings, as I previously worked for the Essex County Department of Parks and Recreation in New Jersey for 3 years where I created interpretive signs, exhibits, newsletters, brochures, and social media posts. Now while doing my fieldwork in Indonesia, I have led hands-on trainings in passive acoustic monitoring placement and analysis as well as given talks and presentations at local high schools and universities. 

I would love to be able to use this opportunity (while the funding still exists, which is uncertain moving forward due to the current political climate in the US) to exercise and develop my skills at a non-academic institution in the conservation technology sphere! If anyone has any suggestions or is part of an organization that would be interested in having me as an intern, please contact me here or via my email: fshort@bu.edu. Thank you!

Hi Frank, your work sounds incredibly valuable and well-aligned with current needs in conservation tech. With your strong background in machine learning, acoustics, GIS, and outreach, you’d be an asset to many organizations. I’d recommend looking into groups like Rainforest Connection, Wildlife Acoustics, or the Conservation Tech Directory (by WILDLABS)—they often work on acoustic monitoring and might be open to in-person internships, especially with funding already in place. Best of luck finding the right match—your initiative is impressive!

See full post
discussion

Counting Problems in Conservation

We're actively exploring new applications for CountGD, our object counting model designed to automatically count instances in images. So far, we've partnered with...

7 3
See full post
discussion

'Boring Fund' Workshop: AI for Biodiversity Monitoring in the Andes

Thanks to WILDLABS 'Boring Fund' support, we are hosting a workshop on AI for biodiversity monitoring in Medellín, Colombia, April 21st to 24th. This is a follow-up discussion to...

4 14

Hey @benweinstein , this is really great. I bet there are better ways to find bofedales (puna fens) currently than what existed back in 2010. I'll share this with the Audubon Americas team.  

Hi everyone, following up here with a summary of our workshop!

The AI for Biodiversity Monitoring workshop brought together twenty-five participants to explore uses of machine learning for ecological monitoring. Sponsored by the WILDLABS 'Boring Fund', we were able to support travel and lodging for a four-day workshop at the University of Antioquia in Medellín, Colombia. The goal was to bring together ecologists interested in AI tools and data scientists interested in working on AI applications from Colombia and Ecuador. Participants were selected based on potential impact on their community, their readiness to contribute to the topic, and a broad category of representation, which balanced geographic origin, business versus academic experience, and career progression.

Before the workshop began, I developed a website on GitHub that laid out the aims of the workshop and provided a public focal point for uploading information. I made a number of technical videos, covering subjects like VS Code + Copilot, both to inform participants and to create an atmosphere of early and easy communication. The WhatsApp group, the YouTube channel (link) of video introductions, and a steady drumbeat of short tutorial videos were key in establishing expectations for the workshop.

The workshop material was structured around data collection methods: Day 1) Introduction and Project Organization; Day 2) Camera Traps; Day 3) Bioacoustics; Day 4) Airborne Data. Each day I asked participants to install packages using conda, download code from GitHub, and be active in supporting each other in solving small technical problems. The large range of technical experience was key in developing peer support. I toyed with the idea of creating a JupyterHub or joint cloud working space, but I am glad that I resisted; it is important for participants to see how to solve package conflicts and the myriad other installation challenges on 25 different laptops.

We banked some early wins to help ease intimidation and create a good flow to the technical training. I started with GitHub and version control because it is broadly applicable, incredibly useful, and satisfying to learn. Using examples from my own work, I focused on GitHub as a way both to contribute to machine learning for biology and to receive help. Building from these command-line tools, we explored VS Code + Copilot for automated code completion, and had a lively discussion on how to balance the utility of these new features with transparency and comprehension.

Days two, three and four flew by, with a general theme of existing foundational models, such as BirdNET for bioacoustics, MegaDetector for camera traps, and DeepForest for airborne observation. A short presentation each morning was followed by a worked Python example making predictions on new data, annotation using Label Studio, and model development with PyTorch Lightning. There is a temptation to develop Jupyter notebooks that outline perfect code step by step, but I prefer to let participants work through errors, so I took a live-coding approach. All materials are in Spanish and updated on the website. I was proud to see the level of mutual support among participants, and tried to highlight these contributions to promote autonomy and peer teaching.
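To give a flavour of those worked examples: prediction with one of these foundational models takes only a few lines. A minimal sketch with DeepForest (the API varies a bit between versions, and the image path here is a placeholder):

from deepforest import main

# Load the released tree-crown detection model and run it on one image.
model = main.deepforest()
model.use_release()  # downloads the pre-trained weights on first use
boxes = model.predict_image(path="example_rgb.png")  # placeholder image path
print(boxes.head())  # columns: xmin, ymin, xmax, ymax, label, score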

Sprinkled amongst the technical sessions, I had each participant create a two-slide talk, and I would randomly select speakers from the group to break up sessions and help stir conversation. I took it as a good sign that I was often quietly pressured by participants to select their talk in our next random draw. While we had general technical goals and each day had one or two main lectures, I tried to be nimble, allowing space for suggestions. In response to feedback, we rerouted an afternoon to discuss biodiversity monitoring goals and data sources. Ironically, the biologists in the room later suggested that we needed to get back to code, and the data scientists said it was great. Weaving between technical and domain expertise requires an openness to change.

Boiling down my takeaways from this effort, I think there are three broad lessons for future workshops.

  • The group dynamic is everything. Provide multiple avenues for participants to communicate with each other. We benefited from a smaller group of dedicated participants compared to inviting a larger number.
  • Keep the objectives, number of packages, and size of sample datasets to a minimum.
  • Foster peer learning and community development. Give time for everyone to speak. Step in aggressively as the arbiter of the schedule in order to allow all participants a space to contribute.

I am grateful to everyone who contributed to this effort both before and during the event to make it a success. Particular thanks goes to Dr. Juan Parra for hosting us at the University of Antioquia, UF staff for booking travel, Dr. Ethan White for his support and mentorship, and Emily Jack-Scott for her feedback on developing course materials. Credit for the ideas behind this workshop goes to Dr. Boris Tinoco, Dr. Sara Beery for her efforts at CV4Ecology, and Dr. Juan Sebastian Ulloa. My co-instructors Dr. Jose Ruiz and Santiago Guzman were fantastic, and I'd like to thank ARM through the WILDLABS 'Boring Fund' for its generous support.

See full post
discussion

AI Edge Compute Based Wildlife Detection

Hi all! I've just come across this site and these forums and it's exactly what I've been looking for! I'm based in Melbourne, Australia and since finishing my PhD in ML I've been...

17 1

Sorry, I meant ONE hundred million parameters.

The Jetson Orin NX has ~25 TOPS of FP16 performance, and the large YOLOv6 processing 1280x1280 input requires about 673.4 GFLOPs per inference. You should therefore theoretically get ~37 fps. You're unlikely to hit this exact number, but you should get around that...
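For reference, the arithmetic behind that estimate, as a back-of-envelope sketch only (real throughput will be lower once memory bandwidth and pre/post-processing overhead are accounted for):

# Theoretical ceiling: accelerator throughput divided by per-inference cost.
tops_fp16 = 25e12            # Jetson Orin NX: ~25 TOPS at FP16, in ops/s
flops_per_infer = 673.4e9    # large YOLOv6 at 1280x1280, FLOPs per inference

fps = tops_fp16 / flops_per_infer
print(f"~{fps:.0f} fps theoretical ceiling")  # ~37 fps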

Also later YOLO models (7+) are much more efficient (use less FLOPs for the same mAP50-95) and run faster.

Most neural-network inference-only accelerators (like Hailo's) use INT8 models, and, depending on your use case, the resulting drop in accuracy may be acceptable.

Ah I see, thanks for clarifying.

BTW, YOLOv7 actually came out earlier than YOLOv6. YOLOv6 has higher precision and recall figures, and I noticed that in practice it was slightly better.

My suspicion is that it's not trivial to translate the layer functions from YOLOv6 or YOLOv9 to Hailo-specific ones without affecting quality in unknown ways. If you manage to do it, do tell :)

The acceptability of a drop in performance depends heavily on the use case. In security, if I get woken up twice a night versus once in six months, I don't care how fast it is; it's not acceptable for that use case for me.

I would imagine that for many wildlife traps as well, a false positive would mean having to travel out and reset the trap.

But as I haven't personally dropped quantization to 8 bits, I appreciate other people's insights on the subject. Thanks for sharing yours.

@LukeD, I am looping in @Kamalama997 from the TRAPPER team who is working on porting MegaDetector and other models to RPi with the AI HAT+. Kamil will have more specific questions.

See full post
discussion

Animal Detect is live

Hey everyone, WE ARE FINALLY LIVE! After 8 months of hard work with @HugoMarkoff we are ready to present the first stable version of Animal Detect! Animal Detect is ...

1 4

Super happy to finally have Animal Detect ready for people to use. We are open to any feedback and hope to bring more convenient tools :)

See full post
discussion

We are releasing SpeciesNet

We're extremely excited to announce (and open source) SpeciesNet today, our AI model for species recognition in camera trap images, capable of recognising more than 2000 animals...

21 20

This is great news!

I am using rather high resolution images and have just ordered some 4K (8MP) camera traps.

The standard MegaDetector, run via Addax AI, is struggling a bit to detect animals that are relatively small in the frame, even though they cover quite a number of pixels. This naturally follows from the resizing MegaDetector performs.
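To make the effect concrete, a back-of-envelope sketch (assuming the detector resizes to roughly 1280 px on the long side; the exact behaviour may differ by version):

# How much a small animal shrinks when a 4K frame is resized for detection.
frame_width = 3840        # 4K (8 MP) camera trap frame width, px
animal_width = 120        # example animal width in the original frame, px
detector_input = 1280     # assumed detector input size (long side), px

scale = detector_input / frame_width
print(f"animal width after resizing: {animal_width * scale:.0f} px")  # ~40 px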

I have noticed:

but this seems not readily available in Addax AI. Is it somehow supported in SpeciesNet?

Cheers,

Lars

Hi Ștefan,

In my current case, I am trying to detect and count Arctic fox pups. Unfortunately, Arctic fox does not seem to be included in the training data of SpeciesNet, but even if it were, pups look quite different from adults.

After a quick correspondence with Dan Morris and Peter van Lunteren on the Addax AI GitHub, I was made aware of the image size option of MegaDetector. It seems to help somewhat to run the detection at full resolution (in my case up to 1920*1080). I have the impression that I get more good detections, and also fewer false detections (even without repeat_detection_elimination), by using higher resolution.

Dan offered to have a look at my specific challenge so I sent him 10K+ images with fox pups.

See full post
discussion

No-code custom AI for camera trap images!

Thanks to our WILDLABS award, we're excited to announce that Zamba Cloud has expanded beyond video to now support camera trap images! This new functionality...

3 3

When you process videos, do you not first break them down into a sequence of images and then process the images? I'm confused as to the distinction between processing videos versus images here.

We do, but the way the models handle the images differs depending on whether they're coming from videos or static images. A quick example: videos provide movement information, which can be a way of distinguishing between species. We use an implementation of SlowFast for one of our video models that attempts to extract temporal information at different frequencies. If the model has some concept of "these images are time sequenced" it can extract that movement information, whereas if it's a straight image model, that concept doesn't have a place to live. But a straight image model can use more of its capacity for learning e.g. fur patterns, so it can perform better on single images. We did some experimentation along these lines and did find that models trained specifically for images outperformed video models run on single images.
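As a rough illustration of that distinction, a minimal tensor-shape sketch (not our actual model code; a real SlowFast network uses two parallel pathways with lateral connections):

import torch

# A video clip keeps a time axis: (batch, channels, time, height, width).
# A single image has none: (batch, channels, height, width).
clip = torch.randn(1, 3, 32, 224, 224)   # 32 time-sequenced frames
image = torch.randn(1, 3, 224, 224)      # one static image

# SlowFast-style sampling: the "slow" pathway sees few frames (appearance),
# the "fast" pathway sees many frames (motion), via different temporal strides.
slow = clip[:, :, ::8]   # shape (1, 3, 4, 224, 224)
fast = clip[:, :, ::2]   # shape (1, 3, 16, 224, 224)
print(slow.shape, fast.shape)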

Hope that helps clear up the confusion. Happy to go on and on (and on)...

See full post
discussion

Drone & AI use for uncovering illegal logging camps

Hi all, I am working with WCS in Cambodia, and am curious as to whether anyone has used combinations of drone/AI/radar tech to uncover illegal logging camps in forest...

1 0

Hi Adam! 

Sounds like you have your work cut out for you. I have not used radar or AI systems for this sort of detection, but there are methods using change detection models to visualise changes in forests where logging may be occurring between two dates, using drone photogrammetry and GIS software. I have found these methods very effective when monitoring deforestation, especially because you can not only quickly visualise where deforestation has happened, but also quantify the damage at the same time. Let me know if you would like to learn more.
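As a sketch of the general idea (file names are hypothetical; this assumes two co-registered digital surface models from photogrammetry on the same grid, and an illustrative height-loss threshold):

import rasterio

# Difference two DSMs built from drone surveys on different dates.
with rasterio.open("dsm_2023.tif") as t0, rasterio.open("dsm_2024.tif") as t1:
    h0 = t0.read(1).astype("float32")
    h1 = t1.read(1).astype("float32")
    profile = t0.profile

diff = h1 - h0            # surface height change in metres
cleared = diff < -10.0    # illustrative threshold for lost canopy

# Pixel area from the affine transform gives the flagged area in m².
pixel_area = abs(profile["transform"].a * profile["transform"].e)
print(f"flagged area: {cleared.sum() * pixel_area:.0f} m²")

# Write the change raster for inspection in GIS software.
with rasterio.open("height_change.tif", "w", **{**profile, "dtype": "float32"}) as dst:
    dst.write(diff, 1)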

Kind regards

Sean Hill

See full post
discussion

WILDLABS AWARDS 2024 - Enhancing Pollinator Conservation through Deep Neural Network Development

Greetings Everyone, We are so excited to share details of our WILDLABS AWARDS project "Enhancing Pollinator Conservation through Deep Neural Network Development" and...

6 5

Great work! Do you think the night-time models also worked better due to a lack of interference from shadows being counted, or maybe issues around a non-standard background?

If it helps, I believe the creators of InsectDetect, which is open source, did a lot of work training their model to differentiate insect shadows from insects. Also, after testing their smart trap on flowers, they went with a standardised, non-lethal, attractive background.

InsectDetect: Build your own insect-detecting camera trap!

A GIF of a smart insect trap interface, using object detection to identify and track insects in real time. The background is a stylized image with large flower-like shapes simulating a floral or crop environment.
See full post
discussion

AI/ML opportunities

Hello, I’ll be graduating with a master’s in AI and machine learning in August, and I’m currently doing my industry project with Aurizn, where I’m segmenting high resolution...

1 0

Ritika, 

All the best! I hope someone provides a more substantive answer! 

I have also graduated with a master's in AI and ML recently, the difference being that I am at the end of my IT career. I am looking for a career switch into biodiversity, wildlife conservation, sustainability, or climate change.

I am trying my best to run a modern job search and am just warming up to it: LinkedIn, posting relevant content, being consistent, virtual networking, in-person networking. As a soon-to-be fresh graduate, you have access to a huge student and academic network. Keep at it consistently and I am sure you will find something.

Share the good news when it happens. :)

See full post
discussion

Dual-/Multi-Use Technology Strategies

Hi Everyone, I am new to the WildLabs community and relatively new to conservation technology. I have been working in this space since 2018 (marine and coral focused with NOAA),...

9 1

That is a great point, and the current international trade climate has been making supply chains even more difficult. This also deeply affects US companies, given how much US goods manufacturing and assembly happens in China. Over the last few years, I have seen US hardware companies (e.g. drone platform and component OEMs) sourcing their goods from India, Turkey, Canada, and more recently African and South American nations. Because of the last three to five years of increasingly restrictive and costly international hardware trade, there has been an emergence of specialized component manufacturers internationally. For European companies interested in providing hardware services to the US, I would suggest diversifying the supply chain beyond China. Given the current climate and trends, that added supply-chain resilience may be a good idea, regardless of work with the US.

This is more than the supply chain, though. The point was that the company itself cannot use any tech from the 5x companies for anything. So in my case, my ISP is incompatible. Essentially, I see the only companies making that kind of sacrifice being ones that want to devote themselves purely to defence.


Of course. That’s US defense as a customer. European defence is fully on the table.


It’s just sad that it’s not restricted to defence. US government wildlife organisations cannot buy European tech unless that European company is pure in their eyes.

True, the US ecosystem is a challenging space right now, for basically all sectors. 

We should not let the US chaos prevent us from engaging with opportunities in other nations' multi-use markets. A company's ability and journey to tap into other markets is unique to them (product, team, finances, infrastructure, agility), and some simply cannot adapt. There is no one-size-fits-all (or even -most) solution when it comes to multi-use strategies. It is important that we are systematic about evaluating the cost of adapting our product-service to a different market, and the value of the new opportunities in that market, without losing track of underlying conservation and social-good needs.

See full post
discussion

DeepFaune v.1.3 is out!

Hi, just wanted to let whoever is interested know that v1.3 of DeepFaune is out! DeepFaune is software that runs on any standard computer and allows you to identify species in...

3
See full post
discussion

scikit-maad community

 We, @jsulloa and @Sylvain_H, are the main contributors of scikit-maad, an open source Python package dedicated to the quantitative analysis of environmental audio recordings...

20 8

Hello!

I could be wrong, but from looking at the source code of the ACI calculation in scikit-maad, it appears that it autosets j = length of the signal. If you want to break it down into e.g. 5 windows, you can split the spectrogram into 5 chunks along time and then sum the ACI in each window. This gives a result more similar to the ones you got using the other methods. What did you set as nbwindows for seewave?

import numpy as np
import maad.sound
import maad.features

# Load the recording and compute an amplitude spectrogram.
s, fs = maad.sound.load('test_data_ACI/LINE_2003-10-30_20_00_34.wav')
Sxx, tn, fn, ext = maad.sound.spectrogram(s, fs, mode='amplitude')

full_time = Sxx.shape[1]                     # time samples in the spectrogram
j_count = 5                                  # number of chunks to break it into
window_size = np.floor(full_time / j_count)  # time samples per chunk

# Compute the ACI of each chunk and sum them.
ACI_tot = 0
for i in range(j_count):
    _, _, ACI = maad.features.acoustic_complexity_index(
        Sxx[:, int(i * window_size):int(i * window_size + window_size)])
    ACI_tot = ACI_tot + int(ACI)

This gives ACI_tot = 1516

Hi all, 

I have recently been utilising the ROI module of scikit-maad to locate non-biophonic sounds across low-sample rate Arctic hydrophone datasets and have a query about how ROI centroids are calculated...

Looking at the source code for the function "centroid_features" in .\maad\features\shape.py, I can see that the function calls "center_of_mass" from .\scipy\ndimage\_measurements.py. This to me suggests that the centroid should be placed where energy is focussed, i.e. the middle of the acoustic signature captured by the masking stage of ROI computations.

I'm a bit confused as to why the centroids I have appear to be more or less placed in the centre of the computed ROIs, regardless of the energy distribution within the ROI. The sounds I'm capturing have energy focussed towards lower frequencies of the ROI bands, so I would have expected the centroid to shift downwards as well.

Has anyone modified how ROI centroids are defined in their work? I'd be quite interested to set up centroids to signify where the peak energy level lies in the ROI, but I'm not quite sure how to do this cleanly.
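For concreteness, this is roughly what I have in mind (a minimal numpy sketch, assuming the ROI bounds come from the min_y/min_x/max_y/max_x columns of maad's ROI dataframe):

import numpy as np

# Place the "centroid" at the peak-energy pixel inside one ROI,
# instead of at its centre of mass.
def peak_position(Sxx, min_y, min_x, max_y, max_x):
    roi = Sxx[min_y:max_y + 1, min_x:max_x + 1]
    dy, dx = np.unravel_index(np.argmax(roi), roi.shape)
    return min_y + dy, min_x + dx  # (frequency bin, time bin) of the peak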

Any advice would be greatly appreciated, thanks!

Kind regards,

Jonathan

New stable release: v1.5.1

We are pleased to announce the latest release, with several important enhancements, fixes, and documentation improvements to ensure compatibility with the latest versions of SciPy and scikit-image, as well as with Xeno-Canto.

In this new version, two new alpha indices, aROI and nROI, are implemented; the latter is a good proxy for the average species richness per 1-minute soundscape.

See full post
article

Application of computer vision for off-highway vehicle route detection: A case study in Mojave desert tortoise habitat

Driving off-highway vehicles (OHVs), which contributes to habitat degradation and fragmentation, is a common recreational activity in the United States and other parts of the world, particularly in desert environments...

4 1
Fantastic and thank you! It will be very interesting to see how these tools can be applied to other species and anthropogenic features!
See full post