Group

Edge Computing / Feed

This group unites those working at the intersection of edge AI and conservation, focusing on real-time, on-device data processing for environmental monitoring. It is a space to share tools, models, and strategies for overcoming the challenges of remote, low-connectivity deployments.

discussion

Mini AI Wildlife Monitor

Hi All! I've been working on various versions of small AI edge compute devices that run object detection and identification models for ecological monitoring! I've recently been...

10 1

Aloha Luke,

This is an amazing tool that you have created. Are your cameras available for purchase? We use a lot of camera traps in the Hono O Na Pali Natural Area Reserve on Kauai to passively detect animals. We do not have the staff knowledge and capacity to build our own camera traps like this.

See full post
discussion

Exploring the Wild Edge: A Proposal for a New WILDLABS Group

Over the past year, I’ve found myself returning again and again to one big question: How can we make conservation tech work where the wild really begins, where the signal...

58 17

This sounds like a great idea; this is an area I want to do more work in.

Where can I sign up?

Hey Stuart,

Thank you for your interest! We're glad you'd like to be part of our journey. We're still in the process of setting up the group, and we'll let you know as soon as we're ready.

Thanks for your understanding! 🤗 

See full post
discussion

AI Edge Compute Based Wildlife Detection

Hi all! I've just come across this site and these forums, and it's exactly what I've been looking for! I'm based in Melbourne, Australia, and since finishing my PhD in ML I've been...

17 1

Sorry, I meant ONE hundred million parameters.

The Jetson Orin NX has ~25 TOPS of FP16 performance, and the large YOLOv6 model processing 1280x1280 input requires about 673.4 GFLOPs per inference. You should therefore theoretically get ~37 fps. You're unlikely to hit that exact number, but you should get somewhere in that region...
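The arithmetic behind that estimate can be sketched in a couple of lines (ideal-utilisation figures only; real throughput is lower once memory bandwidth, pre/post-processing, and NMS are accounted for):

```python
# Back-of-envelope throughput ceiling: peak ops/s divided by ops per
# inference. Assumes 100% utilisation, which hardware never achieves.
def max_fps(peak_tops: float, gflops_per_inference: float) -> float:
    return (peak_tops * 1e12) / (gflops_per_inference * 1e9)

print(round(max_fps(25, 673.4), 1))  # Jetson Orin NX FP16 vs. large YOLOv6 @ 1280 -> 37.1
```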

Also, later YOLO models (7+) are much more efficient (they use fewer FLOPs for the same mAP50-95) and run faster.

Most neural-network inference-only accelerators (like Hailo's) use INT8 models, and depending on your use case, the resulting drop in accuracy may be acceptable.
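For intuition, symmetric INT8 quantization maps each float weight onto a 256-level grid. A minimal, illustrative sketch follows; real toolchains (Hailo's compiler, TensorRT, etc.) calibrate scales per layer on representative data rather than per tensor like this:

```python
# Symmetric per-tensor INT8 quantization: choose a scale so the largest
# magnitude maps to 127, round onto that grid, clamp to [-128, 127].
def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

q, s = quantize_int8([0.02, -0.9, 0.45, 1.27])
# Per-weight rounding error is bounded by scale / 2.
```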

Ah I see, thanks for clarifying.

BTW, YOLOv7 actually came out earlier than YOLOv6. YOLOv6 has higher precision and recall figures, and I noticed that in practice it was slightly better.

My suspicion is that it's not trivial to translate the layer functions from YOLOv6 or YOLOv9 to Hailo-specific ones without affecting quality in unknown ways. If you manage to do it, do tell :)

The acceptability of a drop in performance depends heavily on the use case. In security, if I get woken up twice a night versus once in six months, I don't care how fast it is; it's not acceptable for that use case.

I would imagine that for many wild traps, too, a false positive would mean having to travel out and reset the trap.

But as I haven't personally dropped quantization to 8 bits, I appreciate other people's insights on the subject. Thanks for your insights.

@LukeD, I am looping in @Kamalama997 from the TRAPPER team who is working on porting MegaDetector and other models to RPi with the AI HAT+. Kamil will have more specific questions.

See full post
Link

acoupi: An Open-Source Python Framework for Deploying Bioacoustic AI Models on Edge Devices

New paper - "acoupi integrates audio recording, AI-based data processing, data management, and real-time wireless messaging into a unified and configurable framework. We demonstrate the flexibility of acoupi by integrating two bioacoustic classifiers...BirdNET and BatDetect2"

3
discussion

AI Animal Identification Models

Hi Everyone,I've recently joined WILDLABS and I'm getting to know the different groups. I hope I've selected the right ones for this discussion...I am interested in AI to identify...

25 4

I trained the model tonight. Much better performance! mAP has gone from 73.8% to 82.0%, and, running my own images through it, anecdotally it is behaving better.

After data augmentation (horizontal flip, 30% crop, 10° rotation) my image count went from 1194 total images to 2864 images. I trained for 80 epochs.
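For anyone replicating this, the flip and crop augmentations can be sketched in a few lines of plain Python (toy images as lists of pixel rows; a real pipeline would use an augmentation library or the training framework's built-in transforms, including the rotation step):

```python
# Toy augmentations on an image represented as a list of pixel rows.
def hflip(img):
    return [row[::-1] for row in img]

def center_crop(img, keep=0.7):  # a "30% crop" keeps ~70% of each side
    h, w = len(img), len(img[0])
    ch, cw = max(1, int(h * keep)), max(1, int(w * keep))
    top, left = (h - ch) // 2, (w - cw) // 2
    return [row[left:left + cw] for row in img[top:top + ch]]

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
flipped = hflip(img)        # [[3, 2, 1], [6, 5, 4], [9, 8, 7]]
cropped = center_crop(img)  # [[1, 2], [4, 5]]
```

Each augmented copy keeps its adjusted bounding-box labels, which is how 1194 source images become 2864 training images.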

inference results

Very nice!

I thought it would retain all the classes of YOLO (person, bike, etc) but it doesn't. This makes a lot of sense as to why it is seeing everything as a moose right now!

I had the same idea. ChatGPT says there should be possibilities though... 

You may want to explore more "aggressive" augmentations like the ones possible with  to boost your sample size.

Or you could expand the sample size by combining with some of the annotated datasets available at LILA BC like:

Cheers,

Lars

As others have said, pretty much all image models at least start with general-subject datasets ("car," "bird," "person", etc.) and have to be refined to work with more precision ("deer," "antelope"). A necessity for such refinement is a decent amount of labeled data ("a decent amount" has become smaller every year, but still has to cover the range of angles, lighting, and obstructions, etc.). The particular architecture (yolo, imagenet, etc.) has some effect on accuracy, amount of training data, etc., but is not crucial to choose early; you'll develop a pipeline that allows you to retrain with different parameters and (if you are careful) different architectures.

You can browse many available datasets and models at huggingface.co 

You mention edge, so it's worth mentioning that different architectures have different memory requirements and then, within the architecture, there will generally be a series of models ranging from lower to higher memory requirements. You might want to use a larger model during initial development (since you will see success faster) but don't do it too long or you might misestimate performance on an edge-appropriate model.
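A quick way to reason about those memory requirements is weight-footprint arithmetic. This covers weights only; activations and runtime overhead come on top, so treat the numbers as floors rather than totals:

```python
# Weight memory = parameter count x bytes per parameter.
# 1e6 params * B bytes/param = B megabytes per million params.
def weights_mb(params_millions: float, bytes_per_param: float) -> float:
    return params_millions * bytes_per_param

weights_mb(100, 2)  # a 100M-parameter model in FP16 -> 200.0 MB
weights_mb(100, 1)  # the same model quantized to INT8 -> 100.0 MB
```

This is one reason INT8 quantization matters on edge hardware: halving bytes per parameter halves the weight footprint before any speed benefit is considered.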

In terms of edge devices, there is a very real capacity-to-cost trade-off. Arduinos are very inexpensive, but are very underpowered even relative to, e.g., Raspberry Pis. The next step up is small dedicated coprocessors such as Google's Coral line. Then you have the Jetson line from NVIDIA, which are extremely capable, but are priced more towards manufacturing/industrial use.

See full post
discussion

Questions for Biologists relating to system requirements in acoustic research

I'm looking for more information about the specific needs in conservation biology and other fields that use acoustic monitoring for research. I've reviewed various discussions,...

21 1

Hey @jamie_mac

The FPODs from Chelonia were an exceptional read. I love the implementation and would have a great time developing systems like that. It's a very clever method for low-power detection and would provide an ultra-fast turnaround for recording, where AI would take an entire detection-window timescale to start recording. It also makes sense that marine environments would be an exceptionally difficult place for species detection.

I am interested in your idea of an OS marine ARU system. I have been doing a little research on marine acoustics and have a few ideas. This seems right up my alley. If you'd be interested in playing around with the idea and seeing if we can make something that works, I could likely make quick work of this.

 

Hi Morgan.

We are actually at the very early stages of developing a new OS marine PAM device. This is a side project (read no funding currently) but if you're interested, I'd be happy to have a chat about what we are doing? 

See full post
discussion

MegaDetector on Edge Devices ??

Hi all.  I'm a relatively new member of the community and have been trying to consume the many excellent videos, discussions and resources before asking questions. ...

7 1

Has anyone tried running the MegaDetector model through an optimizer like Amazon SageMaker Neo? It can reduce the overall memory footprint and possibly speed up inference on devices like Raspberry Pi and Jetson.

Great work Luke @sheneman! Having a relatively lightweight bit of hardware to run MDv5 on will open up opportunities for many more people. The upfront financial cost of a Jetson Nano is an order of magnitude less than a computer with a beefy GPU. 

I haven't used any of the NVIDIA edge devices yet. Do you think it would be possible for someone to make a disk image with MDv5 pre-installed, to lower the entry barrier of learning a new system, installing software, environments, packages, etc.?

A difficulty I have seen for some projects is not having access to or not having internet bandwidth to utilize cloud compute services. If anyone needs to churn through camera trap image processing in a remote field station this may now be possible!

Great work! Unfortunately, I'm not familiar with programming, but I'm computer-friendly enough to follow a good tutorial. I was wondering if you will share your findings?

Thanks for the work.

See full post
event

Event: Arm’s AI Virtual Tech Sessions

Arm
Want to sharpen your machine learning skills and get advice from experts on getting started with tinyML development? Arm's new AI Virtual Tech Sessions for Software Developers will walk you through a series of demos and...

0
See full post
discussion

ML at the Edge

There's a discussion over in camera traps about the design of a device to run autonomously (in an inaccessible location) with reliable power (solar) but low bandwidth,...

0
See full post