Group

Open Source Solutions / Feed

This group is for anyone interested in open-source technologies for ecology and conservation. We welcome contributions from both makers and users, whether active or prospective. Here, we believe in the need for open-source hardware and software to do good science and research. It is a place to share novel or existing technologies, exchange resources, discuss new projects, ask for advice, find collaborators, and advocate for adopting open-source technologies.

discussion

Camera Trap Data Visualization Open Question

Hi there, I would like to get some feedback and insight into how practitioners manage and visualize their camera trap data. We realized that there already exist many web-based...

6 0

Hey Ed! 

Great to see you here and thanks a lot for your thorough answer.
We will be checking out Trapper for sure - cc @Jeremy_ ! A standardized data exchange format like Camtrap DP makes a lot of sense, and we have it in mind as we build the first prototypes.
 

Our main requirements are the following:

  • Integrate with the camtrap ecosystem (via standardized data formats)
  • Make it easy to run for non-technical users (most likely an Electron application that works across OSes)
  • Make it useful to explore camtrap data and generate reports

 

In this first prototyping stage, it is useful for us to keep things lean while keeping the interface (the data exchange format) in mind, so that we can move fast.
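As a minimal sketch of that integration point, assuming the standard Camtrap DP layout (a data package with deployments, media, and observations CSV tables, and a `scientificName` column in the observations table), counting observations per species might look like:

```python
import csv
from collections import Counter

def species_counts(observations_file):
    """Tally observations per species from a Camtrap DP observations table.

    `observations_file` is an open text file (or any iterable of CSV lines)
    with the standard Camtrap DP `scientificName` column; rows without a
    species (blanks, unclassified detections) are skipped.
    """
    counts = Counter()
    for row in csv.DictReader(observations_file):
        name = (row.get("scientificName") or "").strip()
        if name:
            counts[name] += 1
    return counts
```

A report generator could then be fed from something as simple as `with open("observations.csv", newline="") as f: print(species_counts(f))`; anything that speaks the same table schema stays interchangeable.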


Regards,
Arthur

Quick question on this topic, to take advantage of those who already know a lot about it. Once you have extracted all your camera data and are going through the AI object-detection phase that identifies the animal types, what file format containing all of the time + location + label data do most people consider the most useful? I imagine it's whatever format the most expressive visualization software around uses. Is this correct?

A quick look at the Trapper format suggested to me that it's metadata from the camera traps, used to perform the AI matching phase. But it was a quick look; maybe it's something else? Is the Trapper format also for holding the labelled results? (I might actually be asking the same question as the person who started this thread, but in different words.)

Another question. Right now pretty much all camera traps trigger on either PIR sensors or small AI models. Small AI models tend to have the limitation that they only accurately detect and recognise animal types at close distances, where the animal appears very large, and I have question marks over whether small models avoid making a lot of classification errors even in those circumstances (I expect that they do make them, and that the errors are simply sorted out back at the office, so to speak). PIR sensors typically only see animals within, say, 6-10 m. Maybe an elephant could be detected a bit further; small animals only closer still.

But what about when camera traps can reliably see and recognise objects across a whole field, perhaps hundreds of meters?

Then, in principle, you don't have to deploy as many traps for a start. But I would expect you would need a different approach to how you report and visualize this, as the coordinates of the trap itself are not going to give you much information. We would potentially be in a situation with much more accurate and rich biodiversity information.

Maybe it's even possible to determine, to a greater degree of accuracy, where several different animals in the same camera trap image are spatially located, by knowing the 3D layout of what the camera can see along with the location and size of each animal.
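As a rough sketch of that idea, under a simple pinhole-camera assumption (known focal length in pixels and an assumed typical height for the species), the distance to a detected animal can be estimated from its bounding-box height. All numbers here are hypothetical:

```python
def estimate_distance_m(real_height_m, bbox_height_px, focal_length_px):
    """Pinhole-camera estimate: distance = f * H / h.

    real_height_m   -- assumed real-world height of the animal (metres)
    bbox_height_px  -- height of the detection bounding box (pixels)
    focal_length_px -- camera focal length expressed in pixels
    """
    return focal_length_px * real_height_m / bbox_height_px
```

For example, an animal assumed to be 1.5 m tall spanning 30 px, seen through a lens with a 2800 px focal length, would come out at 140 m. Combined with the camera's position and bearing, that per-detection distance is exactly the kind of extra information current per-trap formats don't capture.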

I expect that current camera trap data formats may fall short of expressing that information in a sufficiently useful way, given the extra information available in principle: it could be multiple coordinates per species, per image, that need to be registered.

I'm likely to be confronted with this soon, as the systems I build use state-of-the-art models with large parameter counts that can recognise species at much greater distances. In a recent discussion here I showed detection of a polar bear at a distance of 130-150 m.

Right now I would say it's unknown how much more information about species we will be able to gather with this approach, as images were not being triggered in this manner until now. Maybe it's far greater than we would expect? We have no idea right now.

discussion

AI for Bird and Bat Recognition

Hi everyone, I'm working on a project involving the automatic recognition of bats and birds from their audio recordings in Italy, with a possible evolution into other European...

1 1

Hi Lorenzo,

I highly recommend the OpenSoundscape package (developed by the Kitzes Lab at the University of Pittsburgh): there are workflows to build your own CNNs, the documentation is really thorough, and the team is very responsive to inquiries. They also have a bioacoustics 'model zoo' that lists relevant models. The Perch model from Google would be good to look into as well.

Some recent papers I've seen that might also be worth checking out -

Hope that helps a bit!

Link

acoupi: An Open-Source Python Framework for Deploying Bioacoustic AI Models on Edge Devices

New paper - "acoupi integrates audio recording, AI-based data processing, data management, and real-time wireless messaging into a unified and configurable framework. We demonstrate the flexibility of acoupi by integrating two bioacoustic classifiers...BirdNET and BatDetect2"

3
discussion

Free/open-source app for field data collection

Hi all, I know something similar was asked a year ago but I'd like some advice on free applications or software for collecting data in the field on an Android device (for eventual...

11 2

Thanks! Essentially, field technicians, students, researchers, etc. go out into the field, find one of our study groups, and from early morning until evening record the behaviour of individual animals at short intervals (e.g., individual traits like age-sex class, ID, what the animal is doing, how many conspecifics it has within a certain radius, and what kind of food it is eating if it happens to be foraging). Right now things work well in our system, but the app we are using is somewhat expensive, so we want to move towards open source.

discussion

Underwater wireless communication

Does anyone have any knowledge/experience of underwater wireless communication (i.e. sensor to surface)? Specifically using acoustic modems, see here.

4 0

It's probably much easier to find a company selling acoustic modems than to try to create your own; it is a very challenging environment to work in. I have about 10 years of experience working with underwater acoustics (as a non-engineer).

To give you the best advice, I need a bit more information about your specific application. Underwater communication typically relies on acoustic modems because radio waves don't travel well through water. For longer distances, a hybrid solution is often used. This could involve an acoustic modem transmitting data from your underwater sensor to a surface buoy or platform. The buoy could then relay the data to shore using traditional radio communication, such as 900MHz radios (even commercially available options like Ubiquiti radios, depending on distance and line-of-sight). To help me understand your needs and recommend the right approach, could you tell me more about:

Data Rate Requirements: How much data do you need to transmit, and how often?

Range: What is the vertical distance between your sensor and the surface receiver?

Environment: Are you working in shallow or deep water? What is the water temperature, salinity, and expected noise level?

Power Budget: How much power is available for the underwater sensor and modem?

Cost: What is your budget for the acoustic modem system?

Integration: How will you integrate the modem with your sensor and surface receiving system?

Real-time or Delayed: Do you need the data in real-time, or can it be stored and transmitted later?

Steve and I are looking to develop a low-cost benthic drift camera with a live video feed. Our hope is to use an acoustic modem to give us a low-quality feed for navigation and hazard avoidance. This could be as simple as a small black-and-white image refreshed every second. What we need to know from the system is: are we at the bottom, and are we about to hit an obstacle?

Data Rate Requirements: A very low-quality (360p) video stream; it could be black and white and low-fps to reduce the bit rate (hopefully 1 kbps)

Range: At least 500 m depth, but ideally down to 2,000 m (deployed below a vessel on a tether)

Environment: Deep water, with temperatures down to 0 °C. Open-ocean salinity (33-35 PSU). Limited noise apart from the deployment vessel (engine and echosounder)

Power Budget: The transmitter must run from a battery capable of at least 1 hour of continuous operation.

Cost: Ideally under £1,000 (GBP) for the transmitter and receiver.

Integration: Don't know yet. We are hoping to plug into an Arduino or equivalent.

Real-time or Delayed: We need real-time transmission with very limited lag, for slow-moving obstacle avoidance.

It's a big ask, but any pointers would be very welcome
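A quick back-of-envelope check on that 1 kbps hope, with hypothetical numbers (a 640x360 frame at 1 bit per pixel and an optimistic 50:1 compression ratio), shows how slow the refresh would be:

```python
def seconds_per_frame(width, height, bits_per_pixel, compression_ratio, link_bps):
    """Time to send one compressed frame over the acoustic link."""
    compressed_bits = width * height * bits_per_pixel / compression_ratio
    return compressed_bits / link_bps

# A 640x360 1-bit frame compressed 50:1 over a 1 kbps link:
print(seconds_per_frame(640, 360, 1, 50, 1000))  # about 4.6 seconds per frame
```

So even under generous assumptions, a 1 kbps link gives roughly one frame every few seconds rather than 1 fps; a much smaller "are we near the bottom?" image, or a higher-rate modem, may be needed.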

Link

Open Sustainable Technology directory

May be of interest to WILDLABS in terms of open source solutions, conservation tech, and learning about similar efforts. The founder of Open Sustainable Technology wrote this post about their work: https://opensource.net/closing-the-gap-accelerating-environmental-open-...

0
discussion

Automatic extraction of temperature/moon phase from camera trap video

Hey everyone, I'm currently trying to automate the annotation process for some camera trap videos by extracting metadata from the files (mp4 format). I've been tasked to try...

10 1

I just noticed that TrapTagger has integrated AI reading of timestamps for videos. I haven't had a chance to try it out yet, but it sounds promising. 

Small update: I uploaded >1000 videos from a SPYPOINT FLEX camera and TrapTagger worked really well. Another program that I'm currently interested in is Timelapse, which uses the file creation date/time. I haven't yet tried it, but it looks promising as well. 

Hi Lucy,



I only now realised that this is an old thread and you most likely found a solution long ago, but this might be of interest to others.

As mentioned previously, it is definitely much better to take moon phase from the date and location. While moon phase in general is not a good proxy for illumination, the moon phase symbol on the video is even worse, as it generalises the moon cycle into a few discrete categories. For calculating moon phase you can use the suncalc package in R, but if you want a deeper look and a more detailed proxy for moonlight intensity, I wrote a paper on it, with an accompanying R package called moonlit.
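To illustrate why the date alone is enough, here is a crude mean-cycle approximation of moon phase in Python (packages like suncalc or moonlit do this properly, with position and actual illumination; this sketch only shows the principle):

```python
from datetime import datetime, timezone

SYNODIC_MONTH_DAYS = 29.530588853  # mean length of a lunar cycle, in days
REF_NEW_MOON = datetime(2000, 1, 6, 18, 14, tzinfo=timezone.utc)  # a known new moon

def moon_phase_fraction(when):
    """Fraction of the synodic cycle elapsed at `when` (a UTC datetime).

    0.0 = new moon, 0.5 = full moon. Mean-cycle approximation only;
    the true phase can differ by several hours.
    """
    days = (when - REF_NEW_MOON).total_seconds() / 86400.0
    return (days % SYNODIC_MONTH_DAYS) / SYNODIC_MONTH_DAYS
```

Even this rough version beats reading a coarse phase symbol off the video overlay, since it returns a continuous value rather than a handful of categories.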

When it comes to temperature, I also agree that what is recorded in the camera is often very inconsistent, so unless you have multiple cameras to average your measurements, you are probably better off using something like the NCEP/NCAR Reanalysis (again, there is an R package for that). But if you insist on extracting temperature from the picture, I tried it using tesseract and wrote a description here: 

 
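One step that trips people up after running tesseract is pulling the temperature token out of the recognized overlay text. A minimal stdlib-only sketch (the exact overlay format varies by camera model, so the pattern here is an assumption):

```python
import re

def parse_temperature_c(ocr_text):
    """Pull the first Celsius reading out of OCR'd overlay text.

    Handles common variants like '23C', '23 °C', '-5°C'.
    Returns None when no temperature-like token is found.
    """
    m = re.search(r"(-?\d{1,2})\s*°?\s*C\b", ocr_text)
    return int(m.group(1)) if m else None
```

OCR of low-contrast overlays is noisy, so in practice you would also want sanity checks (e.g. rejecting readings outside a plausible range for your site).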

Good luck!

discussion

Can wireless charging technology be used for animal sensors?

Recently I have read many papers on animal research, and I found that one of the most difficult problems is how to solve the problem of charging sensors. After all, for many small...

15 0

Although this technology may not be mature now, and there are still many problems to be discussed and solved, I think it will have good application prospects. 

Unfortunately, I am just an undergraduate student in China, and what I can do is very limited. Maybe in the future I can do some similar research of my own. Just like you, I am studying hard now. 

Thank you very much for this forum and the professionals who responded to me. You have allowed me to see more perspectives and many things I didn't realize. Thank you very much!

Link

ClimateTriage

Discover a meaningful way to contribute to open source projects focused on climate technology and sustainability.

3
discussion

Recycled & DIY Remote Monitoring Buoy

Hello everybody, My name is Brett Smith, and I wanted to share an open-source remote monitoring buoy we have been working on in Seychelles as part of our company named "...

5 5

Hi Brett, it is great to hear from you! I love your remote monitoring buoy project, especially the recycling of DFAD buoys and the design of custom-built ones full of interesting features. The expandability of the navigation buoys, enabling them to be configured with hydrophones, cameras, or water quality monitors, is a great feature. Recycling the DFADs' circuits, making your own hydrophone, and designing a super-efficient system are all terrific. It is also encouraging to see the deployment of your early system, particularly the live video and water quality data you have collected. Your knowledge of electrical engineering and software development will definitely make a positive impact on the WILDLABS community. I look forward to hearing more of your discoveries and any news you share. Keep doing what you excel at!

Thank you for sharing! 

Hi Brett,

 

I am interested in developing really low-cost multi-channel underwater acoustic recorders. Can you tell me a bit more about the board and other parts you were using for yours? You can reach me at chris@wilcoanalytics.org.

 

Thanks,

 

Chris

discussion

Resources wanted to advise on the business model of an emerging low-cost device

Dear Wildlabers,Our task team is working very hard to design a low-cost autonomous hydrophone for research, education and citizen science. The team is technically brilliant at...

13 1

Dear @jared ,

Thank you so much for your answer. We have been focusing very deeply into the technology part of the challenge and not yet on the legal issues or product lifecycle. Definitely a lot of food for thoughts there.... thanks!


Jared's answer is excellent, IMHO. One level down, you have a couple of options. One is to charge for the convenience of buying a finished, assembled, and tested product; that's for the folks who have money and no time. Those who have time but no money can build it themselves from the open source. One danger with this approach is low-cost Chinese clones coming onto the market; whether they do probably depends a lot on the size of the market.

Another option is to withhold some 'pro/plus' features from the open source; that depends on whether you can identify such features. A similar option is to offer features that big, well-funded outfits need. For example, you could offer calibrated versions at a premium, if that's applicable and needed by such organizations. In many cases companies sell devices cheaply, so they are usable by someone who has one or a couple, and then charge for a management system for those who have dozens or hundreds of devices; I don't know whether this applies to your case. All this comes down to whether you can segment your customer base so you can create premium features for the wealthy segment.

article

August 2024 OSS paper round-up

In this instalment: HyperCoast for hyperspectral data, Fluxbot 2.0 for soil CO2 flux sensing, BirdVoxDetect for identifying bird calls, snipit for single nucleotide polymorphism visualisation and more!

2
discussion

Global model for Livestock detection in airborne imagery - Data, Applications, and Needs

Hi all,I was at a AI for ecology working group a few weeks ago and was asked to look into an airborne model for detecting livestock to assist in land-management, agriculture and...

6 3

It looks like the website has not been updated since 2022 and indeed you need to register. If you go under 'Manage Account' you may be able to register. I tried to register but I got an error saying 'Login failed! Account is not active'.  I got an email saying "Your account must be approved before being activated. Once your account has been approved, you will be notified." So I am waiting for the account to be approved...

I'll keep you posted

Hi Ben!

Great initiative! 

A review of deep learning techniques for detecting animals in aerial and satellite images

https://www.sciencedirect.com/science/article/pii/S1569843224000864#b0475 

lists a number of data sets (incl. one published by you it seems...) 

Also, @dmorris keeps a list of Terrestrial wild animal images (aerial/drone):

https://lila.science/otherdatasets#images-terrestrial-animals-drone

which seems like it might be useful for you.


Hi @benweinstein !

  1. Surely a general detector might be very useful for detecting objects in aerial imagery! Maybe something similar to what MegaDetector does with camera trap images: generally detecting people, animals, and vehicles (and thus also flagging empty photos). This could greatly improve semi-automated procedures. It could also serve as a first step for context-specific detectors or classifiers developed on top of this general one.
  2. There is also the WAID dataset that is readily available. Our research group could also provide images containing cows, sheep, and deer from South America.
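For context on how such a general detector plugs into a workflow: MegaDetector-style output is a JSON structure with per-image detections (a category for animal/person/vehicle plus a confidence score). A minimal sketch of flagging non-empty images from that structure (field names as I understand the batch-output format; verify against your version):

```python
def nonempty_images(md_output, conf_threshold=0.2):
    """Return file names of images with at least one confident detection.

    Assumes a MegaDetector-style batch-output layout:
    {"images": [{"file": ..., "detections": [{"category": ...,
                                              "conf": ..., "bbox": [...]}]}]}
    """
    return [
        img["file"]
        for img in md_output["images"]
        if any(d["conf"] >= conf_threshold for d in img.get("detections", []))
    ]
```

An aerial livestock detector that emitted the same kind of per-image detection list would slot straight into downstream review tools built around this pattern.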

Best

event

WCS Conservation Technology Webinar Series

Join this next edition of ConsTech webinar series, focusing on Patrol Planning with SMART and Remote Sensing at the Keo Seima Wildlife Sanctuary in Eastern Cambodia.

2 1
Is there a registration link we should fill out? The link in the original post goes right to the Teams meeting itself.