Group

Software Development / Feed

This group is for anyone interested in applying software to conservation and wildlife research. Whether you're a developer eager to contribute to conservation or a newbie with valuable data and ideas but limited software experience, this group connects people with diverse expertise. It provides a space for asking questions, sharing resources, and staying informed about new technologies and best practices.

event

AniMove Summer School 2025

AniMove is a collective of international researchers with extensive experience in the topics of animal movement analysis, remote sensing and conservation. The AniMove Workshop is a two-week intensive training course for...

0
See full post
discussion

Software QA Topics

Hi everyone, What should we share or demo about Software Quality Assurance? Alex Saunders and I, the two Software QA people at Wildlife Protection Solutions (WPS) are going to...

1 3

Hi everyone,

What should we share or demo about Software Quality Assurance? 

Alex Saunders and I, the two Software QA people at Wildlife Protection Solutions (WPS) are going to do a community call to knowledge share on software testing and test automation in the 3rd or 4th week of January.

We've listed a few QA topics that we could talk about in this 1-2 minute poll here and would like your feedback on topic priority.

Thanks for your feedback and we look forward to connecting! We'll also post when we have an exact date and time pinned down.

Sounds like a great initiative—looking forward to it! I’d love to hear more about your real-world test automation setup, especially any tools or frameworks you’ve found effective at WPS. It’d also be helpful to see how QA fits into your dev workflow and any challenges you’ve faced specific to conservation tech. I just filled out the poll and can’t wait to see what topics get chosen. Thanks, Alex and team, for organizing this!

See full post
discussion

Prototype for exploring camera trap data

Hi, I would like to start a discussion around a prototype that aims at improving consumption of camera trap data. How is it different (in theory) from existing tools? I...

36 4
  • Regarding species richness, isn't it covered by the activity tab where we can see the entire list of detected species? What else do you think would be helpful? I can imagine a better UI focused on species, with more information for each species and easier search, but the raw info would be roughly the same.

I think the app sort of covers this, using the pie chart overlay on Leaflet in the activity tab. However, it would be nice to have a more direct way of visualizing species richness (e.g. scale the radii of the circle markers with the number of species detected). In addition to this you may want to think about visualizing simple diversity indices (there's alpha diversity, which captures the species richness at each trap, gamma diversity to summarize the total richness in the area, and beta diversity to assess how different species compositions are between traps). Note: I do not use diversity indices often enough to provide more specific guidance. @ollie_wearn is this what you were referring to?
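To make that concrete, a rough sketch of how those indices could be computed from a detection table (assuming a pandas DataFrame with deployment_id and species columns; the column names are just for illustration):

import pandas as pd

def diversity_summary(detections: pd.DataFrame):
    """Alpha, gamma and (Whittaker's) beta diversity from a table of detections."""
    per_trap = detections.groupby("deployment_id")["species"].nunique()  # alpha diversity per trap
    alpha_mean = per_trap.mean()                                         # mean alpha diversity
    gamma = detections["species"].nunique()                              # total richness in the area
    beta = gamma / alpha_mean                                            # Whittaker's beta = gamma / mean alpha
    return per_trap, alpha_mean, gamma, beta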

 

  • Regarding occupancy, that makes sense to me. The only challenge is that I'm not using Python or R at the moment, because we use JavaScript for easier user interfaces with web technologies. It's still possible to add Python support, but I'd delay that as much as possible to keep things simple. Anyway, it's a technical challenge that I'm pretty sure I can solve.

I can help you with setting up the models in R, Python or Stan/C++. But I have no idea on how to overcome the technical challenge that you are referring to. I agree with @ollie_wearn that allowing variable selection and model building would take this too far. One thing I would suggest is to allow users to fit a spatially-explicit version of the basic occupancy model (mind that these can be slow). This type of model leverages the correlations in species detections between trap locations to estimate differences in occupancy across the study area, rather than just estimating a single occupancy for the entire area.
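For what it's worth, the non-spatial starting point is small enough to sketch here: a minimal single-season occupancy model (constant psi and p, no covariates) fit by maximum likelihood in Python might look like this (a sketch only, not the spatially-explicit version discussed above):

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # inverse-logit

# y: detection/non-detection matrix, one row per camera site, one column per sampling occasion
y = np.array([[0, 1, 0], [0, 0, 0], [1, 1, 0], [0, 0, 1]])

def neg_log_lik(params):
    psi, p = expit(params)            # occupancy and detection probability, from the logit scale
    k = y.sum(axis=1)                 # number of detections per site
    J = y.shape[1]                    # number of occasions
    lik = psi * p**k * (1 - p)**(J - k) + (1 - psi) * (k == 0)
    return -np.log(lik).sum()

fit = minimize(neg_log_lik, x0=np.zeros(2))
psi_hat, p_hat = expit(fit.x)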

 

  • What about relative abundance? My readings always mention occupancy, species richness, abundance and density. I've read and asked enough about density to know that it's hard for unmarked animals and might not be necessary. If I understood correctly, relative abundance can give similar insights, and as a conservationist, you probably want to know the trend of relative abundance over time.

Yes, I would leave users with the following options: a UI switch that lets them pick one of:

  1. the number of observations
  2. the number of individuals detected
  3. the relative abundance index or RAI (based on 1.)
  4. the RAI (based on 2.)

to visualize on the Leaflet map, and in the bar chart on the side in the activity tab.
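As a rough sketch of options 1 and 3 (hypothetical pandas DataFrames and column names; RAI here is detections per 100 trap-nights):

import pandas as pd

def relative_abundance_index(observations: pd.DataFrame, deployments: pd.DataFrame) -> pd.DataFrame:
    """RAI per species and deployment, defined as detections per 100 trap-nights."""
    counts = (observations.groupby(["deployment_id", "species"])
              .size().rename("n_obs").reset_index())                       # option 1: number of observations
    counts = counts.merge(deployments[["deployment_id", "trap_nights"]], on="deployment_id")
    counts["rai"] = 100 * counts["n_obs"] / counts["trap_nights"]           # option 3: RAI based on 1.
    return counts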

 

Regarding density: you could add a tab that calculates density using the Random Encounter Model (REM), which is often used when estimating density of unmarked animals without info on recaptures.
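For reference, the REM point estimate (Rowcliffe et al. 2008) is a one-line formula once you have estimates of animal day range and the camera detection zone; a sketch with made-up example numbers:

import math

def rem_density(n_detections, camera_days, day_range_km, det_radius_km, det_angle_rad):
    """Random Encounter Model density (animals per km^2) for unmarked animals."""
    trap_rate = n_detections / camera_days                                   # y / t
    return (trap_rate / day_range_km) * math.pi / (det_radius_km * (2 + det_angle_rad))

# example: 40 detections over 800 camera-days, 2 km/day day range, 10 m radius, 40 degree angle
density = rem_density(40, 800, 2.0, 0.010, math.radians(40))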

 

Regarding activity patterns: I would also add a tab where users can visualize either diel or annual activity cycles (often called activity patterns) computed through the activity R package (or integrate this in an existing tab). And maybe even allow computing overlap in daily activity cycles among selected species.
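The activity and overlap R packages do this with circular kernel density estimates; as a crude illustration of the underlying idea, a histogram-based diel overlap coefficient could be sketched like this (hours_a and hours_b being detection times as hour-of-day for two species):

import numpy as np

def diel_overlap(hours_a, hours_b, bins=24):
    """Coefficient of overlap between two diel activity distributions (0 = disjoint, 1 = identical)."""
    dens_a, edges = np.histogram(hours_a, bins=bins, range=(0, 24), density=True)
    dens_b, _ = np.histogram(hours_b, bins=bins, range=(0, 24), density=True)
    return float(np.sum(np.minimum(dens_a, dens_b)) * (edges[1] - edges[0]))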

 

If you manage to include all of these, then I think your app covers 90% of use cases.

Some other features worth discussing:

  1. merging/comparing multiple projects
  2. in line with calculating overlap between activity cycles, allow computing a spatial overlap between two species, or even a spatio-temporal overlap

 

@Jeremy_ For the Python implementation of basic occupancy models (as suggested by @ollie_wearn ), please refer to these two projects:

I second @martijnB's suggestion to use spatially explicit occupancy models (as implemented in R, e.g., https://doserlab.com/files/spoccupancy-web/). However, this would need to be added to both of the aforementioned Python projects.

Lively and informative discussion; I would very much like to contribute if there is some active development work on this.
I have recent experience with using Model Context Protocol (MCP) to integrate various tools & data repositories with LLMs like Claude. I believe this could be a good idea/path whereby we can do the following:
1. use the images & labels along with any metadata, and chunk/index/store them in a vector DB
2. integrate with existing data sources by exposing the data through an MCP server
3. use MCP-friendly LLM clients (like Claude) to query, visualize and do other open-ended things, leveraging the power of LLMs and camera trap data from various sources.
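As a very rough sketch of step 1, using ChromaDB as the vector store (the record content and field names are made up for illustration):

import chromadb

client = chromadb.Client()
collection = client.create_collection("camtrap_detections")

# index one detection record: a text summary plus structured metadata
collection.add(
    ids=["img_000123"],
    documents=["Panthera pardus detected 2024-06-01 03:12 at deployment D-07, confidence 0.91"],
    metadatas=[{"species": "Panthera pardus", "deployment": "D-07", "datetime": "2024-06-01T03:12"}],
)

# an MCP tool (or any other client) can then query it in natural language
hits = collection.query(query_texts=["nocturnal leopard activity"], n_results=5)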
 

Regards,

Ajay

See full post
discussion

The Boring Fund 2024 - MoveApps

 We are honored to be among the winners of The Boring Fund 2024! Thank you WILDLABS and Arm for selecting our project. MoveApps is a free no-code analysis platform for...

7 8

We are pleased to inform you that we have now finalized points 2 and 3. Here are some details of the update:

  • App browser improvements:
    • Improved overview and search: we have added a description of each category and improved the search and filtering options.
    • Searching for Apps within a Workflow: we have added the option to include Apps that are not compatible with the IO type, making it easier to decide if a translator App is needed to include one of the incompatible Apps.

  • Public Workflows improvements:
    • Improved overview: the public Workflows are now organized by categories, which can also be used for filtering.
    • More information: the details overview now contains the list of Apps included in each Workflow.
    • Sharing Workflows: when creating a public Workflow you will have to select one or more existing categories, but you can also always request a new category.

Go and check it out in MoveApps!

We are pleased to inform you that we have implemented points 1 and 4 and with this have finalized the project. The latest improvements:

  • Improvement in findability of help documentation: we have started to populate the platform with links (question mark icon) to the relevant sections of the user manual.
  • The log files of each App can now be downloaded and, when an error occurs, sent directly to MoveApps support. Find more details here.

Again, a great thank you for giving us the opportunity to implement these changes. We think they have greatly improved the user-friendliness of MoveApps.

 

 

See full post
discussion

Prospective NSF INTERN 

Hello all, My name is Frank Short and I am a PhD Candidate at Boston University in Biological Anthropology. I am currently doing fieldwork in Indonesia using machine-learning...

1 2

My name is Frank Short and I am a PhD Candidate at Boston University in Biological Anthropology. I am currently doing fieldwork in Indonesia using machine-learning-powered passive acoustic monitoring focused on wild Bornean orangutans (and other primates). I am reaching out because, as a student with a National Science Foundation Graduate Research Fellowship, I am eligible for the NSF INTERN program, which supports students in non-academic internships by covering a stipend and other expenses; the only caveat is that the internship must be in-person and not remote. I was wondering if any organizations in conservation technology would be interested in a full-time intern who would be coming in with their own funding?

In addition to experience with machine learning and acoustics through training a convolutional neural network for my research, I also have worked with GIS, remote sensing, and animal movement data through other projects. Further, I have experience in community outreach both in and outside of academic settings, as I previously worked for the Essex County Department of Parks and Recreation in New Jersey for 3 years where I created interpretive signs, exhibits, newsletters, brochures, and social media posts. Now while doing my fieldwork in Indonesia, I have led hands-on trainings in passive acoustic monitoring placement and analysis as well as given talks and presentations at local high schools and universities. 

I would love to be able to use this opportunity (while the funding still exists, which is uncertain moving forward due to the current political climate in the US) to exercise and develop my skills at a non-academic institution in the conservation technology sphere! If anyone has any suggestions or is part of an organization that would be interested in having me as an intern, please contact me here or via my email: fshort@bu.edu. Thank you!

Hi Frank, your work sounds incredibly valuable and well-aligned with current needs in conservation tech. With your strong background in machine learning, acoustics, GIS, and outreach, you’d be an asset to many organizations. I’d recommend looking into groups like Rainforest Connection, Wildlife Acoustics, or the Conservation Tech Directory (by WILDLABS)—they often work on acoustic monitoring and might be open to in-person internships, especially with funding already in place. Best of luck finding the right match—your initiative is impressive!

See full post
discussion

No-code custom AI for camera trap images!

Thanks to our WILDLABS award, we're excited to announce that Zamba Cloud has expanded beyond video to now support camera trap images! This new functionality...

3 3

When you process videos, do you not first break them down into a sequence of images and then process the images? I'm confused as to the distinction between processing videos versus images here.

We do, but the way the models handle the images differs depending on whether they're coming from videos or static images. A quick example: videos provide movement information, which can be a way of distinguishing between species. We use an implementation of SlowFast for one of our video models that attempts to extract temporal information at different frequencies. If the model has some concept of "these images are time sequenced" it can extract that movement information, whereas if it's a straight image model, that concept doesn't have a place to live. But a straight image model can use more of its capacity for learning e.g. fur patterns, so it can perform better on single images. We did some experimentation along these lines and did find that models trained specifically for images outperformed video models run on single images.
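As a toy illustration of the frame-sampling idea behind SlowFast (not our actual implementation, just the general concept):

import numpy as np

clip = np.random.rand(64, 224, 224, 3)   # 64 video frames (time, height, width, channels)
slow_pathway = clip[::16]                 # few frames; feeds a high-capacity branch for appearance
fast_pathway = clip[::2]                  # many frames; feeds a lightweight branch for motion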

Hope that helps clear up the confusion. Happy to go on and on (and on)...

See full post
discussion

AI/ML opportunities

Hello, I’ll be graduating with a masters in AI and machine learning in August, and I’m currently doing my industry project with Aurizn, where I’m segmenting high resolution...

1 0

Ritika, 

All the best! I hope someone provides a more substantive answer! 

I have also recently graduated with a masters in AI and ML. The difference being that I am at the end of my IT career. I am looking for a career switch to biodiversity, wildlife conservation, sustainability or climate change.

I am trying to do my best at a modern job search. Just warming up to it. LinkedIn, posting relevant posts, being consistent. Virtual networking. In-person networking. As a soon-to-be fresh graduate, you have access to a huge student and academic network. Keep at it consistently and I am sure you will find something.

Share the good news when it happens. :)

See full post
discussion

scikit-maad community

 We, @jsulloa and @Sylvain_H, are the main contributors of scikit-maad, an open source Python package dedicated to the quantitative analysis of environmental audio recordings...

20 8

Hello!

I could be wrong, but from looking at the source code of the ACI calculation in scikit-maad, it appears that it auto-sets j = length of signal. If you want to break it down into e.g. 5 windows, you can split the spectrogram into 5 chunks along time and then sum the ACI in each window. This gives a result more similar to the ones you got using the other methods. What did you set as nbwindows for seewave?

 

import numpy as np
import maad

s, fs = maad.sound.load('test_data_ACI/LINE_2003-10-30_20_00_34.wav')
Sxx, tn, fn, ext = maad.sound.spectrogram(s, fs, mode='amplitude')

full_time = Sxx.shape[1]  # number of time samples in the spectrogram
j_count = 5  # number of chunks we want to break it into
window_size = np.floor(full_time / j_count)  # time samples per chunk

ACI_tot = 0
for i in range(j_count):
    _, _, ACI = maad.features.acoustic_complexity_index(
        Sxx[:, int(i * window_size):int(i * window_size + window_size)])
    ACI_tot = ACI_tot + int(ACI)

This gives ACI_tot = 1516

Hi all, 

I have recently been utilising the ROI module of scikit-maad to locate non-biophonic sounds across low-sample rate Arctic hydrophone datasets and have a query about how ROI centroids are calculated...

Looking at the source code for the function "centroid_features" in .\maad\features\shape.py, I can see that the function calls "center_of_mass" from .\scipy\ndimage\_measurements.py. This to me suggests that the centroid should be placed where energy is focussed, i.e. the middle of the acoustic signature captured by the masking stage of ROI computations.

I'm a bit confused as to why the centroids I have appear to be more or less placed in the centre of the computed ROIs, regardless of the energy distribution within the ROI. The sounds I'm capturing have energy focussed towards lower frequencies of the ROI bands, so I would have expected the centroid to shift downwards as well.

Has anyone modified how ROI centroids are defined in their work? I'd be quite interested to set up centroids to signify where the peak energy level lies in the ROI, but I'm not quite sure how to do this cleanly.
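For illustration, what I have in mind is something like this (just a sketch; I'm assuming the ROI table carries pixel bounds as min_y/max_y/min_x/max_x, which may not match the actual column names):

import numpy as np

def peak_energy_centroids(Sxx, df_rois):
    """For each ROI bounding box, return the (row, col) of the maximum-energy pixel."""
    peaks = []
    for _, roi in df_rois.iterrows():
        patch = Sxx[int(roi.min_y):int(roi.max_y) + 1, int(roi.min_x):int(roi.max_x) + 1]
        r, c = np.unravel_index(np.argmax(patch), patch.shape)
        peaks.append((int(roi.min_y) + r, int(roi.min_x) + c))
    return peaks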

Any advice would be greatly appreciated, thanks!

Kind regards,

Jonathan

 

New stable release : v1.5.1

We are pleased to announce the latest release with several important enhancements, fixes and documentation improvements to ensure compatibility with the latest versions of SciPy and scikit-image as well as with Xeno-Canto.

In this new version, 2 new alpha indices are implemented, aROI and nROI, the latter being a good proxy of the average species richness per 1 min soundscape.



 

See full post
discussion

Field-Ready Bioacoustics System in Field Testing 

Hi all — I’m Travis, an automation engineer and conservation tech builder currently testing a system I’ve developed called Orpheus: a fully integrated, field-...

4 2

Hi Carly,

Thanks so much for your thoughtful message—and for introducing me to Freaklabs! BoomBox looks awesome, and it’s exciting to see how closely our goals align. There’s definitely potential for collaboration, and I’d be happy to chat more. Their system is super efficient and I think both of our systems have a place in this space. 

Affordability and reliability were key considerations when I started building Orpheus. I wanted to create something rugged enough to survive in the field year-round while still being accessible for conservationists with limited budgets. The full-featured unit is €1500, and the basic model is €800. That pricing reflects both the hardware and the considerable time I’ve spent writing and refining the system—it’s all about balancing performance, durability, and keeping it sustainable for the long term.

Even the base unit is more than just a playback device. It logs every playback event, duration, and species, with enough onboard storage for two years of data, and it automatically converts the logs to line protocol for easy integration into platforms like InfluxDB.
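For anyone unfamiliar with line protocol, a playback event ends up as a single line of text, roughly like this (illustrative field names only, not the exact Orpheus schema):

from time import time_ns

species_tag = "Numenius arquata".replace(" ", "\\ ")   # spaces must be escaped in tag values
line = (f"playback,device=orpheus01,species={species_tag} "
        f'duration_s=34i,file="curlew_01.wav" {time_ns()}')
# -> playback,device=orpheus01,species=Numenius\ arquata duration_s=34i,file="curlew_01.wav" <timestamp_ns>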

On top of that, Orpheus actively logs and graphs temperature, humidity, atmospheric pressure, and battery voltage. During deep sleep, it interpolates the environmental data to preserve meaningful trends without wasting energy. You can view any of these on its 5" touch screen or in the cross-platform app that will support both Android and iOS once I'm done programming it.

As for audio specs:

  • Recording is supported up to 96 kHz
  • Playback is full 24-bit, in both MP3 and WAV formats
  • The system currently supports recording audio clips, reviewing them, and even adding those clips directly to playlists on the device.

That said, for bat research, I know ultrasonic capability is essential. While the current hardware doesn’t capture over 100kHz, I’ve already done the research and identified alternative audio interfaces that would support that range. If that’s a need researchers are interested in, I’d be open to building out a dedicated version to meet those requirements.

Power-wise, it runs indefinitely on solar, even under partly cloudy conditions. It uses a LiFePO₄ battery, and depending on usage, it can operate for up to two weeks on battery alone. It also supports external power from 12V or 24V systems, and solar input from 12V to 70V, so it's pretty adaptable to various field setups. It can also operate from -5 to 70°C (still testing that), but the hardware should be capable according to specs. You're correct though that in places like the rainforest that could be challenging and an alternative would need to be considered.

The software is written modularly to allow for expansion based on user needs. For instance, I’ve already integrated support for a rain sensor that can pause playback if the user chooses that, and could easily include PIR, microwave, or other sensors for more specialized triggers.

Regarding durability, I’m currently testing mesh cable sheathing to deter rodents and other wildlife from chewing the wires—this was a concern raised by one of the teams using Orpheus, and I’m designing around it.

Also, Orpheus includes a seasonal scheduling engine—you can define your own seasons (like Migration, Breeding, etc.) and assign unique playback playlists to each. The device uses astronomical data (sunrise/sunset) based on your provided lat/lon and time zone, and automatically adjusts timing offsets like “1 hour before sunrise.” The goal is truly fire-and-forget deployment.
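To give a feel for the kind of calculation involved (not Orpheus's actual code), an offset like "1 hour before sunrise" can be derived from lat/lon with the astral library:

from datetime import date, timedelta
from zoneinfo import ZoneInfo
from astral import Observer
from astral.sun import sun

observer = Observer(latitude=52.2, longitude=0.12)                       # example site
times = sun(observer, date=date(2025, 6, 1), tzinfo=ZoneInfo("Europe/London"))
playback_start = times["sunrise"] - timedelta(hours=1)                   # "1 hour before sunrise"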

I'm open to adding any features or sensors that might be useful within reason.

I’m curious though: what specs would make a recording device for bats an indispensable tool? What features don’t already exist on the market that should?


 

Warm regards,
Travis

I love the look of the system! We almost called our new sensor Orpheus, but decided against it as there is already a microphone named that! I'd love to see a bit more about the technical implementation! Is this running off of a CM5 or something different? 

Hi Ryan, hmm, I had no idea there was a microphone named that. I thought about how it’s used to lure birds for netting, and I like Greek mythology, so I thought it was a perfect fit, but hmm, I may have to change the name. I considered using a CM, but I wanted the system to be as efficient as possible. I am using an RPi Zero 2 W with eMMC. To ensure the UI stays responsive I used some backend tricks like thread pooling. It works well and resources stay in check. The challenging part is ensuring thread handling is done gracefully and carefully to prevent race conditions. What sort of sensor have you been developing?

See full post
article

Nature Tech for Biodiversity Sector Map launched!

Carly Batist and 1 more
Conservation International is proud to announce the launch of the Nature Tech for Biodiversity Sector Map, developed in partnership with the Nature Tech Collective! 

1 0
Thanks for sharing @carlybatist and @aliburchard! About the first point, lack of data integration and interpretation will be a bottleneck, if not a death blow, to the whole...
See full post
discussion

Data entry for shorebird nesting

Good afternoon community of Wildlabs, my name's Rolando and I wonder if any of you have used any app for bird data entry in the field that can be used on a tablet or...

3 0

Hi Rolando, 

ODK or KoboToolBox are fairly easy to use and manage for creating forms that can be deployed on tablets or phones, at least based on my experience from two years ago.

Best, 

See full post
discussion

Generative AI for simulating landscapes before and after restoration activities

Hi all. Has anyone come across any generative AI tools that could be trained and used to generate photorealistic landscapes (in a web application) from habitat maps and then re-...

1 0

Yep, we are working on it:

1/ segment

2/ remove unwanted ecosystem

3/ get local potential habitat

4/ generate

5/ add to picture

 

See full post
discussion

United Nations Open Source Principles

FYI, I just came across the United Nations Open Source Principles, which was recently adopted by the UN Chief Executive Board’s Digital Technology Network (DTN): It has been...

1 5

All sound, would be nice if there were only 5, though!

See full post
discussion

Quality Assurance and Testing Recap

Hello everyone, Matt Audette and I wanted to summarize our recent talk covering quality assurance and testing briefly. Just a reminder that you can find a recording of that talk,...

1 1

Great post! A very important topic. The one I would add is the importance of version control. Too much confusion comes when software versions are mismanaged. We have had plenty of issues in the past when project partners just make changes without any warning or notes! It makes testing a nightmare, wasting time and degrading quality.

See full post
discussion

What are open source solutions anyway?

What “counts” as open source? And why is that important to biodiversity conservation? It's great to be one of the three conveners of the WILDLABS Open Source...

33 12

Open source technologies are a game-changer for biodiversity conservation. They give us the freedom to use, study, modify, and share vital tools and knowledge that help advance research in meaningful ways. For conservationists, this means we can adapt technologies to meet local needs, improve existing tools, and make new innovations available to everyone—creating a more collaborative and sustainable future for our planet.

It’s exciting to see the impact of open source in conservation already, with tools like Mothbox, Fieldkit, and OpenCTD helping to drive progress. I'm curious—how do the formal definitions of open source resonate with you? How do they shape the way we approach conservation?


Can’t wait to hear your thoughts! Let's keep the conversation going.

Sorry to be a stickler on syntax when there is a richer discussion about community here - but I believe a true "open source" project is a functionally complete reference design that anyone can build upon with no strings attached. If the community isn’t provided with enough information to fully build and iterate on the design independently, then the project doesn’t truly meet the spirit of open source. 

As a developer and engineer, I’ve observed that sometimes projects crowdsource free engineering work under the guise of being "open source." While this can have benefits, it can feel like asking for a free lunch from clients and customers. 

Advanced features—like enterprise-level data management or tools for large-scale deployments—can reasonably remain proprietary to sustain the project financially. Transparency is critical here. If the foundational components aren’t fully open, it would be more accurate to describe the project as "community-driven" or "partially open." And then, as an engineer/developer, I wouldn't be frustrated when I went to explore a project marked "open source" only to find that I had been misled.

Just my two cents, and I really appreciate the thoughtful discussion here. The open source community has been a massive influence on me. Everything I do at work would not be possible without it. In many ways, "open source" or "public domain" projects represent the true know-how of our society.

See full post
discussion

Definitions for open source software & hardware and why they're important

Recent conversations (including this previous thread) have reminded me that while I've been involved in various open source tech communities for years, I sometimes implicitly –...

2 0

Thanks for this excellent and thought-provoking post, Pen. I agree this looks like a binary yes/no issue, but there is a spectrum. There could also be philosophical nuances. For example, does excluding honey from a vegan diet meet the ethical criteria of veganism? It's an animal product, so yes, but beekeeping generally doesn't have the same exploitative potential as cow, sheep, or pig husbandry, right? However, looking strictly at the definition, honey is out if you want to be vegan.

Back to software! Isn’t the main issue that companies falsely claim to offer open source hardware/software? To avoid this, do you then have to create an accreditation system? Who polices it? Is it fair? Would users care that their software has the accredited open source stamp of approval? Ultimately, we need definitions to define boundaries and speak a common language.

Thanks @VAR1, great insights! Funny you mentioned the honey thing; @hikinghack said the same in response on the GOSH forum.

I think the point I'm trying to make with the vegan comparison is that while it might not be 100%, it is close enough for us to have productive conversations about it without running in circles because we can't even agree on what we are talking about. 

As for open source tech, there actually is accreditation for open source hardware (at least of a sort). The Open Source Hardware Association has a fairly mature certificate program: 

I am genuinely undecided whether such a formal accreditation system is required for open source software. My undecided-ness comes back to the food/agriculture analogy, where a similar issue exists for organic certification. Being certified organic could possibly, in some cases, be beneficial. However, certification can also be very onerous for small organic farmers who can't afford to get it. 

But before we even think about accreditation, I echo your last sentence that we need definitions to define boundaries. These definitions, as I argue in my original post above, are not only about principles and philosophy; they are also a practical necessity for enabling effective communication!

See full post
discussion

AddaxAI - Free AI models for camera traps photos identification

AddaxAI is an application designed to streamline the work of ecologists dealing with camera trap images. It’s an AI platform that allows you to analyse images on your...

15 17

Hi Caroline @Karuu ,

The model is still in development. Unfortunately, I'm not sure how long it will take as it is not my top priority at the moment. However, you can still use EcoAssist to filter out the empty images, which is generally already a huge help. 

Would that work for the time being? 

 

See full post
discussion

Camera Trap Data Visualization Open Question

Hi there, I would like to get some feedback, insight into how practitioners manage and visualize their camera trap data. We realized that there exist already many web based...

6 0

Hey Ed! 

Great to see you here and thanks a lot for your thorough answer.
We will be checking out Trapper for sure - cc @Jeremy_! A standardized data exchange format like Camtrap DP makes a lot of sense, and we have it in mind as we build the first prototypes.
 

Our main requirements are the following:

  • Integrate with the camtrap ecosystem (via standardized data formats such as Camtrap DP; see the sketch after this list)
  • Make it easy to run for non-technical users (most likely an Electron application that can work across OSes).
  • Make it useful to explore camtrap data and generate reports
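For context, a Camtrap DP package is just a datapackage.json plus deployments/media/observations CSV resources, so a first prototype can read it with plain pandas (a sketch, without the frictionless tooling; the path is hypothetical):

import json
from pathlib import Path
import pandas as pd

pkg_dir = Path("my-camtrap-dp-package")
meta = json.loads((pkg_dir / "datapackage.json").read_text())

deployments = pd.read_csv(pkg_dir / "deployments.csv")
media = pd.read_csv(pkg_dir / "media.csv")
observations = pd.read_csv(pkg_dir / "observations.csv")

print(meta.get("name"), len(deployments), "deployments,", len(observations), "observations")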

 

In the first prototyping stage, it is useful for us to keep it lean while keeping in mind the interface (data exchange format) so that we can move fast.


Regards,
Arthur

Quick question on this topic to take advantage of those that know a lot about it already. Once you have extracted all your camera data and are going through the AI object detection phase which identifies the animal types, what file format containing all of the time + location + label data for the photos do most people consider the most useful? I'm imagining that it's some format that is used by the most expressive visualization software around, I suppose. Is this correct?

A quick look at the Trapper format suggested to me that it's metadata from the camera traps, used to then perform the AI matching phase. But it was a quick look, maybe it's something else? Is the Trapper format also for holding the labelled results? (I might actually be asking the same question as the person that started this thread, but in different words.)

Another question. Right now pretty much all camera traps trigger on either PIR sensors or small AI models. Small AI models tend to have the limitation that they only accurately detect and recognise animal types at close distances, where the animal appears very large in frame, and I have question marks over whether small models, even in those circumstances, avoid making a lot of classification errors (I expect that they do make them, and that these are simply sorted out back at the office, so to speak). PIR sensors would typically only see animals within, say, 6-10 m; maybe an elephant could be detected a bit further, and small animals only even closer.

But what about when camera traps can reliably see and recognise objects across a whole field, perhaps hundreds of meters?

Then in principle you don't have to deploy as many traps for a start. But I would expect you would need a different approach to how you want to report this and then visualize it as the co-ordinates of the trap itself is not going to give you much  information. We would be in a situation to potentially have much more accurate and rich biodiversity information.

Maybe it's even possible to determine to a greater degree of accuracy where several different animals from the same camera trap image are spatially located, by knowing the 3D layout of what the camera can see and the location and size of the animal.

I expect that current camera trap data formats may fall short of being able to express that information in a sufficiently useful way, considering the extra information available in principle, and that it could be multiple coordinates per species for each image that need to be registered.

I'm likely going to be confronted with this soon, as the systems I build use state-of-the-art, large-parameter models that can see species types over much greater distances. In a recent discussion here I showed detection of a polar bear at a distance of 130-150 m.

Right now I would say it's unknown how much more information about species we will be able to gather with this approach, as images were not being triggered in this manner until now. Maybe it's far greater than we would expect? We have no idea right now.

See full post
discussion

Machine learning for bird pollination syndromes

I am a PhD student working on bird pollination syndromes in South Africa and looking specifically at urbanization's effect on sunbirds and sugarbirds. By drawing from a large...

2 2

Hi @craigg, my background is machine learning and deep neural networks, and I'm also actively involved with developing global geospatial ecological models, which I believe could be very useful for your PhD studies.  

First of all, to your direct challenges: I think there will be many different approaches, which could serve more or less of your interests.

As one idea that came up, I think it will be possible in the coming months, through a collaboration, to "fine-tune" a general purpose "foundation model" for ecology that I'm developing with University of Florida and Stanford University researchers.  More here.

You may also find the 1+ million plant trait inferences, searchable by native plant habitats at Ecodash.ai, to be useful. A collaborator at Stanford is actually from South Africa, and I was just about to send him this, e.g. https://ecodash.ai/geo/za/06/johannesburg

I'm happy to chat about this, just reach out!  I think there could also be a big publication in Nature (or something nice) by mid-2025, with dozens of researchers demonstrating a large number of applications of the general AI techniques I linked to above.

See full post