Subject: “Baa-bridge” – AI Sheep Stress Reduction, Seeking Genius Input!
Hey Wildlabs community,
I’m Lloyd Fulham, a French Canadian visionary in Quebec City, launching “Baa-bridge”—an AI-powered tool to decode sheep bleats, reduce stress, and boost welfare using zero-budget tech (two computers, internet, Teachable Machine).
What We’ve Done:
- Recorded 10 sheep clips from freesound.org (~5 minutes of audio total), played them through a speaker, re-recorded via mic, and trained a classifier in Teachable Machine. Hit 70-80% accuracy (some runs 90%!) classifying calm vs. stressed vs. active bleats (see the sketch after this list). First smile tonight!
- Used the mic re-recording hack to bypass upload errors, showing the idea is viable on modest hardware.
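For anyone who'd rather reproduce this in code than in the Teachable Machine UI, a rough equivalent might look like the sketch below. The folder layout, MFCC features, and classifier are illustrative assumptions, not our exact setup:

```python
# Minimal sketch of a bleat classifier from WAV clips, assuming clips
# are organized in one folder per label (the layout is hypothetical).
import glob
import librosa
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

LABELS = ["calm", "stressed", "active"]

def mfcc_features(path):
    """Load a clip and summarize it as mean MFCCs (a common baseline)."""
    audio, sr = librosa.load(path, sr=16000, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)  # one fixed-length vector per clip

X, y = [], []
for label in LABELS:
    for path in glob.glob(f"clips/{label}/*.wav"):  # hypothetical layout
        X.append(mfcc_features(path))
        y.append(label)

# Hold out clips the model never sees, so accuracy means something.
X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

Mean MFCCs are a crude one-vector summary per clip, but they make a reasonable zero-budget baseline to compare against.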
What We Need:
- Genius tips on refining accuracy (e.g., more farm data, or epoch tweaks: currently 1,000, which may be overkill; see the early-stopping sketch after this list).
- Farmer validation—visiting Quebec sheep farms this weekend for live recordings. Suggestions for farms or protocols?
- Scaling to other species (“Woofy” for dogs next)—insights on audio AI for wildlife welfare?
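On the epochs question: one way to stop guessing the right number is to hold out a validation split and stop training when it stops improving. A minimal Keras sketch with placeholder data (this is not what Teachable Machine does internally, just the general idea):

```python
# Sketch: let a validation set decide when training stops, instead of
# hard-coding 1,000 epochs. Model size and data are placeholders.
import numpy as np
import tensorflow as tf

# Placeholder features/labels: swap in your real clips' features.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 20)).astype("float32")  # 30 clips x 20 features
y = tf.keras.utils.to_categorical(rng.integers(0, 3, size=30), 3)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # calm/stressed/active
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Stop once validation loss stops improving; keep the best weights.
# In practice this halts far short of 1,000 epochs.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=20, restore_best_weights=True)

model.fit(X, y, validation_split=0.3, epochs=1000,
          callbacks=[early_stop], verbose=0)
```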
Why It Matters:
This isn’t just tech; it’s love and evolution. Reducing sheep stress saves farmers time, preserves biodiversity, and deepens the human-animal connection. My longer-term goal is a “MAP” (Molecular Assembly Protocol) to transform matter for joy, not pain, starting with sheep voices: imagine a universal starting material (akin to pluripotent stem cells) that could be instructed to become almost anything, say a simple carbon-based compound or a quantum dot. We’d need a way to “signal” or “program” it; light, magnetic fields, or other energy inputs could serve as the signaling molecules and environmental cues. The challenge would be ensuring precision, scalability, and efficiency, much as stem cells must differentiate correctly without errors.
Join Me: Reply or DM—I’m on Wildlabs.net, GitHub, or email pllpost@yahoo.ca. Let’s make sheep sing (or bleat!) with joy, together!
Cheers,
Lloyd Fulham
“Baa-bridge” Creator
21 March 2025 12:18pm
I'm sure others here can comment better than I on models for classifying animal sounds, but from an ML point of view, a key concern is getting enough data. Ten recordings does not sound like a lot (although how long are they?), and 1,000 epochs does sound like a lot. It is very possible that your model is just learning to memorize the inputs, and that it will generalise poorly.
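A quick sanity check is to compare accuracy on the clips the model trained on against clips it never saw. Something like this (scikit-learn, with random placeholder data just so the snippet runs):

```python
# Memorization check: train vs. held-out accuracy. X and y stand for
# whatever features/labels your pipeline produces.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 20))      # 30 clips x 20 features (placeholder)
y = rng.integers(0, 3, size=30)    # calm/stressed/active as 0/1/2

clf = LogisticRegression(max_iter=1000)
held_out = cross_val_score(clf, X, y, cv=5).mean()  # 5-fold CV
training = clf.fit(X, y).score(X, y)
print(f"training accuracy: {training:.2f}, held-out: {held_out:.2f}")
# A big gap (high training, near-chance held-out) means memorization.
```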
Julian Richardson