hcfman|2 years ago
The use of microcontrollers in the Swift unit suggests that the sound localization is all post-processing. Is there any value in, or intention of, real-time sound localization?
Clearly that would require much more battery power, but I was wondering whether it would be considered added value to this particular research, if it were possible.
hcfman|2 years ago
And if they do, then I have a project that can help with that :) A Raspberry Pi based ARU that would draw a bunch more power, but would be able to perform local real-time elephant sound event detection for use with real-time localization.
https://github.com/hcfman/sbts-aru
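The real-time piece being discussed here usually reduces to estimating time differences of arrival (TDOA) between microphones, most often with GCC-PHAT. Below is a minimal Python sketch of that step, assuming two already-synchronized mono signals and a made-up 8 kHz sample rate; it is illustrative only, not code from sbts-aru.

    import numpy as np

    def gcc_phat(sig, ref, fs, max_tau=None):
        """Estimate the delay of `sig` relative to `ref`, in seconds, via GCC-PHAT."""
        n = sig.size + ref.size  # zero-pad to avoid circular-correlation wraparound
        R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
        R /= np.abs(R) + 1e-12   # PHAT weighting: keep phase, discard magnitude
        cc = np.fft.irfft(R, n=n)
        max_shift = n // 2
        if max_tau is not None:
            max_shift = min(int(fs * max_tau), max_shift)
        # Re-center the correlation so index max_shift corresponds to zero lag.
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        return (np.argmax(np.abs(cc)) - max_shift) / fs

    # Toy example: a decaying 20 Hz "rumble" arriving at mic B 5 ms after mic A.
    fs = 8000
    t = np.arange(fs) / fs
    rumble = np.sin(2 * np.pi * 20 * t) * np.exp(-2 * t)
    mic_a = rumble
    mic_b = np.roll(rumble, int(0.005 * fs))
    print(f"estimated delay: {gcc_phat(mic_b, mic_a, fs, max_tau=0.01) * 1e3:.1f} ms")

With two or more microphone pairs, each TDOA constrains the source to a hyperbola, and intersecting them gives a position fix; running that loop continuously on-device is what drives up the power budget mentioned above.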
brudgers|2 years ago
The size of elephant ears suggests they would work better with longer wavelengths.
gus_massa|2 years ago
You can have tiny ears if that would be better, but you need a throat of the correct size for breathing and eating.
Terr_|2 years ago
If the longer wavelengths are useful because they aren't impeded by obstacles like masses of forest leaves, then the same properties probably mean ear-flaps aren't really effective to shape/concentrate them either. (Although they could be used as a filter to block out higher frequencies.)
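The intuition in this sub-thread checks out numerically. A quick back-of-the-envelope sketch, using a textbook 343 m/s speed of sound and an assumed ~2 m ear flap (neither figure is from the article):

    c = 343.0  # speed of sound in air, m/s, at roughly 20 C
    for f_hz in (10, 20, 300, 5000):
        print(f"{f_hz:>5} Hz -> wavelength {c / f_hz:7.2f} m")
    # 10-20 Hz rumbles have 17-34 m wavelengths, an order of magnitude larger
    # than a ~2 m ear flap, so the pinna can't shape or focus them, and
    # leaf-sized obstacles barely scatter them. By a few kHz the wavelength
    # is under 10 cm, which is why higher frequencies are easy to block out.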
joshuaheard|2 years ago
i'm curious as to the transducers used for the playback experiments. also wondering if there aren't things like sibilance in human speech that make communication, if perhaps not understood, at least audible to them. interesting!
jaystraw|2 years ago