In the publication below [1], a comparison of latency is made between the Teensy, Arduino Uno, xOSC, Bela, and Raspberry Pi. One of the findings is that serial over USB is slower than MIDI over USB, even though the two are technically very similar.
The Axoloti [2] is not included in the publication but is also of interest when building low-latency audio devices.
The Axoloti is a superlative design for audio, at both the software and hardware layers. The Arduino, not so much.
The article author doesn't mention whether they've also abandoned the Arduino MIDI libs and written their own. There's probably some latency upstream that can be reduced as well.
Not directly related to the OP, but since it's a popular setup: the best you can hope for with a USB 2.0 FTDI or other USB serial converter is the USB full-speed frame rate of 1 kHz, i.e. 1 millisecond in one direction.
> My first idea was to use a high-speed camera, using the video image to determine when pad is hit and the audio to detect when sound comes from the computer. However even at 120 FPS, which some modern cameras/smartphones can do, there is 8.33 ms per frame. So to find when pad was hit with higher accuracy (1ms) would require using multiple frames and interpolating the motion between them.
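The interpolation idea in the quoted passage can be sketched with a parabolic fit around the frame where the pad marker bottoms out. This is a hypothetical illustration with made-up marker positions, not the article's actual method:

```python
# Hypothetical sketch: sub-frame impact timing from 120 FPS video by
# fitting a parabola to the marker position around its minimum.
# The positions in `y` are invented for illustration.
import numpy as np

FPS = 120.0
# Marker height (pixels) per frame; the pad is struck around frame 3.
y = np.array([40.0, 30.0, 12.0, 2.0, 10.0, 28.0])

i = int(np.argmin(y))                # frame nearest the impact
y0, y1, y2 = y[i - 1], y[i], y[i + 1]
# Vertex of the parabola through the three samples, in frames:
offset = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
t_impact_ms = (i + offset) / FPS * 1000.0
print(f"impact at {t_impact_ms:.2f} ms")  # finer than the 8.33 ms frame spacing
```

With a reasonably smooth marker trajectory, the vertex estimate lands well inside one frame, which is how multiple frames can buy back ~1 ms accuracy from 8.33 ms spacing.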
I wonder how accurate you could get if you hit the pad with the phone and used the phone's accelerometer to figure out when the impact occurred?
Sample rates of accelerometers are usually around 100 Hz, so 10 ms between samples. Some phones might go as high as 250 Hz, which might start to be usable.
One challenge when using different sensors is establishing a joint timeline precisely. You might need to synchronize them with an event observed by both at the same time, like the clapperboard used in filmmaking.
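The shared-event alignment could be done by cross-correlating the two recordings around the "clapper". A minimal sketch, assuming both streams have already been resampled to a common rate; the signals and rates here are made up:

```python
# Hypothetical sketch: aligning two sensor timelines via a shared
# impulsive event, using cross-correlation to find the offset.
import numpy as np

rate = 1000.0                 # Hz, common sample rate after resampling
t = np.arange(500) / rate
clap = np.exp(-(((t - 0.1) * 200.0) ** 2))   # sharp event at 100 ms

sensor_a = clap                    # reference stream
sensor_b = np.roll(clap, 37)       # same event, 37 samples later

corr = np.correlate(sensor_b, sensor_a, mode="full")
lag = int(np.argmax(corr)) - (len(sensor_a) - 1)  # samples B lags behind A
offset_ms = lag / rate * 1000.0
print(f"sensor B lags A by {offset_ms:.1f} ms")   # ~37 ms
```

The sharper the shared event, the narrower the correlation peak, so a hand clap or stick hit works better than a slow gesture.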
jarmitage | 8 years ago
- http://bela.io
- http://github.com/belaplatform
- Many of the papers there from 2015 or later feature Bela: http://instrumentslab.org/publications/
tyingq | 8 years ago
The pair of PRUs (programmable real-time units) in the BeagleBone Black is a large part of it.
kazinator | 8 years ago
An 80 microsecond period corresponds to 12.5 kHz. That's in the range of the upper harmonics that determine the "crispness" or "air" of a tone.
Loudspeakers and filters will introduce more phase shift than this.
Oh, ... and sound travels a whopping 2.7 centimeters through air in 80 us.
I don't think any event in music needs to be timed to 80 us.
"Dude, did you pull down the 12.5 kHz band on the 31 band eq again? My hi-hat sounds late!"
"No way man, look: you moved your friggin' stool 2.7 cm from where it was before, see?"
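For reference, the period-to-frequency and time-of-flight arithmetic behind those numbers, as a quick sketch:

```python
# Check the numbers: an 80 us period as a frequency, and how far
# sound travels through air in that time.
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 degrees C
period = 80e-6           # seconds

freq_khz = 1.0 / period / 1000.0
distance_cm = SPEED_OF_SOUND * period * 100.0
print(f"{freq_khz:.1f} kHz, {distance_cm:.1f} cm")  # 12.5 kHz, 2.7 cm
```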
_pmf_ | 8 years ago
shams93 | 8 years ago
joren- | 8 years ago
[1] http://www.eecs.qmul.ac.uk/~andrewm/mcpherson_nime2016.pdf
[2] http://www.axoloti.com/
fit2rule | 8 years ago
jononor | 8 years ago
bjt2n3904 | 8 years ago
revelation | 8 years ago
voltagex_ | 8 years ago
tzs | 8 years ago
jononor | 8 years ago
MrZeus | 8 years ago
Yes. Yes, they did.
- http://asio4all.com/