thelastquestion|3 years ago
The person types in one of two ways: either eye tracking moves the cursor and the BCI device performs the click, or a custom interface cycles through characters one at a time and the BCI device alone is used to say yes to the highlighted character.
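The second mode is essentially a single-switch scanning speller. A minimal sketch, assuming the BCI yields one boolean "select" event per dwell period (the event stream and frequency-ordered alphabet are my assumptions, not the actual product's design):

```python
import itertools

# Frequency-ordered alphabet cuts average scan time versus A-Z order.
ALPHABET = "ETAOINSHRDLUCMFWYPVBGKQJXZ "

def scan_type(select_events):
    """Cycle through characters; emit the current one when the BCI says 'yes'."""
    typed = []
    cursor = itertools.cycle(ALPHABET)
    current = next(cursor)
    for selected in select_events:
        if selected:
            typed.append(current)
            cursor = itertools.cycle(ALPHABET)  # restart the scan for the next character
        current = next(cursor)
    return "".join(typed)
```

One "yes" per character means throughput is bounded by scan speed, which is why letter ordering and prediction matter so much for these interfaces.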
So intent decoding isn’t yet at the level your thought experiment is concerned with, but in general you could certainly implement something that decodes an initial intent before subsequent recording (e.g., thinking about waking up the device). Trivially, for Synchron’s device this could be some number of consecutive movement intents. For intracortical BCI devices with single-neuron resolution, you could imagine more precise neural activity correlated with the intent to begin decoding.
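The consecutive-intent wake-up described above can be sketched as a simple streak counter. This is an illustration of the idea only; the threshold and the shape of the detection stream are assumptions, not Synchron's actual protocol:

```python
def wake_gate(intent_stream, n_consecutive=3):
    """Return the sample index at which decoding should begin,
    i.e. right after n_consecutive movement intents in a row, else None."""
    streak = 0
    for i, intent_detected in enumerate(intent_stream):
        streak = streak + 1 if intent_detected else 0  # any miss resets the streak
        if streak >= n_consecutive:
            return i + 1  # begin decoding on the next sample
    return None
```

Requiring a streak rather than a single detection trades a little latency for robustness against spurious single-sample false positives.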
HarryHirsch|3 years ago
It's surprising that no one has combined a camera with ML whizzo stuff, including predictive text, to speed up the process.
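The predictive-text speed-up suggested here would let the user pick a whole word instead of scanning letter by letter. A toy sketch of the idea, using word frequencies over a hypothetical corpus (the corpus and ranking scheme are my assumptions):

```python
from collections import Counter

def completions(prefix, corpus, k=3):
    """Return the top-k most frequent corpus words starting with prefix,
    to be offered as one-selection shortcuts in the scanning interface."""
    counts = Counter(w for w in corpus if w.startswith(prefix))
    return [word for word, _ in counts.most_common(k)]
```

Even a crude frequency model like this can turn a five-letter word into two selections: one letter plus one completion.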
NotPavlovsDog|3 years ago
I am not listing the manufacturers, since most of them are also involved in military and/or marketing applications and I am done supporting surveillance and murder capitalism.
But the eye-movement interfacing tech is there and becoming increasingly widespread. The major players have pilot studies at hospitals and R&D medical facilities across the world.
With the implant, the concept is that with further development it can be used for connection to locomotion etc. The proposed future potential of direct interfacing is larger, so to speak.
An exoskeleton with direct input from a fully paralyzed wearer could contribute significantly to rehab, to name just one scenario.