thelastquestion | 3 years ago

My understanding from their original paper is that Synchron’s device (known as the stentrode, since its electrodes sit on a stent scaffold) decodes only a binary signal for this trial: “intent to move” or “no intent to move” per time window (~1 second). Their paper describes the decoder outputting no click, a short click, or a long click, where a short click is movement intent followed by no movement intent, and a long click is something like 3 consecutive movement intents followed by no movement intent.
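The click logic described above can be sketched as a small run-length decoder. This is a hypothetical illustration, not Synchron’s actual code; the 3-window threshold for a long click is taken from the description above, and the exact boundary conditions are assumptions.

```python
# Hypothetical sketch of stentrode-style click decoding, assuming a stream of
# binary per-window (~1 s) movement-intent decisions. The threshold of 3
# consecutive intents for a long click follows the description above; the
# exact edge handling is an assumption.

def decode_clicks(intents, long_threshold=3):
    """Map a sequence of booleans (movement intent per window) to click events."""
    clicks = []
    run = 0  # consecutive windows with movement intent
    for intent in intents:
        if intent:
            run += 1
        else:
            # A run of intents ends: emit a short or long click.
            if run >= long_threshold:
                clicks.append("long")
            elif run >= 1:
                clicks.append("short")
            run = 0
    return clicks

# One intent, rest, three intents, rest:
print(decode_clicks([True, False, True, True, True, False]))  # -> ['short', 'long']
```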

The person types either by using eye tracking to move the cursor and the BCI device to click, or via a custom interface that cycles through characters one at a time, with the BCI device used only to confirm the current character.
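The second mode is essentially a one-switch scanning keyboard. Here is a minimal sketch of that idea, with an invented frequency-ordered alphabet and a simulated user; none of this reflects Synchron’s actual interface.

```python
# Illustrative one-switch scanning keyboard (not Synchron's actual UI):
# characters are offered one at a time and a single BCI "yes" selects the
# current one. The frequency-ordered alphabet is an assumption.
import itertools

ALPHABET = "ETAOINSHRDLUCMFWYPGVBKQJXZ "

def scan_type(select_fn, n_chars):
    """Cycle through ALPHABET; select_fn(ch) returns True on a BCI click."""
    typed = []
    while len(typed) < n_chars:
        for ch in itertools.cycle(ALPHABET):
            if select_fn(ch):
                typed.append(ch)
                break  # restart the scan for the next character
    return "".join(typed)

def make_clicker(word):
    """Simulated user who clicks when the offered character matches the target."""
    it = iter(word)
    state = {"want": next(it)}
    def click(ch):
        if ch == state["want"]:
            state["want"] = next(it, None)
            return True
        return False
    return click

print(scan_type(make_clicker("HI"), 2))  # -> "HI"
```

With linear scanning the cost per character grows with the alphabet size, which is why people bring up binary trees further down the thread.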

So the decoding of intent isn’t at the level your thought experiment is concerned with, but in general you could definitely implement something that decodes an initial intent before subsequent recording (e.g., a “wake-up” gesture for the device). For Synchron’s device this could trivially be some number of consecutive movement intents. For intracortical BCI devices with single-neuron resolution, you could imagine more precise neural activity correlated with the intent to begin decoding.
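The trivial wake-up scheme for the stentrode could look something like this. Purely hypothetical: the choice of 5 consecutive windows is an arbitrary illustration, not from the paper.

```python
# Hypothetical wake-up check, assuming the same binary per-window intent
# stream: full decoding only starts after n consecutive windows with
# movement intent. n=5 is an illustrative choice, not from the paper.

def wake_detected(intents, n=5):
    """Return True once n consecutive movement intents have been observed."""
    run = 0
    for intent in intents:
        run = run + 1 if intent else 0
        if run >= n:
            return True
    return False

print(wake_detected([True, True, False, True, True, True, True, True]))  # -> True
```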

ad404b8a372f2b9 | 3 years ago

God I hope someone implements a binary tree for these poor people, I can't imagine how frustrating it must be to type like that.

HarryHirsch | 3 years ago

There is a simpler device: a glass plate with the alphabet on it, held between the patient and the other person. Humans are extraordinarily good at following someone's glance, and this is how a quadriplegic patient can spell out words. Franz Rosenzweig used such a thing in his last years.

It's surprising that no one has used a camera plus ML whizzo stuff including predictive text to speed up the process.

thelastquestion | 3 years ago

Haha, for typical use with Synchron’s device they are using eye-tracking. The BCI-only mode is just for research purposes/baseline. It’s also just what’s in the paper, they may implement other UIs in practice.

cma | 3 years ago

Couldn't something pretty close to that be done with eye movement and blinking?

NotPavlovsDog | 3 years ago

There are a variety of eye-tracking communication devices on the market.

I am not listing the manufacturers, since most of them are also involved in military and/or marketing applications and I am done supporting surveillance and murder capitalism.

But the eye-movement interfacing tech is there and becoming more and more widespread. The major players have pilot studies at hospitals and R&D medical facilities across the world.

With the implant, the concept is that with further development it can be connected to locomotion and the like. The proposed future potential of direct interfacing is larger, so to speak.

An exoskeleton with direct input from a fully paralyzed wearer could significantly contribute to rehab, to name just one scenario.