EEG--reading signals from the brain--is pretty hard. But EMG--capturing muscle contractions in the arm--produces much cleaner data. This can then be fed into a variety of machine learning algorithms to map high-fidelity time series data to discrete signals or continuous gestures for which we have appropriate training data.
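A minimal sketch of that kind of pipeline, in Python with NumPy (the window lengths, RMS features, and nearest-centroid classifier are illustrative assumptions, not any particular product's method): slice the multi-channel EMG into overlapping windows, reduce each window to per-channel features, and map features to discrete gesture labels.

```python
import numpy as np

def window_features(emg, fs=1000, win_ms=200, hop_ms=50):
    """Slice multi-channel EMG (channels x samples) into overlapping
    windows and compute RMS per channel -- a simple, common feature
    for gesture decoding."""
    win = int(fs * win_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    feats = []
    for start in range(0, emg.shape[1] - win + 1, hop):
        seg = emg[:, start:start + win]
        feats.append(np.sqrt((seg ** 2).mean(axis=1)))  # RMS per channel
    return np.array(feats)

class NearestCentroid:
    """Toy gesture classifier: one feature centroid per labelled gesture."""
    def fit(self, X, y):
        self.labels = sorted(set(y))
        self.centroids = np.array(
            [X[np.array(y) == l].mean(axis=0) for l in self.labels])
        return self

    def predict(self, X):
        # squared distance from each window's features to each centroid
        d = ((X[:, None, :] - self.centroids[None]) ** 2).sum(axis=2)
        return [self.labels[i] for i in d.argmin(axis=1)]
```

In practice you would substitute better features (waveform length, zero crossings, spectral bands) and a stronger classifier, but the shape of the problem — windowed time series in, discrete labels out — stays the same.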
Can you provide any technical detail about what is unique or novel about what your company does? Neither the Wired article nor your company webpage has any useful information on what might differentiate you from the countless EMG devices and hobbyist setups out there.
Can we actually visit your office/research labs in NYC? One of my bosses loves this stuff and is always looking down the road. We would likely use it for advertising at our company or understanding how consumers interact with our products.
It's not just you. That title implies that brain signals per se aren't “useful information” to begin with. I wonder how any of us are able to read that headline at all.
If you find this topic interesting, Neuralink is hiring. We’re looking for a lot of different backgrounds across applied physics, biomedical engineering, software and hardware. Though Neuralink sounds cool - I know many people are skeptical - the reality is even cooler.
Especially if you’re great at firmware, robotics control software, or computer vision, get in touch! Either through the links on the website, or there’s an email in my HN profile.
Neuralink is certainly on my radar as a company to watch, but I thought it was more in an invite-only, Ph.D.-heavy theory stage than a hire-and-build stage. The idea of using an acoustic link for transmission through a mostly-water medium is very clever, even if I worry about the safety margins of the power density involved in transmitting that much data.
I'm not sure how this would work in practice. Thoughts are incredibly noisy. Any mechanism that could filter out the noise can basically decipher intent. I'd argue intent deciphering is the actual problem these devices are trying to solve (e.g. I wish I didn't have to type; I wish the computer just knew what I wanted to type, not that I wish the computer simply typed out what I thought). Solutions like "oh, just keep thinking of the same thing over and over again" are highly error-prone and will definitely be slower than typing. Say you wanted to type "[the quick brown brown quick the quick brown quick the brown]": a strategy of repeatedly thinking of the phrase to be typed will be error-prone, regardless of any ML techniques you use, simply because it cannot be known in advance what you wanted to type unless you knew the intent.
Perhaps it'll pick it up as "the quick brown", or "quick brown the", and so forth.
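The fragility of "just think it repeatedly" decoding can be seen in a toy per-position majority vote (a hypothetical sketch; real decoders are far more sophisticated):

```python
from collections import Counter

def vote_decode(attempts):
    """Decode repeated noisy 'thought' transcripts by per-position
    majority vote. Hidden assumption: every attempt must align
    word-for-word. A single inserted or dropped word shifts all later
    positions and breaks the vote (zip also silently truncates to the
    shortest attempt)."""
    columns = zip(*(a.split() for a in attempts))
    return " ".join(Counter(col).most_common(1)[0][0] for col in columns)
```

With substitution errors only, the vote recovers the phrase; drop a single word from one attempt and every later column misaligns, which is exactly the "quick brown the" failure mode described above.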
---
Another problem can be illustrated below:
Say you had your brain device on now. You're ready to reply to this.
Horse poop.
Oh, I guess you read the above and now have "horse poop" typed. Well, you can just remove that ---
I don't disagree with your analysis, but I think you're making the assumption that brain signals carry only the words themselves.
So instead of "that is stupid", "add comment", "you're wrong on the internet", "submit", I think we might be able to get more information about the context of the words:
(:commentary "that is stupid")
(:request-interaction "add comment") ; from which the AI figures out it is a button on the screen
(:request-input "You're wrong on the internet")
(:request-interaction "submit")
In essence, maybe it is possible to detect more than just words and understand the context, from the signals alone.
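A hypothetical sketch of what acting on such context-tagged output might look like (the tag names follow the snippet above; the types and dispatch logic are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class DecodedThought:
    kind: str   # "commentary", "request-input", or "request-interaction"
    text: str

def dispatch(thought):
    """Route a decoded thought by its context tag rather than its raw
    text: inner commentary is dropped, dictation becomes typed text,
    and interaction requests are matched to on-screen controls."""
    if thought.kind == "commentary":
        return None                          # stray inner monologue: ignore
    if thought.kind == "request-input":
        return ("type", thought.text)
    if thought.kind == "request-interaction":
        return ("click", thought.text)
    raise ValueError(f"unknown thought kind: {thought.kind}")
```

The point is that once the decoder labels *why* a phrase was thought, the "horse poop" problem becomes a filtering rule rather than a catastrophe.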
Your horse poop example has an equivalent for voice interfaces. A naive implementation might get confused by its own output and interpret it as input. But that problem can be solved by predicting the coupling between the two channels, and subtracting that prediction from the input to get a cleaner signal.
The same procedure would be more difficult to implement for thought-based interfaces, though, because you need to predict the brain's reactions to filter the signal. Maybe you could instead use a non-verbal thought to activate the command interface, so that it doesn't get triggered accidentally.
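For the voice-interface case, the predict-and-subtract idea described above is classic echo cancellation. A minimal LMS adaptive filter sketch (tap count and step size are illustrative):

```python
import numpy as np

def lms_echo_cancel(far_end, mic, n_taps=32, mu=0.01):
    """LMS adaptive filter: learn the speaker-to-microphone coupling
    (the echo path) online, predict the echo from the far-end signal,
    and subtract that prediction from the mic input."""
    w = np.zeros(n_taps)                    # estimated echo-path taps
    padded = np.concatenate([np.zeros(n_taps - 1), far_end])
    cleaned = np.empty_like(mic, dtype=float)
    for i in range(len(mic)):
        x = padded[i:i + n_taps][::-1]      # recent far-end samples, newest first
        echo_hat = w @ x                    # predicted echo at the mic
        e = mic[i] - echo_hat               # residual: near-end signal + error
        w += mu * e * x                     # gradient step toward the true path
        cleaned[i] = e
    return cleaned
```

This works because the system has a clean copy of its own output; the thought-interface analogue would need a model of the brain's reaction instead, which is the hard part.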
I think it might be easier to type on a virtual keyboard with your mind than it is to dictate with thought. When you move your hand and fingers there is no conscious effort or thought; will is translated into action. Our body is an interface to the physical world, and we currently make the jump from physical to digital. Advanced technology, I imagine, would simply eliminate the jump and make digital interfaces feel like physical ones.
I predict that BMIs are going to suffer from the same problem as AI, where the applications that work in the short term get wildly overestimated because they are confused with the long term, where you create a singularity. If you had a BMI that could read/write the entire brain at neuron-level resolution, you could create computer backups of people, and if hardware were fast enough you could create superhuman intelligence. If you just have cochlear implants and prosthetics, the best case is a world where nobody is impaired, which is good, but still very far from a singularity. The Neuralink version is that if you can do telepathy, that might be valuable in some situations, but it will probably just be like faster email until the computers become smarter than us.
I once met an ex-apple engineer who created a hat that would read your thoughts and play the song you were thinking about. It only worked for certain people and had a limited playlist to choose from but it was really cool watching your "brainwaves" on a screen and then thinking "Daft Punk Get Lucky" and having it play on the speakers.
What if you could detect the vocalization somehow instead of relying on a very noisy data source (brain signals), which I see becoming a roadblock... subvocalization would be like being able to chat without typing... you would still be interacting with a UI that will make sure you don't give out your bank card number, etc.
Maybe you could even hold up your phone and it would beam some sort of ultrasound or laser to detect tiny movements in the larynx (I have no idea what I'm talking about), but it seems like there's a patent in the works that physically attaches sensors...
Back when I had to write a lot for my courses, I was wondering the same about the usefulness of EEGs [1]. At times all I wanted was to lie on my bed, point a projector to the ceiling, and write.
Alas, the tech/understanding of neuroscience is just not here yet, but maybe it will be in a few decades?
There's a lot more in the way of interesting discussion in the link if you would like to read more.
EEGs (at least non surgical ones) provide only the highest level information about what's going on inside of brains. It's a bit like looking at a city as you're flying 10,000 feet above it. You can figure out when it's rush hour, but you're not going to be able to tell what the most popular restaurant is.
The most sophisticated EEG systems actually train your brain to use the EEG, not the other way around.
This is one of the first broad-audience articles I've seen that actually acknowledges neuronal firing rates as an important practical consideration.
Usually it's simplified in explanations as "binary", on or off. This isn't wrong for any instant in time (and is sometimes good enough for conceptual models), but in reality the firing rate varies as a function of the stimulus. Analog, if you like...
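A toy model of that binary-events-with-analog-rate idea (all numbers illustrative): each time step is a binary spike/no-spike draw, but the spike probability tracks the stimulus, so the average firing rate is an analog function of stimulus intensity.

```python
import numpy as np

def poisson_spikes(stimulus, max_rate=100.0, dt=0.001, seed=0):
    """Toy rate-coding model: at any instant the neuron is binary
    (spike or no spike), but the firing probability per time step
    scales with stimulus intensity, so the rate averaged over time
    encodes an analog quantity."""
    rng = np.random.default_rng(seed)
    rate = np.clip(stimulus, 0.0, 1.0) * max_rate     # firing rate in Hz
    return (rng.random(len(stimulus)) < rate * dt).astype(int)
```

Sampling two constant stimuli of different intensities yields spike trains that are individually all-or-nothing yet differ clearly in mean rate.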
It wouldn't be faster: coding is not input-constrained but thinking-constrained. So just being able to connect your thoughts to the input method wouldn't change much; the computer really needs to augment those thoughts instead. The trick would be using this tech to create a much tighter feedback loop with the computer, but just having it isn't sufficient.
My main problem with coding is not the input, it's the visual feedback. I hope that before the end of my career I get to write code which is not just a long series of ASCII text files.
The big question, for me at least, is: are these signals uniform, or do they have some kind of similarity for specific concepts or actions from person to person?
If every brain has its own language, the effort is an order of magnitude higher.
Some aspects of the signals are individual-dependent, some are consistent across individuals (for some definition of individual-dependent and consistent). I know that's vague, but it's accurate.
mizzao | 8 years ago
Want to find out more? Come visit our offices in NYC!
amelius | 8 years ago
raizinho | 8 years ago
kensai | 8 years ago
"The innovation lies in picking up EMG more precisely—including getting signals from individual neurons—than the previously existing technology..."
EMG by definition captures muscle activity. How can you claim it picks up the signals of neurons, let alone individual ones?!
(only by proxy)
junkcollector | 8 years ago
chaosbutters | 8 years ago
pavel_lishin | 8 years ago
I'd love to! But I can't figure out where you're located.
unknown | 8 years ago
[deleted]
starpilot | 8 years ago
tempodox | 8 years ago
frisco | 8 years ago
junkcollector | 8 years ago
DrScump | 8 years ago
hprotagonist | 8 years ago
monocasa | 8 years ago
_m8fo | 8 years ago
"add comment"
"submit"
Too late.
omeid2 | 8 years ago
intended | 8 years ago
“Fake news” (the original term, not the co-opted one) is an example of a society-affecting convenience that we have no answer for.
Messing with intent is another such area.
yorwba | 8 years ago
duopixel | 8 years ago
Vanit | 8 years ago
Why would you need to even think in terms of inputs or buttons? Why even use words when you could post or consume thoughts directly?
YaxelPerez | 8 years ago
Presumably a device like that would use some heuristics to guess the correct words, like your phone's autocorrect.
taneq | 8 years ago
OK GOOGLE BUY 500 GIANT PINK DILDOS
tvural | 8 years ago
Amelia45 | 8 years ago
[deleted]
anonfunction | 8 years ago
lsseckman | 8 years ago
senatorobama | 8 years ago
egocodedinsol | 8 years ago
https://www.sciencedirect.com/science/article/pii/S095943881...
Santosh83 | 8 years ago
westmeal | 8 years ago
pwaai | 8 years ago
https://en.wikipedia.org/wiki/Subvocal_recognition
https://www.youtube.com/watch?v=xyN4ViZ21N0
stevenhuang | 8 years ago
[1]: https://psychology.stackexchange.com/questions/9594/are-ther...
hackcasual | 8 years ago
lamename | 8 years ago
dewiz | 8 years ago
seanmcdirmid | 8 years ago
simias | 8 years ago
vadimberman | 8 years ago
egocodedinsol | 8 years ago
vt100 | 8 years ago
pokemongoaway | 8 years ago