asgraham | 2 months ago
Have you played at all with thought-to-voice? Intuitively I’d think EEG readout would be more reliable for spoken rather than typed words, especially if you’re not controlling for keyboard fluency.
clemvonstengel | 2 months ago
It does generalize between typed and spoken words: the decoder does much better on spoken decoding if we've also trained on the typing data, which is what we were hoping to see.
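To make the claim concrete, here is a minimal sketch of the kind of transfer evaluation being described: train a decoder on spoken-trial data alone, then again on spoken plus typing data, and compare accuracy on held-out spoken trials. Everything here is hypothetical and synthetic; the nearest-centroid decoder and the Gaussian "EEG features" are stand-ins for whatever model and data the actual study used.

```python
# Hypothetical illustration of cross-modality transfer: does adding "typing"
# EEG trials improve decoding of "spoken" trials? All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

def make_trials(n, n_classes=4, dim=16, noise=1.0, shift=0.0):
    """Synthetic EEG features: one Gaussian cluster per word class.
    `shift` crudely models a domain gap between typing and speaking."""
    centers = np.eye(n_classes, dim) * 3.0 + shift
    y = rng.integers(0, n_classes, size=n)
    X = centers[y] + noise * rng.standard_normal((n, dim))
    return X, y

class NearestCentroid:
    """Toy decoder: classify a trial by its nearest class centroid."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def score(self, X, y):
        d = ((X[:, None, :] - self.centroids_[None]) ** 2).sum(axis=-1)
        return float((self.classes_[d.argmin(axis=1)] == y).mean())

# Realistic regime: plenty of typing data, only a little spoken data.
X_type, y_type = make_trials(2000, shift=0.0)
X_spk_tr, y_spk_tr = make_trials(40, shift=0.3)
X_spk_te, y_spk_te = make_trials(500, shift=0.3)

spoken_only = NearestCentroid().fit(X_spk_tr, y_spk_tr).score(X_spk_te, y_spk_te)
combined = NearestCentroid().fit(
    np.vstack([X_type, X_spk_tr]),
    np.concatenate([y_type, y_spk_tr]),
).score(X_spk_te, y_spk_te)
```

If the shared structure across modalities outweighs the domain gap, `combined` should match or beat `spoken_only`; the commenter's result is the real-data analogue of that comparison.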
Terretta | 2 months ago
Both of these modes are incredibly slow thinking. Consciously shifting from thinking in concepts to thinking in words is like slamming on the brakes for a school zone on the autobahn.
I've gathered that most people think in words they can "hear in their head", that most people can "picture a red triangle" and literally see one, and so on. Many multilingual folks say they think or dream in a particular language, and know which one it is.

Meanwhile, some people think less verbally or less visually, perhaps not verbally or visually at all, with no language (no words) involved.
A blog post shared here last month discussed a person trying to access this conceptual mode, which he thinks is like "shower thoughts" or physicists solving things in their heads while staring into space, except "under executive function". He described most of his thoughts as words he can hear in his head, with these concepts more like vectors. I agree with that characterization.
I'm curious what percentage of the folks you've scanned may be in this non-word mode, or whether the text and voice requirement forces everyone into words.
asgraham | 2 months ago