It sometimes happens that programmers lose the ability to use their hands due to accident or illness. Has anyone here used or seen an efficient setup for programming (i.e. entering code in various languages) that works with voice recognition, perhaps combined with eye movement? Please share info (hardware/software/effective "typing" speed). I'm sure it can be done better than standard voice recognition software paired with an ordinary text editor.
srik | 11 years ago
Not an entirely no-hands approach, but I can't seem to remember the developer who wrote his first app in the hospital, typing with one or two fingers. He wrote a blog post about his process that was really inspiring.
wcbeard10 | 11 years ago
Works well for me, but with some friction from reliance on a Windows VM.
luckystarr | 11 years ago
It was developed for people who have only a "one-dimensional" input channel, i.e. who can move only an eye or a single muscle.
Given a custom dictionary you can write quite fast, though I don't know how practical it would be for programming.
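A minimal sketch of the custom-dictionary idea: with words weighted by frequency, a one-dimensional selector only needs to narrow down a ranked list of completions. (Illustrative only; the dictionary contents and frequencies below are made up, and tools like Dasher actually use a character-level language model rather than word completion.)

```python
def rank_completions(prefix, dictionary):
    """Return dictionary words starting with `prefix`, most frequent first."""
    matches = [(word, freq) for word, freq in dictionary.items()
               if word.startswith(prefix)]
    return [word for word, _ in sorted(matches, key=lambda wf: -wf[1])]

# A hypothetical programming-oriented dictionary with usage counts.
code_dict = {"print": 50, "private": 20, "primary_key": 5, "return": 80}

print(rank_completions("pri", code_dict))  # ['print', 'private', 'primary_key']
```

With a dictionary tuned to one codebase's identifiers, most selections would need only one or two narrowing steps.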
Igglyboo | 11 years ago
IntelliSense in most modern IDEs already does most of the custom-dictionary work.
Estragon | 11 years ago
http://raphaelhertzog.com/2011/06/24/people-behin-debian-sam...
JoshTriplett | 11 years ago
Useful packages include brltty (for Braille terminals), emacspeak, and orca.
vemv | 11 years ago
A sophisticated multi-monitor setup would help.
melling | 11 years ago
http://thespanishsite.com/public_html/org/ergo/programming_b...
I haven't gotten around to setting up the Windows VM on my Mac and trying it.
nailer | 11 years ago
Code is really just a serialized AST. That tree structure should be as modifiable with gestures/voice as any other tree structure.
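To illustrate the point, here's a sketch using Python's standard `ast` module: parse source into a tree, rename a function at the AST level, and serialize back. This is the kind of structural edit a voice command like "rename function foo to bar" could map onto. (A real voice-coding system would of course need dictation and grammar layers on top; this only shows that the edit itself needs no text offsets.)

```python
import ast

source = "def foo(x):\n    return x + 1\n"
tree = ast.parse(source)

# Structural edit: rename the function node directly in the tree.
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef) and node.name == "foo":
        node.name = "bar"

# Serialize the modified AST back to source (ast.unparse needs Python 3.9+).
print(ast.unparse(tree))
```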
codecamper | 11 years ago
IntelliJ has a feature called "smart completion": it types whatever is required at that moment, so it must work on this sort of principle.
It seems you could put voice input into a low-gear mode of one letter at a time, and that could work well inside IntelliJ with its auto-complete, code-generation, and refactoring commands.
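The "low-gear" spelling mode above can be sketched as a mapping from spoken words to single letters. A NATO-style alphabet is assumed here purely for illustration (voice-coding tools such as Talon use their own shorter alphabets); the resulting buffer would then be handed to the editor's completion engine.

```python
# Spoken-word-to-letter alphabet (NATO phonetic, truncated for brevity).
ALPHABET = {"alpha": "a", "bravo": "b", "charlie": "c", "delta": "d",
            "echo": "e", "foxtrot": "f", "golf": "g", "hotel": "h"}

def spell(spoken_words):
    """Turn a sequence of recognized alphabet words into typed letters."""
    return "".join(ALPHABET[w] for w in spoken_words)

print(spell(["delta", "echo", "foxtrot"]))  # 'def' -- then trigger completion
```

Single-letter words are far easier for a recognizer to distinguish than raw letter names ("b" vs. "d" vs. "e"), which is why this kind of alphabet is standard in voice-coding setups.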