
Ask HN: Anyone using a “no hands” setup for programming?

78 points | lazyjones | 11 years ago | reply

It sometimes happens that programmers lose the ability to use their hands due to accidents/illness. Has anyone here used / seen an efficient setup for programming (i.e. entering code in various languages) that works with voice recognition, perhaps combined with eye movement? Please share info (hardware/software/effective "typing" speed). I'm sure it can be implemented better than using standard voice recognition software and text editors.

46 comments

[+] srik|11 years ago|reply
[This talk by Tavis Rudd](https://www.youtube.com/watch?v=8SkdfdXWYaI) about a system he used when his hands were afflicted with RSI is pretty interesting.

Not entirely a no-hands approach, but I can't remember the name of the developer who wrote his first app in the hospital, typing with one or two fingers. He wrote a blog post about his process that was really inspiring.

[+] mistercow|11 years ago|reply
Ever since I first saw that talk, I've been checking his github page to see if he's pushed his code yet. Not that I'm judging; if I had a dime for every project I totally intended to push to github "once I clean up the duct tape", and then didn't, I'd probably have, like, a dollar.
[+] luckystarr|11 years ago|reply
Try Dasher http://www.inference.phy.cam.ac.uk/dasher/

It was developed for people who have only a "one-dimensional" input channel, i.e. who can only move an eye or a single muscle.

Given a custom dictionary you can write quite fast, though I don't know how practical this would be for programming.

[+] Igglyboo|11 years ago|reply
It would probably be quite fast for everything other than writing out strings or naming variables/methods, etc.

Intellisense in most modern IDEs already does most of the custom dictionary work.

[+] JoshTriplett|11 years ago|reply
It also has the advantage of using the same interface to enter text character-by-character and to accept predictions; predicted next text becomes easier to enter with a wider area to hit, while unlikely next text can still be entered.
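
A toy sketch of that mechanism, assuming a simple bigram letter model (the real Dasher uses a PPM language model and continuous zooming, so this is only an illustration of the idea):

```python
from collections import Counter

# Build a crude next-letter model from a tiny corpus.
corpus = "the quick brown fox jumps over the lazy dog the the"
bigrams = Counter(zip(corpus, corpus[1:]))

def target_widths(prev_char, alphabet="abcdefghijklmnopqrstuvwxyz "):
    """Divide the [0, 1) interval among possible next letters, giving
    likelier letters proportionally more screen area to steer into."""
    counts = [bigrams[(prev_char, c)] + 1 for c in alphabet]  # +1 smoothing
    total = sum(counts)
    widths, start = {}, 0.0
    for c, n in zip(alphabet, counts):
        widths[c] = (start, start + n / total)
        start += n / total
    return widths

w = target_widths("t")
# 'h' often follows 't' in the corpus, so its slice of the screen is wider
# than that of an unlikely letter like 'z' -- yet 'z' can still be hit.
```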
[+] tluyben2|11 years ago|reply
Thanks for posting this one; I saw it a while ago, and a more polished version as well. Seems like a good idea to play around with for touch screens.
[+] jkot|11 years ago|reply
A relative of mine has had no hands since birth; he types on a keyboard with his feet. I think most people could learn to do it with some training.
[+] Igglyboo|11 years ago|reply
A kid in my CS program at uni only has one hand and only has 3 fingers on that hand, he's a great programmer and only slightly slower than most people. Most of my time spent programming is actually reading and thinking, not typing, so this comes as little surprise.
[+] smeyer|11 years ago|reply
A friend of mine in college typed primarily with her feet (at about 30 words per minute, if I recall correctly.)
[+] lordbusiness|11 years ago|reply
http://www.looknohands.me/ This New Zealand designer has a great setup that works for her.
[+] melling|11 years ago|reply
She's a web designer. For programming, you'll be using a very different interface. Although it'd be cool if someone built an IDE with more gesture support.
[+] Igglyboo|11 years ago|reply
Slightly Offtopic: Does anyone use/know of a system for "no eyes" (blind) programming? Does anyone know any blind developers?
[+] JoshTriplett|11 years ago|reply
The Debian project has multiple blind developers. One serves on the Debian Technical Committee.

Useful packages include brltty (for Braille terminals), emacspeak, and orca.

[+] panglott|11 years ago|reply
I think blind programmers mostly just use ASCII braille, with standard accessibility equipment like braillers, braille notetakers, and screenreaders.
[+] vemv|11 years ago|reply
I'd hire a secretary and dictate whatever I want to do (not limited to typing: window switching, etc).

A sophisticated multi-monitor setup would help.

[+] melling|11 years ago|reply
Not practical for most people, of course.
[+] joshuapants|11 years ago|reply
There are some solutions if you can use one hand. Matias makes a (really super expensive) one-hand keyboard, and there's also software that allows you to mirror the keyboard in halves with a hotkey. I'm sure there are other options in that realm.
[+] nailer|11 years ago|reply
Not an answer, but a thought:

Code is really just a serialized AST. That tree structure should be modifiable with gestures / voice as much as any other tree structure.
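
As a minimal sketch of "code is a serialized AST", using Python's stdlib `ast` module (`ast.unparse` needs Python 3.9+):

```python
import ast

# Parse source text into a tree, inspect the node types, and
# serialize the tree back to equivalent source text.
source = "def add(a, b):\n    return a + b\n"
tree = ast.parse(source)

print([type(n).__name__ for n in ast.walk(tree)])
print(ast.unparse(tree))  # round-trips to equivalent source
```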

[+] unwind|11 years ago|reply
What are some examples of "any other tree structure" that are readily modifiable by gestures/voice, then? That simply didn't sound very familiar to me, so it made me curious what you're thinking of.
[+] codecamper|11 years ago|reply
Seems like a good idea. Only certain things are type-able at any one time.

IntelliJ has a feature called "smart complete": it types whatever is required at that moment. It must work on this sort of principle.

Seems that you could put voice input into some sort of low-gear mode of just one letter at a time. And that could work well inside IntelliJ with its auto-complete / code gen / refactoring sort of commands.

[+] 0xdeadbeefbabe|11 years ago|reply
Good thought. For example, if I were telling another human what to type I would start saying things like, "the child of this if statement" instead of "open brace tab tab" or "in the parent function change parameter a to b" instead of "up up up up up up..."
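
A hypothetical sketch of that "change parameter a to b" command as a structural edit, using Python's `ast.NodeTransformer` (this naive version is not scope-aware; a real tool would have to be):

```python
import ast

class RenameParam(ast.NodeTransformer):
    """Rename a function parameter and the names that refer to it."""
    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_arg(self, node):  # the parameter in the def line
        if node.arg == self.old:
            node.arg = self.new
        return node

    def visit_Name(self, node):  # uses of the parameter in the body
        if node.id == self.old:
            node.id = self.new
        return node

source = "def scale(a, factor):\n    return a * factor\n"
tree = RenameParam("a", "b").visit(ast.parse(source))
print(ast.unparse(tree))  # one spoken command, one tree rewrite
```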
[+] TeMPOraL|11 years ago|reply
You're not really exposed to the tree structure though, unless you're writing in Lisp.
[+] erikb|11 years ago|reply
It seems as though the need for no-hands coding is quite high, but there's only one guy actually doing it, and only on Windows (or at least needing a VM)?