
Back to the Future of Handwriting Recognition

142 points | jabagawee | 7 years ago | jackschaedler.github.io

37 comments


rayiner|7 years ago

Handwriting recognition is a great example of technology whose development seems to have plateaued before it became "good enough." Stroke-based recognition has been in development for half a century now, but my iPad Pro still makes errors at least a couple of times per line, which is enough to make it pretty much useless unless you're writing only for your own later consumption. That and voice recognition. It's shocking how bad Android and iOS still are at that, even after decades of work on voice recognition technology.

pipio21|7 years ago

> I think it’s worth asking why anyone in their right mind should care about mid-century handwriting recognition algorithms in 2016.

Lots of people care, especially in Asia (Chinese and Japanese). It is just that the problem is incredibly hard.

We put 5 very smart people on it for a year, and it was totally impossible to meet people's expectations, especially people like doctors taking notes fast (and ugly).

We thought the market was instead in creating mind maps or something similar, since people could write more slowly and carefully.

But people write a double-u and expect the computer to see an "m". With deep learning it is possible, but extremely flimsy.

snowwrestler|7 years ago

This is a cool exploration of technology, and I don't want to take away from that.

> The program was efficient enough to run in real-time on an IBM System/360 computer, and robust enough to properly identify 90 percent of the symbols drawn by first-time users.

I just want to point out that 90% accuracy is, from a user's point of view, awful handwriting recognition performance. It means you will be correcting on average about 10 words per paragraph! Even 99% accuracy is not nearly good enough to give people a sense that the computer is good at handwriting recognition.

I also want to point out the difficulty and danger in interpreting strokes when doing handwriting recognition.

In the last demo box, try writing a capital Y without lifting the pen. You'll have to go "up and down" one or both upper branches. Because of this, the recognizer will call it a K, A, or N even though it is obviously a Y when you're done.

This demo is constrained to only using one stroke per letter, but systems that permit multiple strokes still get into trouble when the strokes don't match what they are expecting--for example if you draw an X using 4 individual strokes outward from a central point.

This also happens with words. In Microsoft's handwriting recognition in Office in the early 2000s, writing the letters of a word out of order completely borked the recognition. For example writing "xample" and then going back and adding an "e" at the beginning would not produce a recognized word of "example."

My point with all of this is that there is a reason you probably don't do all your computing with natural handwriting. It's a surprisingly difficult problem. Users do not expect it to matter how they form letters and words on the page. And they have very low tolerance for correcting computer mistakes.

defgeneric|7 years ago

> This demo is constrained to only using one stroke per letter, but systems that permit multiple strokes still get into trouble when the strokes don't match what they are expecting--for example if you draw an X using 4 individual strokes outward from a central point.

Arguably, an X drawn this way should NOT be recognized as an X--that's not how an X is spelled.

If the task is communicating with the computer, then recognition of the gesture is a valid approach. Just as there are conventions regarding the spelling of words, there are conventions involved in the formation of letters. Why not use them? It would even seem incorrect to leave these out.

scotu|7 years ago

For many of the examples you gave, I think this could be solved through an autocomplete-style correction that, sure, isn't perfect, but seems good enough for smartphone users: "xample" is not a word, so it's probably a typo, so it's probably "example"...

You could also keep multiple interpretations of a word pending (a text search for any of them would take you there) and eventually ask the user to disambiguate if they want to. I assume this would be an acceptable solution for non-dictionary words too...
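The autocomplete-style correction described here can be sketched with the standard library's difflib; this is purely an illustrative toy (a real recognizer would use a proper language model), with a made-up vocabulary:

```python
import difflib

def correct(word, vocabulary):
    """Return the closest dictionary word, or the input if nothing is close.

    difflib's similarity ratio stands in for a real recognizer's
    language model: "xample" is not in the vocabulary, but it is
    close enough to "example" to be corrected automatically.
    """
    matches = difflib.get_close_matches(word, vocabulary, n=1, cutoff=0.6)
    return matches[0] if matches else word

vocab = ["example", "sample", "apple"]   # toy dictionary
print(correct("xample", vocab))          # -> example
```

Keeping the top n matches instead of n=1 would give the pending multiple interpretations to disambiguate later.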

unwind|7 years ago

> I just want to point out that 90% accuracy is, from a user's point of view, awful handwriting recognition performance. It means you will be correcting on average about 10 words per paragraph!

Wait, what? Doesn't that imply that a paragraph needs to have 100 words in it, in order for 10 of them to be recognized wrong at 90% success rate? That seems super-long, anyway.

My stats are really rusty, perhaps that's just one of those unintuitive cases that confuse people like me.
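For what it's worth, the arithmetic roughly works out if the article's 90% figure is per symbol rather than per word, since errors compound across each word. A back-of-envelope check, assuming an average word length of 5 characters (an assumption, not a figure from the article):

```python
# Per-symbol accuracy compounds across the characters of a word.
sym_acc = 0.90        # per-symbol recognition rate from the article
avg_word_len = 5      # assumed average word length in characters

word_acc = sym_acc ** avg_word_len          # ~0.59
errors_per_25_words = 25 * (1 - word_acc)   # ~10 mis-recognized words

print(round(word_acc, 2), round(errors_per_25_words, 1))
```

So even a modest 25-word paragraph would need roughly 10 corrections, no 100-word paragraph required.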

taeric|7 years ago

I don't really disagree, but I think you overstate it to an extent. For most people, even 99% accuracy probably overstates how well their phone's input system captures what they type. There is a reason people have clever email footers like "written on a phone."

That is to say, people have a higher tolerance for things that are within expected norms of their environment. Ideally, we want no corrections. But, having to do them constantly for a time will quickly desensitize people to this. (And yes, this is currently just an assertion of mine, I don't have data backing it. Just some anecdotes.)

pkaye|7 years ago

This is kind of interesting. I had a thought about how to approach the handwriting recognition problem a few years back, and surprisingly I thought of this curvature-based approach too. I never implemented it (too lazy to try...) but it's cool to see how well something like that might work.
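A minimal sketch of the kind of curvature signature the article builds on, assuming a stroke arrives as a list of (x, y) points sampled along the pen path (illustrative only; the article's actual algorithm differs in its details):

```python
import math

def turning_angles(stroke):
    """Signed turning angle at each interior point of a stroke.

    stroke: list of (x, y) points sampled along the pen path.
    The sum of the angles approximates the stroke's total curvature,
    e.g. roughly 2*pi for a full counterclockwise loop.
    """
    angles = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(stroke, stroke[1:], stroke[2:]):
        a1 = math.atan2(y1 - y0, x1 - x0)   # heading of first segment
        a2 = math.atan2(y2 - y1, x2 - x1)   # heading of second segment
        # wrap the difference into (-pi, pi]
        d = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi
        angles.append(d)
    return angles
```

Because the angles are signed, the running sum distinguishes a clockwise loop from a counterclockwise one, which is exactly the kind of feature a curvature-based recognizer keys on.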

blattimwind|7 years ago

The linked demo is by far the most impressive thing I've seen all week. I wish a certain Microsoft chart editor was as easy and unfinicky to use as this demo from 1966 (52 years ago), and that's still one of the better editors out there.

taneq|7 years ago

Comparing this with the Graffiti system on my old (2000-ish) Palm Pilot, this is somewhat more reliable even on a first attempt than that was after I'd made a concerted effort to learn it. Very cool!

Edit: I think where the Afterword says "inputting text with a stylus is likely slower than touch typing", they're forgetting that we still don't have a really acceptable way of inputting text on mobile devices. Swype and its ilk are close, but still hamfisted at times.

symlock|7 years ago

I missed it the first time, but the article has linked source code (github.com/jackschaedler/handwriting-recognition) for all the D3.js demos that is worth a read.

interfixus|7 years ago

All this constant talk of AI and singularities and whatnot.

Reality check: Our machines do not yet accurately manage simple reading tasks.

watmough|7 years ago

I did something like this in Visual Basic and submitted it to PC PLUS in the UK back in the early '90s.

It was (yay!) published as recognit.bas (VB) and I'd be really happy if someone still has a copy.

It recognized only digits, but its basis of operation was similar to the linked article.

EliasY|7 years ago

I wonder if it would be possible to use Hinton's idea of local features (where a 3 is recognized as an E in a 180° rotation map and a W in a 90° rotation map) to make the recognition partially rotation invariant...

singularity2001|7 years ago

So much time spent on manual feature engineering that could be picked up implicitly by RNNs.
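To illustrate the alternative being suggested: a recurrent network can consume the raw pen samples directly, e.g. a (dx, dy, pen_down) triple per timestep, and learn its own features. A hand-rolled Elman-style step, purely a sketch with arbitrary weights rather than a trained recognizer:

```python
import math

def rnn_step(x, h, W_xh, W_hh):
    """One Elman-style RNN step: h' = tanh(W_xh @ x + W_hh @ h).

    x: raw pen features for one sample, e.g. (dx, dy, pen_down)
    h: previous hidden state
    W_xh, W_hh: weight matrices given as lists of rows
    """
    return [
        math.tanh(sum(w * v for w, v in zip(row_x, x)) +
                  sum(w * v for w, v in zip(row_h, h)))
        for row_x, row_h in zip(W_xh, W_hh)
    ]

def encode_stroke(samples, W_xh, W_hh, hidden=2):
    """Fold a whole stroke into a fixed-size hidden state."""
    h = [0.0] * hidden
    for x in samples:
        h = rnn_step(x, h, W_xh, W_hh)
    return h
```

In practice the weights would be learned end-to-end, so the curvature-like features discussed elsewhere in the thread emerge from training instead of being engineered by hand.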