loxias | 1 year ago
For example, maybe I'm taking notes involving words, simple math, and a diagram. Underline a key phrase and "the device" expands on the phrase in the margin. Maybe the device is diagramming, and I interrupt and correct it, crossing out some parts, and it understands and alters.
Sorry, I know this is vague — I don't know precisely what I mean — but I do think the combination of text (via some sort of handwriting recognition), stroke gestures, and a small iconography language, all backed by LLMs, probably opens up all sorts of new user interaction paradigms that I (and others) might be too set in our ways to think of immediately.
I think there's a "mother of all demos" moment potentially coming soon with stuff like this, but I am NOT a UX designer and can't quite imagine it clearly enough. Maybe you can.
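The underline-to-expand and cross-out-to-correct loop above could be sketched roughly like this. Everything here is invented for illustration — `recognize_strokes`, `expand_phrase`, and the data shapes are all hypothetical stand-ins for a real handwriting recognizer and a real LLM call:

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    anchor_phrase: str  # the underlined phrase this note attaches to
    text: str           # the expansion shown in the margin

@dataclass
class Page:
    margin: list = field(default_factory=list)

def recognize_strokes(strokes):
    """Stub gesture classifier. A real one would inspect stroke geometry;
    here we just read a pre-labeled dict."""
    kind = strokes.get("kind")
    if kind in ("underline", "cross_out"):
        return (kind, strokes["covered_text"])
    return ("ink", None)

def expand_phrase(phrase):
    """Stub for the LLM call that elaborates on an underlined phrase."""
    return f"[expansion of: {phrase}]"

def handle_strokes(page, strokes):
    gesture, target = recognize_strokes(strokes)
    if gesture == "underline":
        # Underlining a phrase asks the device to expand it in the margin.
        page.margin.append(Annotation(target, expand_phrase(target)))
    elif gesture == "cross_out":
        # Crossing out retracts the corresponding margin note, i.e. the
        # "I interrupt and correct it" part of the interaction.
        page.margin = [a for a in page.margin if a.anchor_phrase != target]
    return page
```

The interesting design question is that the gesture vocabulary itself becomes the UI: the same ink stream carries content, commands, and corrections, and the recognizer has to disambiguate them.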
awwaiid | 1 year ago