item 47096893


mncharity | 9 days ago

There's an old idea of adaptive media. Imagine a video drama composed of a graph of clips, like an old "choose your own adventure" book ("Do you X? If yes, goto page 45"). With gaze tracking, one can go "hmm, the viewer is more focused on character A than B... so we'll serve clips and subplots with more A".
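The clip graph above can be sketched as a tiny lookup structure. Everything here is illustrative: the clip names, the branch labels, and the idea of reducing gaze data to a per-clip "share of attention on character A" are all assumptions, not anything a real system would be limited to.

```python
# Toy clip graph: each node maps a "most-watched character" to the next clip.
# All names and the 0.5 attention threshold are made up for illustration.
clips = {
    "intro":    {"A": "a_plot_1", "B": "b_plot_1"},
    "a_plot_1": {"A": "a_plot_2", "B": "b_plot_1"},
    "b_plot_1": {"A": "a_plot_1", "B": "b_plot_2"},
    "a_plot_2": {"A": "end", "B": "end"},
    "b_plot_2": {"A": "end", "B": "end"},
    "end": {},  # terminal node: no outgoing branches
}

def next_clip(current, gaze_share_a):
    """Pick the branch for whichever character drew more gaze time."""
    branches = clips[current]
    if not branches:
        return None  # story is over
    return branches["A"] if gaze_share_a >= 0.5 else branches["B"]

# Walk the graph with fake per-clip gaze shares for character A.
clip = "intro"
for share in (0.7, 0.3, 0.8):
    clip = next_clip(clip, share)
print(clip)  # intro -> a_plot_1 -> b_plot_1 -> a_plot_1
```

The point of the graph form is that, unlike a linear cut, the same footage supports many viewing paths, and the gaze signal just biases which edges get taken.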

Now, when reading, the eye moves in little jumps - saccades. They last tens of ms, the eye is blind during them, and with high-quality tracking, you know quite early just where that foveal peephole is going to land. So handwave a budget of a few ms for trajectory analysis, a few more for 200 Hz rendering latency, and you still have 10-ish ms to play with. At 20k tok/s, that's 200 tok.

So perhaps one might JIT the next sentence, or the topic of the next paragraph, or the entire nature of the document, based on the user's attention. Imagine a universal document: you start reading, and you find the document is about... whatever you wanted it to be about?
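The JIT loop might look something like the following sketch. Everything is a stand-in: `predict_landing` and `generate_next` are hypothetical names, and both are stubbed here so the sketch runs without an eye tracker or an LLM. A real system would replace them with saccade extrapolation and a fast decode call, and would have to finish inside the ~10 ms window above.

```python
# Sketch of saccade-contingent text JIT (all names hypothetical, stubs only).
import random

def predict_landing(gaze_samples):
    """Stub: extrapolate the in-flight saccade to a topic index.
    A real tracker would fit the trajectory mid-saccade."""
    return gaze_samples[-1]  # pretend the last sample is the landing target

def generate_next(context, focus_topic, max_tokens=200):
    """Stub LLM call: continue `context`, steered toward `focus_topic`,
    within the ~200-token budget."""
    return f"{context} [~{max_tokens} tokens about {focus_topic}]"

doc = "You start reading."
topics = ["character A", "character B"]
for _ in range(3):  # three saccades, three JIT-generated continuations
    gaze = [random.randrange(len(topics)) for _ in range(5)]  # fake samples
    target = predict_landing(gaze)
    doc = generate_next(doc, topics[target])
print(doc)
```

The document the reader "lands on" is generated only once the tracker knows where they are looking, which is what makes the universal-document idea even conceivable.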


awwaiid | 9 days ago

Generative TikTok for words

mncharity | 6 days ago

Hmm... TikTok has apparently long had "text enhanced with background" genres, and TIL, text posts since 2023. So text is ok. But non-independent items? For generative storytelling, "here is a next paragraph for the story", swipe left/right might work? Want to avoid "I don't much like this new paragraph, but I'm afraid to lose it and be stuck with something worse". Swipe left/right and up for continue? Swipe down to revisit old choices? Maybe present new text bolded, appended to old text, for context. Or a "next page of a picture book" idiom. A text field for direct creative or editorial intervention - speech to text. Maybe a side channel input for "story and background should now be soporific". Generative bedtime stories, but incrementally collaboratively created... Thanks for the brainstorming prompt.