top | item 44900290

silverlake | 6 months ago

It’s a glib analogy, but the goal remains the same. Today’s training sets are immense. Is there an architecture that can learn something with tiny training sets?

adrianwaj | 6 months ago

Maybe ZephApp, when it's actually released. But it would be interesting to record day-to-day conversations (face-to-face, using voice recognition) to train a virtual doppelganger of myself and use it to find uncommon commonalities between myself and others.

What would someone do with a year's worth of recorded conversations? Would the other parties be identified? How would it be useful, if at all? How about analyzing the sounds/waveform rather than the words? (e.g. BioAcousticHealth / vocal biomarkers)

Perhaps typing into a text field is the problem right now? Maybe have a HUD in a pair of glasses. Better than getting a brain chip! The most recent or most repeated conversations would matter most. It could lead to a reduction in isolation within societies, in favor of "AI training parties." Hidden questions in oneself answered by a robot guru as bedtime storytelling, but tied to the real world and real events.

Smart Glasses --> Smart Asses

Vibe Coding --> Tribe Loading

Everything Probable --> Mission Impossible

rkomorn | 6 months ago

I'm certainly not challenging anything you're writing, because I only have a very distant understanding of deep learning, but I do find the question interesting.

Isn't there a bit of a dividing line between something like tic-tac-toe, which has a finite (and, for a computer, pretty small) set of possible combinations, where it seems you shouldn't need a training set larger than that set of combinations, and something more open-ended, where the size of your training set mainly affects accuracy?

dpoloncsak | 6 months ago

It's just 3^9, right? 9 boxes, each either X, O, or blank, which gives 19,683 raw game states. We'd trim down from there if we accounted for reflections, rotations, and "unreachable" game states where a player has already won and you continue to mark boxes.
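That count is easy to check by brute force. Here's a minimal Python sketch; the win-checking helper and the legality rules (X moves first, a win ends the game) are my own framing of the "unreachable states" caveat above, not anything from the thread:

```python
from itertools import product

def has_win(board, player):
    """Check whether `player` has three in a row on a flat 9-cell board."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    return any(all(board[i] == player for i in line) for line in lines)

total = 0
reachable = 0
for board in product('XO.', repeat=9):   # '.' marks a blank cell
    total += 1
    x, o = board.count('X'), board.count('O')
    # X moves first, so X's count equals O's count or exceeds it by one.
    if o not in (x - 1, x):
        continue
    x_wins, o_wins = has_win(board, 'X'), has_win(board, 'O')
    # A win ends the game, so both players can't have completed a line,
    # and the winner's final move must be the last move played.
    if x_wins and o_wins:
        continue
    if x_wins and x != o + 1:
        continue
    if o_wins and x != o:
        continue
    reachable += 1

print(total, reachable)
```

This confirms the 19,683 raw grids; the filtering rules above cut that to 5,478 positions reachable in legal play, before any further reduction for reflections and rotations.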

onlyrealcuzzo | 6 months ago

And hundreds of millions of years of evolutionary intelligence.

rkomorn | 6 months ago

Next step in AI: teaching an LLM to think like a trilobite!