gwf | 3 years ago

Your second group represents the core "inner loop" of about a thousand revolutionary applications. Take the basic capability of translating image -> text -> speech (and the reverse), install it on a wearable device that can "see" its environment, and add domain-specific agents (a minimal sketch of that loop follows the list below). From that setup, you're not far from an AI that can whisper guidance into your ear like a co-pilot, enabling scenarios like:

1. step-by-step guidance for a blind person navigating the use of a public restroom.

2. an EMS AI helping you to save someone's life in an emergency.

3. an AI coach that can teach you a new sport or activity.

4. an omnipresent domain-expert that can show you how to make a gourmet meal, repair an engine, or perform a traditional tea ceremony.

5. a personal assistant that can anticipate your information needs (what's that person's name? where's the exit? who's the most interesting person here? etc.) and whisper the answer in your ear just as you need it.
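
To make that inner loop concrete, here's a minimal sketch in Python. The library choices (the Hugging Face transformers image-to-text pipeline for captioning, pyttsx3 for speech) are just one illustrative stack, and whisper_description is a hypothetical helper name; any captioning and TTS models would slot in the same way.

    # Minimal sketch of the image -> text -> speech loop. The library
    # choices are illustrative, and whisper_description is a hypothetical
    # helper, not anyone's actual API.
    from transformers import pipeline
    import pyttsx3

    captioner = pipeline("image-to-text")  # loads a default captioning model
    tts = pyttsx3.init()

    def whisper_description(image_path: str) -> str:
        """Caption what the camera sees, then speak it aloud."""
        caption = captioner(image_path)[0]["generated_text"]
        tts.say(caption)
        tts.runAndWait()
        return caption

    # e.g. feed it frames from a wearable camera:
    whisper_description("frame.jpg")

The domain-specific agents would sit between the captioning and speech steps, turning raw descriptions of the scene into the step-by-step guidance each scenario above calls for.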

Now add all of the above to an AR capability, so you can think or speak of something interesting and complex and have it visualized right before your eyes. With that, I could augment my imagination with almost super-human capabilities, solving complex problems almost as if it were all an internal mental monologue.

All of these scenarios are just a short hop from where we're at now, so mark my words: we will have "borgs" like those described above long before we reach anything like general AI.

lkbm | 3 years ago

These are good examples of what we're getting close to, but I'd add that Copilot is already an extremely helpful tool for coding. I don't blindly trust its output, but its suggestions are what I want often enough to save a lot of typing.

I still have to do all the hard thinking, but once I figure out what I want written and start typing, Copilot will spit out a good portion of the contextually-obvious lines of code.
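
As a made-up illustration of that workflow (not Copilot's actual output): you type the signature and docstring, and the body is the sort of contextually-obvious completion it tends to offer.

    import csv

    # Author-typed part: the signature and docstring.
    def load_scores(path: str) -> dict[str, float]:
        """Read 'name,score' rows from a CSV file into a dict."""
        # The body below is the kind of obvious completion
        # Copilot typically suggests, keystroke for keystroke.
        scores: dict[str, float] = {}
        with open(path, newline="") as f:
            for name, score in csv.reader(f):
                scores[name] = float(score)
        return scores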