patientzero | 2 years ago
It's hard for me to imagine whether multiple AGI-wrapped interfaces could use some other input, i.e. emulated remote desktops and screen shares (and whether that could be adequately chained, feeding one AGI's output into another interface's input). But I feel like adding all of this data ultimately makes it harder to proofread and adapt whatever the AGI proposes, and then to automate its repeatable usage (like extracting scripts or code).
muzani | 2 years ago
One of my other top use cases for it is getting it to read docs. It will give me step-by-step instructions to, say, deactivate Facebook or do whatever with AWS. Sometimes I get stuck, so I send it a screenshot and it'll tell me that the button is actually a tab, or on the left, or that I need to scroll down, etc.
Chained data will likely have a hard time. Most of these wrapper startups will probably have a hard time too. I tried to make an AI wrapper startup but I couldn't. It's a rare time where the unicorns with huge teams are actually moving faster than the solo devs. It's almost like they were aided by AI or something.
patientzero | 2 years ago
I think it has always been the case that a GUI is bad for communicating or teaching how to use it. In a world where everyone is an autodidact and gets very little help, a GUI could still win on other things: helping the user figure out what to do, or jogging his memory of how he did it before.
I'm not really focusing here on the GPT interface itself, but if it could wrap all sorts of interfaces, then text and GUI ones could be mostly displaced, used directly only rarely. Still, I think such AI interfaces would put themselves at a disadvantage by not using text as the medium between them.