Looking at ChatGPT and its clones, they do support some forms of markup, but they seem to make it all the more apparent that GUI advice is much harder to communicate or act on than text. Further, many of the problems of using text, such as discoverability, are alleviated by such assistance. So is this going to swing the pendulum back, or will it take another direction?
muzani|2 years ago
I've been mostly just sending it screenshots and photos lately - it's able to handle those faster than text.
Instead of asking it to check your PR, screenshot the PR diff. Instead of giving it logs, screenshot the error message and the log - apparently it understands color-coded logs better. If you want it to do HackerRank, well, you can't copy-paste from there during an interview, but you can take photos with your phone.
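The screenshot workflow described above can be sketched against a vision-capable chat API. This is a minimal sketch assuming an OpenAI-style multimodal message format (an `image_url` content part carrying a base64 data URL); the exact payload shape depends on your provider:

```python
import base64


def build_screenshot_message(png_bytes: bytes, prompt: str) -> dict:
    """Package a screenshot plus a question as one user chat message.

    Assumes the OpenAI-style multimodal content format, where an image
    is passed as a data URL inside an "image_url" content part. Adapt
    the payload shape if your provider uses a different schema.
    """
    b64 = base64.b64encode(png_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{b64}"},
            },
        ],
    }


# Example: ask the model to review a PR diff captured as a screenshot.
# (The PNG bytes here are a placeholder, not a real image.)
msg = build_screenshot_message(b"\x89PNG...", "Review this PR diff for bugs.")
```

The message dict would then be sent as one entry in the `messages` list of a chat-completion request; the same shape works for error-log screenshots or phone photos.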
patientzero|2 years ago
It's hard for me to imagine whether multiple AGI-wrapped interfaces could use some other input, i.e. emulated remote desktops and screen shares (and whether one AGI's output could be adequately chained into another interface's input), but I feel like adding all of this data ultimately makes it harder to proofread and adapt what the AGI proposes and then automate its repeatable use (as you would with scripts or code).
pr07ecH70r|2 years ago
As for the GUI, Stable Diffusion may reach this point as far as I know, but it hasn't yet.
vpjosh|2 years ago
In other circles, TUI referred to this instead:
https://tangible.media.mit.edu/
rrr_oh_man|2 years ago