JSavageOne | 2 years ago
For example, I've noticed that a lot of the time when I ask ChatGPT a coding question it gets about 90% of the answer. When I tell it what to fix and/or add, it usually gets the rest right. I wonder if they're using these refined answers to fine-tune the model on those original prompts.
I wonder how the LLM interacts with other software like the calculator or the Python interpreter. It would be great if this were modular, so that the LLM OS could be more like Unix than Windows, which is what OpenAI seems to be trying to emulate.
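A minimal sketch of the Unix-like modularity imagined above: the LLM emits a structured tool call, and a thin dispatcher routes it to small, composable tools (here, a calculator) rather than a monolithic built-in. All names and the tool-call shape here are hypothetical illustrations, not any real OpenAI API.

```python
# Hypothetical tool dispatcher: the model emits something like
# {"tool": "calculator", "input": "2 * 21"} and a small registry
# routes it to an independent tool, Unix-pipe style.
import ast
import operator


def calculator(expression: str) -> str:
    """Safely evaluate basic arithmetic by walking the AST (no eval)."""
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in ops:
            return ops[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")

    return str(ev(ast.parse(expression, mode="eval")))


# Registry of pluggable tools; adding a Python interpreter or search
# tool would just mean registering another entry here.
TOOLS = {"calculator": calculator}


def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the matching tool."""
    return TOOLS[tool_call["tool"]](tool_call["input"])


print(dispatch({"tool": "calculator", "input": "2 * 21"}))  # prints 42
```

The point of the sketch is the registry: each tool is a separate process-like unit behind a uniform interface, so tools can be swapped or composed without touching the model itself.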
Ultimately though, it seems to me like AGI is fairly straightforward from here. Just train on more quality data (in particular, enable the machine to generate this training data itself), increase parameter count, and the LLM just gets better and better. It seems like we don't even need any major new breakthroughs to create something resembling AGI.