afturner|3 years ago
A few little things are weird (I can exec into a stopped container for example) but I was able to start another container and persist files.
Wild. This is unbelievable. Can anyone please explain to me why this isn't as wildly groundbreaking as this seems?
zerocrates|3 years ago
Obviously what's happening is much more complex, and more impressive, than just spitting back the exact things it's seen: it can incorporate the specific context of the previous prompts in its responses, among other things. But I don't know that it's necessarily different in kind from the stuff people ask it to do in terms of "write X in the style of Y."
None of this is to say it's not impressive. I particularly have been struck by the amount of "instruction following" the model does, something exercised a lot by the prompts people are using in this thread and the article. I know OpenAI had an article out earlier this year about their efforts and results at that time specifically around training the models to follow instructions.
thepasswordis|3 years ago
It is, and people haven't realized it yet.
isp|3 years ago
It is literally years - possibly decades - ahead of my prior expectations.
Aeolun|3 years ago
It’s really hard to utilize if the results aren’t consistent.