TheGlav | 1 year ago
The text presents a logical inconsistency. Here's a breakdown of the issues:
Sequence of Events: Joe drives Sue to the university and then returns home. This part is clear.
Inconsistency: The text states that after Joe drove home, he drank tea with Sue in the kitchen. However, Sue should be at the university if Joe drove her there, not at home.
To correct this, consider one of the following revisions:
Sue returns home: If Sue returns home with Joe:
"Joe drove Sue to the university. Afterwards, he drove back home with her, and they drank tea together in the kitchen."
Joe drinks tea alone: If Sue stays at the university:
"Joe drove Sue to the university. Afterwards, he drove home again and drank tea alone in the kitchen."
Either option removes the inconsistency and clarifies the sequence of events.
This is pretty impressive.
BiteCode_dev | 1 year ago
But the "dumb parrot" narrative, which fit GPT-2 quite well and may still be useful for understanding the foundations of the tech behind GPT-3, is now just a demonstration of ignorance.
We don't know what they are really doing. Nobody working outside of OpenAI knows.
But if you use it long enough, it becomes obvious that we have passed the fancy-completion phase: there is a little logic in there. Not a lot, but it already makes the results much better.
And definitely much better than the competition.
exe34 | 1 year ago
I wouldn't be too sure the OpenAI people know either. If a machine has enough moving parts, it's hard for any individual human to understand all of it. Nowadays it's possible that nobody quite knows why a silicon compiler put a particular block in a particular position on the die; it just figured that was the best way to save power, or space, or whatever.