jal278
|
2 years ago
The long-term impact of this paper has confused me from a technical lens, although I get it from a political lens. I'm glad it raises the risks from LLMs, but it makes technical/philosophical claims that seemed poorly supported and that empirically have not held up -- imo because the authors chose not to engage with RLHF at all (which was already being deployed through GPT-3 at the time, and which enables grounding and gets around 'parrotness'), and because its over-the-top framing ("stochastic parrot") captures very poorly what it feels like to meaningfully engage with models like GPT-4.