tj-teej | 1 year ago If anyone is curious, a Meta data scientist published a great piece about what LLMs are actually doing (and are therefore able to do), and how that reality is papered over by the chatbot interface. It's a long but very engaging read. https://medium.com/@colin.fraser/who-are-we-talking-to-when-...
grey-area|1 year ago Great article, which really explores why we fall for LLMs and think they are doing a lot more thinking than they are. Thanks.
non_sequitur|1 year ago This is a good article, but very outdated - none of the examples he cites are relevant anymore.
rahimnathwani|1 year ago This article is long but doesn't mention key concepts like instruction tuning. I'd suggest the Llama paper as a more worthwhile source.
grey-area|1 year ago It does talk about OpenAI explicitly instruction-tuning the LLM to try to constrain the output, and the limitations of such approaches.
tivert|1 year ago That article is fantastic.