
guizzy | 2 years ago

It's quite unlikely to hallucinate in this case, because it's not being asked to answer a question from information it was trained on; the facts being summarized are fed in as part of the request's context.
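To make that concrete, here is a minimal sketch of what context-grounded summarization looks like: the article text is pasted directly into the prompt, so the model works from what it's given rather than from its training data. The local endpoint URL and model name are assumptions (e.g. a local server exposing an OpenAI-compatible API for Mixtral), not anything from the comment above.

    # Minimal sketch: summarize by putting the source text in the prompt itself.
    # Assumes a hypothetical local OpenAI-compatible server at localhost:8080
    # serving a model registered as "mixtral".
    import requests

    ARTICLE = """(full text of the news article goes here)"""

    prompt = (
        "Summarize the following article in three sentences. "
        "Use only information contained in the article.\n\n"
        f"ARTICLE:\n{ARTICLE}"
    )

    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",  # assumed local endpoint
        json={
            "model": "mixtral",  # assumed model name
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.2,
        },
        timeout=120,
    )
    print(resp.json()["choices"][0]["message"]["content"])

Because every fact the model needs is in the context window, the failure mode shifts from inventing facts to misreading the ones it was given, which is exactly the attribution problem described below.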

That's not to say it will always get everything right, of course. In my experimentation with LLM-powered summarization of news articles, the thing it most often struggles with is quote attribution. The way some writers describe who said what in a conversation sometimes confuses the models I use (mostly Mixtral these days, which is roughly GPT-3.5 level): the summary would claim that someone said something I knew that person definitely did not say. When I checked the actual article, it turned out the journalist had said that thing and the interviewee had said the opposite, but the LLM attributed it to the interviewee.
