GeneralMaximus | 2 months ago
The results are ... okay. The biggest problem is that I can't run some of the largest models on my hardware. The ones I am running (mostly Qwen 3 at various parameter counts and quantization levels) often hallucinate. Overall, I can't say this is a practical or useful setup, but I'm just playing around, so I don't mind.
That said, I doubt SOTA models would be much better at this task. IMO, LLM-generated summaries and insights are never very good or useful. They're fine for assessing whether a particular text is worth reading, but they often extract the wrong information, miss something critical, or over-focus on one specific part of the text.