superheropug|21 days ago
Skimming the article, this seems like another case of the explainability problem, no? The conversation with the LLM makes the results "easier to understand" (a requirement for real use cases) but loses accuracy. Still, it's good to have more studies confirming this tradeoff.
Cynddl|21 days ago
Co-author here and happy to answer questions!