They also tried to heal the damage, with partial success. Besides, it's science: you need to test your hypotheses empirically. Also, performing a study and sharing the results is possibly the best way to draw researchers' attention to the issue.
Yeah, I mean, I get that, but surely we have research like this already. "Garbage in, garbage out" is basically the catchphrase of the entire ML field. I guess the contribution here is that "brainrot"-like text is garbage, which, even though it seems obvious, does warrant scientific investigation. But then that's what the paper should focus on, not that "LLMs can get 'brain rot'".
I guess I don't actually have an issue with this research paper existing, but I do have an issue with its clickbait-y title that gets it a bunch of attention, even though the actual research is really not that interesting.
nazgul17|4 months ago
Version467|4 months ago
yieldcrv|4 months ago
just use a different model?
don't train it with bad data, and just start a new session if your RAG muffins went off the rails?
what am I missing here
Sxubas|4 months ago
And while this result isn't extraordinary, it definitely creates knowledge and could close the gap to more interesting observations.
Perz1val|4 months ago