jauws | 28 days ago
More broadly, crowdsourced data where human inputs are fundamentally diverse lets us study problems that static benchmarks can't touch. The recent "Artificial Hivemind" paper (Jiang et al., NeurIPS 2025 Best Paper) showed that LLMs exhibit striking mode collapse on open-ended tasks, both within models and across model families, and that current reward models are poorly calibrated to diverse human preferences. Fiction at scale is exactly the kind of data you need to diagnose and measure this: you can see where models converge on the same tropes, whether "creative" behavior actually persists or collapses into repetition, and how novelty degrades over time. That signal matters well beyond fiction, in domains like scientific research where the balance between convergence and originality is critical.
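As a concrete illustration of the kind of convergence measurement described above, here is a minimal sketch of one possible diversity metric: mean pairwise Jaccard distance over token sets of completions for the same open-ended prompt. This is an assumption about how one might operationalize "models converge on the same tropes," not the metric used in the paper; all names and data are illustrative.

```python
# Illustrative sketch (not from the paper): score a set of model
# completions for the same open-ended prompt. Low pairwise distance
# between token sets signals convergence / mode collapse.

def token_set(text: str) -> set[str]:
    """Crude tokenization: lowercase whitespace split."""
    return set(text.lower().split())

def mean_pairwise_jaccard_distance(completions: list[str]) -> float:
    """1.0 = maximally diverse outputs; 0.0 = identical outputs."""
    sets = [token_set(c) for c in completions]
    dists = []
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            union = sets[i] | sets[j]
            inter = sets[i] & sets[j]
            dists.append(1 - len(inter) / len(union) if union else 0.0)
    return sum(dists) / len(dists) if dists else 0.0

# A collapsed model repeats the same trope; a diverse one does not.
collapsed = ["the dragon guarded its gold"] * 4
diverse = [
    "the dragon guarded its gold",
    "a lighthouse keeper counted storms",
    "she traded memories for maps",
    "rust bloomed across the hull",
]
print(mean_pairwise_jaccard_distance(collapsed))  # → 0.0
print(mean_pairwise_jaccard_distance(diverse))    # close to 1.0
```

In practice one would use embedding-based similarity rather than token overlap, but the shape of the diagnostic is the same: many prompts, many completions, a distribution of pairwise distances per model.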