quirino | 3 months ago
In the IEEEXTREME university programming competition there are ~10k participating teams.
Our university has quite a strong competitive programming program, and our best teams usually rank in the top 100. Last year a team ranked 30th, and it wasn't even our strongest team (which didn't participate).
This year none of our teams was able to get in the top 1000. I would estimate close to 99% of the teams in the Top 1000 were using LLMs.
Last year they didn't seem to help much, but this year they rendered the competition pointless.
I've read blogs/seen videos of people who got in the AOC global leaderboard last year without using LLMs, but I think this year it wouldn't be possible at all.
Aurornis|3 months ago
Cheating is rampant anywhere there’s an online competition. The cheaters don’t care about respecting others, they get a thrill out of getting a lot of points against other people who are trying to compete.
Even in the real world, my runner friends always have stories about people getting caught cutting trails, and all the lengths their running organizations now have to go to in order to catch cheaters, because it's so common.
The thing about cheaters in a large competition is that it doesn't take many to crowd out the leaderboard, because the top of the leaderboard is exactly where they concentrate. If there are 1000 teams competing and only 1% cheat, that 1% could still fill the top 10.
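That arithmetic can be sketched in a few lines. The score distributions below are made up for illustration (honest teams scoring in a human range, cheaters near the maximum), but they show how 10 cheaters out of 1000 entrants is enough to occupy the entire visible top 10:

```python
import random

random.seed(0)

# Hypothetical scores: 990 honest teams in a human range,
# 10 cheaters (1%) scoring near the ceiling.
honest = [("honest", random.uniform(0, 90)) for _ in range(990)]
cheaters = [("cheater", random.uniform(95, 100)) for _ in range(10)]

# Sort all 1000 teams by score, highest first.
leaderboard = sorted(honest + cheaters, key=lambda t: t[1], reverse=True)

top10 = [label for label, _ in leaderboard[:10]]
print(top10.count("cheater"))  # 10 -- the entire top 10 is cheaters
```

The 99% honest majority is invisible at the top: the leaderboard samples the extreme tail, which is precisely where cheaters land.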
hoherd|3 months ago
> Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
evil-olive|3 months ago
reminds me of something I read in "I’m a high schooler. AI is demolishing my education." [0,1] emphasis added:
> During my sophomore year, I participated in my school’s debate team. I was excited to have a space outside the classroom where creativity, critical thinking, and intellectual rigor were valued and sharpened. I love the rush of building arguments from scratch. ChatGPT was released back in 2022, when I was a freshman, but the debate team weathered that first year without being overly influenced by the technology—at least as far as I could tell. But soon, AI took hold there as well. Many students avoided the technology and still stand against it, but it was impossible to ignore what we saw at competitions: chatbots being used for research and to construct arguments between rounds.
high school debate used to be an extracurricular thing students could do for fun. now they're using chatbots in order to generate arguments that the students can just regurgitate.
the end state of this seems like a variation on Dead Internet Theory - Team A is arguing the "pro" side of some issue, Team B is arguing the "con" side, but it's just an LLM generating talking points for both sides and the humans acting as mouthpieces. it still looks like a "debate" to an outside observer, but all the critical thinking has been stripped away.
0: https://www.theatlantic.com/technology/archive/2025/09/high-...
1: https://archive.is/Lda1x
jvanderbot|3 months ago
Imagine the shitshow that gaming would be without any kind of anti-cheat measures, and that's the state of competitive programming.
zulban|3 months ago
They're just different types of fun. The problem is if one type of fun is ruined by another.
Ekaros|3 months ago
With products I want actual correctness. And not something thrown away.
quirino|3 months ago
The Regional Finals and World Finals are in a single venue with a very controlled environment. Just like the IOI and other major competitions.
National High School Olympiads have been dealing with bigger issues because there are too many participants in the first few phases, and the schools themselves usually host the exams. There has been rampant cheating. In my country I believe the organization has resorted to manually reviewing all submissions, but I can only see this getting less and less effective.
This year the Canadian Computing Competition didn't officially release the final results, which for me is the best solution:
> Normally, official results from the CCC would be released shortly after the contest. For this year’s contest, however, we will not be releasing official results. The reason for this is the significant number of students who violated the CCC Rules. In particular, it is clear that many students submitted code that they did not write themselves, relying instead on forbidden external help. As such, the reliability of “ranking” students would neither be equitable, fair, or accurate.
Available here: [PDF] https://cemc.uwaterloo.ca/sites/default/files/documents/2025...
Online competitions are just hopeless. AtCoder and Codeforces have rules against AI but no way to enforce them; a minimally competent cheater is impossible to detect. Meta Hacker Cup has a long history and is backed by a large company, but even its leaderboard was crowded out by cheaters this year.
armchairhacker|3 months ago
I don’t see why competitive debate or programming would be different. (But I understand why a fair global leaderboard for AOC is no longer feasible).