negativeonehalf | 1 year ago
> the most damning evidence you could possibly have is that you cannot reproduce the results
Seems like the "whistleblower" didn't even have that. From the paper by the AlphaChip authors: "We provided the committee with one-line scripts that generated significantly better RL results than those reported in Markov et al., outperforming their “stronger” simulated annealing baseline. We still do not know how Markov and his collaborators produced the numbers in their paper."
AshamedCaptain | 1 year ago
> [whistleblower] stated he did not have evidence to support his suspicion of fraud, that he needed to cross a much larger threshold to prove his suspicion
is exactly what I was saying.
But in addition, this is hearsay, "quoted" only by Google's rep; the whistleblower himself never actually said it. It has exactly zero value. Using this quote at face value is intentionally misleading no matter which way you put it: they are literally the defendants, so they are essentially quoting themselves.
> Seems like the "whistleblower" didn't even have that.
Before he was fired?
Also, I find it funny that, for all the talk of a reproducibility crisis, anyone would trust the paper's authors for a second over the replication attempts of a third party (and one carried out by one of the most important names in the entire floorplanning academic community, to begin with). At least the EDA community has used benchmarks that other papers have used repeatedly, allowing some semblance of comparison. The criticism that "these ancient benchmarks do not reflect our holy ways or whatever" is one that maybe I share too; but it's a hoop that everyone who has ever published such a paper (including all the big names) has had to jump through in order to be published, unless, apparently, you are called Google and publish in Nature.
Nature doesn't exactly have a stellar track record of ensuring Google's results are verifiable ... https://retractionwatch.com/2024/05/14/nature-earns-ire-over...
Frankly, at this point I don't even know why anyone would bother with Google's paper. They seem to have managed to alienate the entire floorplanning academic community, and whenever I read one of Google's "responses" I see why.
wholehog | 1 year ago
> Nature doesn't exactly have a stellar track record of ensuring Google's results are verifiable ... https://retractionwatch.com/2024/05/14/nature-earns-ire-over...
Google open-sourced AlphaFold-3 a week ago: https://www.nature.com/articles/d41586-024-03708-4
Google infrastructure is weird and takes significant work to disentangle from a given project, so I'm not surprised it took them six months to open-source it.
> Before he was fired?
I don't know how long someone should expect to remain employed when making baseless allegations of scientific misconduct against his colleagues instead of doing actual work. Again, he did not have evidence to support his suspicion of fraud, and he admitted this at the time.
> most important names of the entire floorplanning academic community
If the old guard struggles with ML basics, what can the AlphaChip authors be expected to do about this? This pattern is unfortunately common when ML comes for a new field -- some researchers adapt and build, and others fail and complain (or worse, don't really even try).
> it's a hoop that everyone who has ever published any such paper (including all the big names) has had to pass in order to be published
If the hoop doesn't match what modern chip design needs, we shouldn't expect researchers to hop through it. No one is comparing Vision Transformers against AlexNet on MNIST. Meanwhile, AlphaChip is already used in production to make real layouts for real chips.
negativeonehalf | 1 year ago
They open-sourced AlphaFold-3 a week ago, so I'm not sure how you can say this is part of a pattern of being overly closed: https://www.nature.com/articles/d41586-024-03708-4
I'm sorry that the "most important names of the entire floorplanning academic community" are struggling with ML basics, but it is what it is. The "Chip Has Sailed" paper makes this pretty clear. This pattern is unfortunately common when ML comes for a new field -- some researchers adapt and build, and others fail and complain (or worse, don't really even try).
TPU is a big deal!
I think the one thing we agree on is that this field desperately needs large public benchmarks that are representative of modern chip design.