top | item 42297739

dogleg77 | 1 year ago

I am trying to understand what you mean here by potential to disrupt. AlphaChip addresses one out of hundreds of tasks in chip design. Macro placement is a part of mixed-size placement, which is handled just fine by existing commercial tools, many academic and open-source tools, and Nvidia AutoDMP. Even if AlphaChip were commonly accepted as a breakthrough, there is no disruption here. Direct comparisons from the last three years show that AlphaChip is worse. Granted, Google is belittling these comparisons, but that's what you'd expect. In any case, evidence is evidence.


nemonemo | 1 year ago

> Direct comparisons from the last 3 years show that AlphaChip is worse.

Do you have any evidence for this claim? The whole point of this thread is that the direct comparisons might have been insufficient, and even the author of "The Saga" article, who is biased against the AlphaChip work, agreed.

> Granted, Google is belittling these comparisons, but that's what you'd expect.

This kind of language doesn't help any position you want to advocate.

About "the potential to disrupt": a potential is a potential. This is early work. What I find interesting is that people are so eager to assert that it's a dead-end without sufficient exploration.

nemonemo | 1 year ago

> direct comparisons in Cheng

That's the ISPD paper referenced many times in this whole thread.

> Stronger Baselines

Re: "Stronger baselines", the paper "That Chip Has Sailed" says "We provided the committee with one-line scripts that generated significantly better RL results than those reported in Markov et al., outperforming their “stronger” simulated annealing baseline." What is your take on this claim?

As for 'regurgitating,' I don't think it helps Jeff Dean's point either. Based on the discussion between vighneshiyer and me above, describing the work as "fundamentally flawed" does not seem far-fetched. If Cheng and Kahng do not agree with this characterization, I believe they can publish another invited paper.

On 'belittle,' my main issue was with your follow-up phrase, 'that’s what you’d expect.' It comes across as overly emotional and detracts from the discussion.

Regarding the lack of follow-ups (that I am aware of), the substantial resources required for this work seem beyond what academia can easily replicate. Additionally, according to "The Saga" article, both lead authors other than Jeff Dean had left Google; however, their Twitter/X and LinkedIn profiles suggest they have since returned and worked on the "That Chip Has Sailed" paper.

Personally, I hope they reignite their efforts on RL in EDA and work toward democratizing their methods so that other researchers can build new systems on their foundation. What are your thoughts? Do you hope they improve and refine their approach in future work, or do you believe there should be no continuation of this line of research?

dogleg77 | 1 year ago

I am referring to the direct comparisons in Cheng et al. and in "Stronger Baselines" that everyone is discussing. Let's grant your point that they "might have been insufficient." We don't currently have the luxury to be frequentists, as we don't have many academic groups reporting results from running Google's code. From the Bayesian perspective, that's the evidence we have.

Perhaps you know of more such published papers than I do, or you know the reasons why there aren't many. Either way, this lack of follow-up over three years suggests a dead-end.

As for "belittle", how would you describe Jeff Dean's use of the term "regurgitating" in a scientific dispute? Or the term "fundamentally flawed" in reference to a 2023 paper by two senior professors with serious expertise and a strong track record in the field, a paper that, for some reason, no other experts in the field criticize? Where was Jeff Dean when that paper was published and reported by the media?

Unless Cheng and Kahng agree with this characterization, Jeff Dean's timing and language are counterproductive. If he ends up being wrong on this, what's the right thing to do?