top | item 40900824

SuperscalarMeme | 1 year ago

I'll give you an alternate take: the compute power available to EDA software has scaled at roughly the same rate as the number of transistors on a die, so the complexity of the problem relative to the available compute has stayed roughly constant. Standard cell design therefore remains an efficient way of reducing the complexity of the problems EDA tools have to solve.

kens | 1 year ago

That's an interesting thought. However, it assumes that the problem scales with the number of transistors, i.e. O(N). I expect that the complexity of place and route algorithms is worse than O(N), which means the algorithms will fall behind as the number of transistors increases. (Technically, the algorithms are NP-complete so you're doomed, but what matters is the complexity of the heuristics.)
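A toy model of this point (the 1.5 exponent is an illustrative assumption for a superlinear placement heuristic, not a measured figure): if the compute available per run scales linearly with transistor count N, but the heuristic costs on the order of N^1.5 operations, the normalized runtime still grows like sqrt(N), so the tools fall further behind as N grows.

```python
# Toy model: compute available scales linearly with transistor count N,
# but a place-and-route heuristic is assumed to cost ~N^1.5 operations
# (illustrative exponent). Normalized runtime then grows like sqrt(N).

def relative_runtime(n, exponent=1.5):
    """Runtime of an O(n^exponent) heuristic divided by linearly
    scaling compute (n operations per unit time)."""
    return n ** exponent / n  # = n^(exponent - 1)

for doublings in (0, 10, 20):  # 1x, ~1000x, ~1,000,000x transistors
    n = 2 ** doublings
    print(f"N = 2^{doublings:2d}: relative runtime x{relative_runtime(n):,.0f}")
```

With these assumptions a million-fold increase in transistor count (2^20) makes each run ~1000x slower even though compute kept pace linearly, which is the "algorithms fall behind" effect in miniature.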

morsch | 1 year ago

It's worse than that, isn't it? Not only are the algorithms presumably superlinear and the transistor count increasing exponentially, but the compute power per transistor has been decreasing over time. See e.g. [1].

Although I suppose if the problem is embarrassingly parallel, the SpecINT x #cores curves might just about reach the #transistors curve.

[1] https://substackcdn.com/image/fetch/w_1272,c_limit,f_webp,q_... via https://www.semianalysis.com/p/a-century-of-moores-law figure 1
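A rough sketch of the parallelism point (all growth rates below are illustrative assumptions, not data from [1]): single-thread performance alone falls far behind exponential transistor growth, but if the workload is embarrassingly parallel, aggregate compute is per-core performance times core count, which can track the transistor curve much more closely.

```python
# Illustrative growth rates (assumptions, not measured data):
# transistor count doubles every ~2 years, single-thread (SpecINT-like)
# performance grows ~10%/yr, core count doubles every ~3 years.

YEARS = 20
transistor_growth = 2 ** (1 / 2)   # doubling every 2 years (assumed)
single_thread_growth = 1.10        # ~10% per year (assumed)
core_count_growth = 2 ** (1 / 3)   # doubling every 3 years (assumed)

transistors = transistor_growth ** YEARS
serial_compute = single_thread_growth ** YEARS
parallel_compute = serial_compute * core_count_growth ** YEARS

print(f"transistors:      x{transistors:,.0f}")
print(f"serial compute:   x{serial_compute:,.0f}")
print(f"parallel compute: x{parallel_compute:,.0f}")
```

Under these made-up rates, serial compute lags the transistor curve by two orders of magnitude over 20 years, while the parallel aggregate lands within a small factor of it, which is roughly the "SpecINT x #cores might just about reach #transistors" scenario.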