item 44766740

bmc7505 | 7 months ago

> The solvers participating in this track will be executed with a wall-clock time limit of 1000 seconds. Each solver will be run on a single AWS machine of the type m6i.16xlarge, which has 64 virtual cores and 256GB of memory.

For comparison, a single H100 has 14,592 CUDA cores, and GPU training clusters are measured in exaflops. The scaling exponents clearly favor LLM training and inference, but it is unclear whether the algorithms used for parallel SAT would see comparable benefits from compute scaling. I maintain that either (1) SAT researchers have not yet learned the bitter lesson, or (2) it is not applicable across all of AI as Sutton claims.
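To make the comparison concrete, here is a minimal back-of-envelope sketch of the raw parallelism gap the comment alludes to. The figures are the ones stated above (64 vCPUs on an m6i.16xlarge, 14,592 CUDA cores on an H100); the "ideal portfolio speedup" line assumes, purely for illustration, a portfolio-style parallel SAT solver whose throughput scales linearly with worker count, which real solvers do not achieve:

```python
# Back-of-envelope: parallelism available to a SAT Competition solver
# vs. a single GPU, using the core counts quoted in the thread.

AWS_VCPUS = 64        # m6i.16xlarge (competition hardware, per the rules)
H100_CUDA_CORES = 14_592  # CUDA cores on one H100 (PCIe variant)

ratio = H100_CUDA_CORES / AWS_VCPUS
print(f"Raw core-count ratio: {ratio:.0f}x")  # ~228x more parallel lanes

# Hypothetical: if a portfolio solver scaled linearly in worker count
# (it does not -- clause sharing and memory bandwidth limit this),
# the same wall-clock budget would cover this many more search threads.
TIME_LIMIT_S = 1000
cpu_core_seconds = AWS_VCPUS * TIME_LIMIT_S
gpu_core_seconds = H100_CUDA_CORES * TIME_LIMIT_S
print(f"CPU core-seconds per run: {cpu_core_seconds:,}")
print(f"GPU core-seconds per run: {gpu_core_seconds:,}")
```

Of course, CUDA cores and x86 vCPUs are not interchangeable units of work: CDCL-style SAT search is branchy, pointer-chasing, and memory-bound, which is a poor fit for SIMT execution, so the raw ratio overstates any realizable speedup. That mismatch is arguably the crux of whether the bitter lesson applies here.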

No comments yet.