top | item 44763838

bmc7505 | 7 months ago

The difference is that SAT/SMT solvers have primarily relied on single-threaded algorithmic improvements, [1] and, unlike neural networks, we have not [yet] discovered a uniformly effective strategy for leveraging additional computation to reduce wall-clock runtime. [2]

[1]: https://arxiv.org/pdf/2008.02215

[2]: https://news.ycombinator.com/item?id=36081350

PaulHoule | 7 months ago

RETE-family algorithms did turn out to be somewhat parallelizable, enough to get a real speed-up on ordinary multicore CPUs. But the 1980s idea that symbolic AI would be massively parallelizable turned out to be a disappointment.

https://en.wikipedia.org/wiki/Fifth_Generation_Computer_Syst...

bmc7505 | 7 months ago

You could argue that since automatic differentiation and symbolic differentiation are equivalent, [1] symbolic AI did succeed by becoming massively parallelizable; we just needed to scale up the data and hardware in kind.

[1]: https://arxiv.org/pdf/1904.02990
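A toy illustration of that equivalence: forward-mode automatic differentiation via dual numbers produces exactly the same derivative as the hand-derived symbolic formula. This is a minimal sketch, not the construction from the paper; all names are illustrative.

```python
# Forward-mode AD with dual numbers vs. a hand-derived symbolic
# derivative, for f(x) = x**2 + 3*x. Illustrative sketch only.
from dataclasses import dataclass

@dataclass
class Dual:
    """A dual number a + b*eps with eps**2 = 0; b carries the derivative."""
    val: float  # function value
    der: float  # derivative value

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        # the product rule falls out of (a + b*eps)(c + d*eps)
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)
    __rmul__ = __mul__

def f(x):
    return x * x + 3 * x

def autodiff(fn, x):
    """Seed the derivative slot with 1.0 and read it back out."""
    return fn(Dual(x, 1.0)).der

def symbolic_derivative(x):
    # d/dx (x^2 + 3x) = 2x + 3, derived by hand
    return 2 * x + 3

print(autodiff(f, 4.0), symbolic_derivative(4.0))  # both print 11.0
```

Both routes compute the same function; they differ only in whether the chain rule is applied to a numeric trace or to an expression tree.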

yorwba | 7 months ago

> [2]

In the comments, zero_k posted a link to the SAT competition's parallel track. The 2025 results page is here: https://satcompetition.github.io/2025/results.html. Parallel solvers consistently score lower (take less time) than single-threaded solvers, and solve more instances within the time limit. Probably the speedup is nowhere near proportional to the amount of parallelism, but if you just want results a little bit faster, throwing more cores at the problem does generally seem to work.
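One common strategy behind those parallel-track results is the portfolio approach: run diversified copies of a sequential solver concurrently and take the first answer, which buys faster wall-clock results without a proportional speedup. A minimal sketch with a toy DPLL on DIMACS-style literal lists; the solver and all names are illustrative, not any competition entrant's actual code.

```python
# Portfolio-style parallel SAT: diversified sequential workers race,
# first answer wins. Toy sketch, not a competitive solver.
import random
from concurrent.futures import ThreadPoolExecutor, as_completed

def simplify(clauses, lit):
    """Assign lit=True: drop satisfied clauses, strip falsified literals.
    Returns None on an empty (conflicting) clause."""
    out = []
    for clause in clauses:
        if lit in clause:
            continue
        reduced = [l for l in clause if l != -lit]
        if not reduced:
            return None  # conflict
        out.append(reduced)
    return out

def dpll(clauses, assignment, rng):
    """Pure backtracking search; branching is randomized so that
    workers with different seeds explore different subtrees."""
    if not clauses:
        return assignment  # all clauses satisfied
    lit = rng.choice(rng.choice(clauses))
    for choice in (lit, -lit):
        reduced = simplify(clauses, choice)
        if reduced is not None:
            result = dpll(reduced, assignment + [choice], rng)
            if result is not None:
                return result
    return None  # both branches failed: UNSAT under this prefix

def portfolio_solve(clauses, seeds):
    """Race one randomized worker per seed; return the first model found,
    or None if the formula is unsatisfiable. (Real portfolio solvers also
    kill the losing workers and share learned clauses.)"""
    with ThreadPoolExecutor(max_workers=len(seeds)) as pool:
        futures = [pool.submit(dpll, clauses, [], random.Random(s))
                   for s in seeds]
        for fut in as_completed(futures):
            model = fut.result()
            if model is not None:
                return model
    return None

cnf = [[1, 2], [-1, 2], [-2, 3]]
print(portfolio_solve(cnf, seeds=[0, 1, 2, 3]))
```

Because each worker is a complete solver on its own, adding workers can only reduce the time to the first answer; that is exactly the "faster, but not proportionally faster" behavior the competition results show.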

bmc7505 | 7 months ago

> The solvers participating in this track will be executed with a wall-clock time limit of 1000 seconds. Each solver will be run on a single AWS machine of the type m6i.16xlarge, which has 64 virtual cores and 256GB of memory.

For comparison, an H100 has 14,592 CUDA cores, and GPU clusters are measured in exaflops. The scaling exponents are clearly favorable for LLM training and inference, but whether the algorithms used for parallel SAT would benefit from the same compute scaling is unclear. I maintain that either (1) SAT researchers have not yet learned the bitter lesson, or (2) it is not applicable across all of AI, as Sutton claims.