junto | 3 months ago
Now that we have vastly faster compute, open FPGA bitstream access, on-chip monitoring, and cheap, dense temperature/voltage sensing, plus reinforcement-learning/evolution hybrids, it becomes possible to select explicitly for robustness and generality, not just for functional correctness.
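To make "selecting for robustness" concrete, here is a toy sketch (my own illustration, not Thompson's actual setup): score each candidate circuit under several simulated operating conditions and use the worst case as its fitness, so evolution cannot reward a configuration that only works on one chip at one temperature. All names and numbers are hypothetical.

```python
import random

random.seed(0)

# Hypothetical operating corners: (temperature_c, supply_v)
CONDITIONS = [(0, 4.75), (25, 5.0), (70, 5.25)]

def behavioural_score(candidate, temp_c, supply_v):
    """Stand-in for measuring the evolved circuit on real hardware."""
    base = sum(candidate) / len(candidate)                 # fake "correctness"
    drift = 0.001 * abs(temp_c - 25) + 0.05 * abs(supply_v - 5.0)
    return base - drift

def robust_fitness(candidate):
    # Worst case across all corners: a candidate that only performs well
    # at 25 C / 5.0 V scores poorly, which is the whole point.
    return min(behavioural_score(candidate, t, v) for t, v in CONDITIONS)

population = [[random.random() for _ in range(16)] for _ in range(50)]
best = max(population, key=robust_fitness)
print(round(robust_fitness(best), 3))
```

A real setup would measure across physically distinct chips as well as environmental corners, but the selection principle is the same: fitness is the minimum over conditions, not the score in one lucky environment.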
The fact that human engineers could not understand how this worked in 1996 made researchers deeply uncomfortable, and the same remains true today; the difference is that our tooling is now vastly better.
tremon | 3 months ago
Yes, we have better tooling today. If you use today's 14A machinery to produce a 1µ chip like the 80386, you will get amazingly high yields, and it will probably be accurate enough that even these analog circuits are reproducible. But the analog effects become more unpredictable as the node size decreases, and so does the variance in your analog circuits.
Also, contrary to what you said: the GA fitness process does not design for robustness and generality. It designs for the specific chip you're measuring, and you're measuring post-production. The fact that it works for reprogrammable FPGAs does not mean it translates well to mass production of integrated circuits. The reason we use digital circuitry instead of analog is not because we don't understand analog: it's because digital designs are much less sensitive to production variance.
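The variance-sensitivity point can be shown with a toy Monte Carlo (my own illustration, nothing from the thread): apply the same ±10% component variance to an "analog" stage, where output error scales directly with component error, and to a "digital" stage, where the output is only resolved against a threshold with wide margin.

```python
import random

random.seed(1)

def sample_gain():
    return random.uniform(0.9, 1.1)   # +/-10% production variance

def analog_ok(target=2.0, tol=0.05):
    # Analog: a 10% gain error is a 10% output error,
    # so only part of the variance range meets a 5% spec.
    out = target * sample_gain()
    return abs(out - target) / target <= tol

def digital_ok(vdd=1.0, threshold=0.5):
    # Digital: the same variance only shifts an internal level; the
    # result is still resolved against a threshold with huge margin.
    level = vdd * sample_gain()
    return level > threshold          # still reads as logic 1

N = 10_000
analog_yield = sum(analog_ok() for _ in range(N)) / N
digital_yield = sum(digital_ok() for _ in range(N)) / N
print(analog_yield, digital_yield)
```

Under these (made-up) numbers the digital stage yields 100% while the analog one loses roughly half its parts, which is the thread's point in miniature: digital designs buy insensitivity to production variance.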
junto | 3 months ago
We’re seeing this shift already in software testing around GenAI. Trying to write a test around non-deterministic outcomes comes with its own set of challenges, so we need to plan for deterministic variances, which sounds like an oxymoron but isn't in this context.