junto | 3 months ago
Before Thompson’s experiment, many researchers tried to evolve circuit behaviors on simulators. The problem was that simulated components are idealized, i.e. they ignore noise, parasitics, temperature drift, leakage paths, cross-talk, etc. Evolved circuits would therefore fail in the real world because the simulation behaved too cleanly.
Thompson instead let evolution operate on a real FPGA device itself, so evolution could take advantage of real-world physics. This was called “intrinsic evolution” (i.e., evolution in the real substrate).
The task was to evolve a circuit that could distinguish between a 1 kHz and a 10 kHz square-wave input, outputting high for one and low for the other.
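To make the setup concrete, here is a minimal Python sketch of what such an intrinsic-evolution loop looks like. The hardware hooks (`program_fpga`, `sample_output`), the genome size, and the fitness measure are hypothetical stand-ins for illustration, not Thompson's actual parameters.

```python
# Minimal sketch of "intrinsic evolution": a plain genetic algorithm whose
# fitness evaluation happens on a physical chip rather than in a simulator.
import random

POP_SIZE = 50
GENOME_BITS = 1800        # bits configuring the evolvable patch of the FPGA (assumed)
MUTATION_RATE = 0.005

def program_fpga(genome):
    # Placeholder: load the candidate configuration onto the real chip.
    pass

def sample_output(stimulus_khz):
    # Placeholder: drive a square wave at stimulus_khz into the circuit and
    # return the averaged output level. Random stand-in so the sketch runs.
    return random.random()

def fitness(genome):
    program_fpga(genome)
    hi = sample_output(stimulus_khz=1.0)     # want output high for 1 kHz
    lo = sample_output(stimulus_khz=10.0)    # want output low for 10 kHz
    return hi - lo                           # reward separating the two tones

def mutate(genome):
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

def crossover(a, b):
    cut = random.randrange(GENOME_BITS)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
              for _ in range(POP_SIZE)]

for generation in range(5000):
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[: POP_SIZE // 5]           # keep the best fifth
    population = elite + [
        mutate(crossover(random.choice(elite), random.choice(elite)))
        for _ in range(POP_SIZE - len(elite))
    ]
```

Because the fitness is measured on the die itself, anything the silicon does, including its analogue quirks, is fair game for selection.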
The final evolved solution:
- Used fewer than 40 logic cells
- Had no recognisable structure, no pattern resembling filters or counters
- Worked only on that exact FPGA and that exact silicon patch.
Most astonishingly:
The circuit depended critically on five logic elements that were not logically connected to the main path.
Removing them should not have affected a digital design (they were not wired into the output path), yet in practice the circuit stopped functioning when they were removed.
Thompson determined via experiments that evolution had exploited:
- Parasitic capacitive coupling
- Propagation delay differences
- Analogue behaviours of the silicon substrate
- Electromagnetic interference from neighbouring cells
In short: the evolved solution used the FPGA as an analogue medium, even though engineers normally treat it as a clean digital one.
Evolution had tuned the circuit to the physical quirks of the specific chip. It demonstrated that hardware evolution could produce solutions that humans would never invent.
s4mbh4|3 months ago
Answering another commenter's question: yes, the final result was dependent on temperature. The author did try it over different temperatures, and it was only able to operate in the temperature region it was trained at.
Fig. 8 goes into the details.
rcxdude|3 months ago
junto|3 months ago
Now that we have vastly faster compute, open FPGA bitstream access, on-chip monitoring, cheap and dense temperature/voltage sensing, and reinforcement-learning + evolution hybrids, it becomes possible to select explicitly for robustness and generality, not just for functional correctness.
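One way to do that, sketched below, is to score each candidate across a set of operating conditions and take the worst case. The condition list and the `measure` hook are assumptions for illustration, not an existing tool.

```python
# Hypothetical sketch of selecting for robustness rather than single-point
# correctness: each candidate is scored under several operating conditions
# (ideally on several physical chips), and its fitness is the worst case,
# so evolution cannot win by exploiting one corner of one specific die.
CONDITIONS = [
    {"device": 0, "temp_c": 10, "vcc": 4.75},
    {"device": 0, "temp_c": 40, "vcc": 5.25},
    {"device": 1, "temp_c": 25, "vcc": 5.00},   # a second physical board
]

def robust_fitness(genome, measure):
    # measure(genome, condition) -> raw score on real hardware; assumed hook.
    return min(measure(genome, cond) for cond in CONDITIONS)
```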
The fact that human engineers could not understand how this worked in 1996 made researchers incredibly uncomfortable; the same remains true today, but we now have vastly better tooling than back then.
paulgerhardt|3 months ago
karolinepauls|3 months ago
mmastrac|3 months ago
You may need to train on a smaller number of FPGAs and gradually increase the set; a rough sketch of that schedule follows below. Genetic algorithms can be finicky to get right, and you might find that adding more devices massively increases the iteration count.
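A rough sketch of that curriculum, where the pass threshold and growth rule are arbitrary assumptions for illustration:

```python
# Start evolving against a single board and only widen the evaluation set
# once the population reliably passes the boards currently in it.
def active_devices(all_devices, best_score, n_active, pass_threshold=0.95):
    # Grow the evaluation set by one board whenever the best candidate
    # already generalises to everything currently in the set.
    if best_score >= pass_threshold and n_active < len(all_devices):
        n_active += 1
    return all_devices[:n_active], n_active
```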