
junto | 3 months ago

This reminds me of Adrian Thompson’s (University of Sussex) 1996 paper, “An evolved circuit, intrinsic in silicon, entwined with physics,” ICES 1996 / LNCS 1259 (published 1997), which was extended in his later thesis, “Hardware Evolution: Automatic Design of Electronic Circuits in Reconfigurable Hardware by Artificial Evolution” (Springer, 1998).

Before Thompson’s experiment, many researchers tried to evolve circuit behaviors on simulators. The problem was that simulated components are idealized, i.e. they ignore noise, parasitics, temperature drift, leakage paths, cross-talk, etc. Evolved circuits would therefore fail in the real world because the simulation behaved too cleanly.

Thompson instead let evolution operate on a real FPGA device itself, so evolution could take advantage of real-world physics. This was called “intrinsic evolution” (i.e., evolution in the real substrate).

The task was to evolve a circuit that can distinguish between a 1 kHz and 10 kHz square-wave input and output high for one, low for the other.

The final evolved solution:

- Used fewer than 40 logic cells

- Had no recognisable structure, no pattern resembling filters or counters

- Worked only on that exact FPGA and that exact silicon patch.

Most astonishingly:

The circuit depended critically on five logic elements that were not logically connected to the main signal path.

Removing them should not have affected a digital design:

- they were not wired to the output

- yet in practice the circuit stopped functioning when they were removed.

Thompson determined via experiments that evolution had exploited:

- Parasitic capacitive coupling

- Propagation delay differences

- Analogue behaviours of the silicon substrate

- Electromagnetic interference from neighbouring cells

In short: the evolved solution used the FPGA as an analog medium, even though engineers normally treat it as a clean digital one.

Evolution had tuned the circuit to the physical quirks of the specific chip. It demonstrated that hardware evolution could produce solutions that humans would never invent.
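The intrinsic-evolution loop described above can be sketched roughly like this. Everything here is a hypothetical illustration: the genome size, population size, and selection scheme are made up, and the fitness function is mocked, since on real hardware `evaluate` would load the genome onto the FPGA as a bitstream, drive 1 kHz and 10 kHz square waves into it, and measure how well the physical output distinguishes the two tones.

```python
import random

random.seed(0)  # reproducible runs

GENOME_BITS = 64     # stand-in for the real FPGA configuration bitstream
POP_SIZE = 20
GENERATIONS = 50

def evaluate(genome):
    """Mock fitness function.

    On the real chip this step would load the genome onto the FPGA,
    feed in 1 kHz and 10 kHz square waves, integrate the output for
    each tone, and score how far apart the two integrated outputs
    are. Here we just reward matching an arbitrary bit pattern so
    the loop runs end to end without hardware.
    """
    target = [i % 2 for i in range(GENOME_BITS)]
    return sum(g == t for g, t in zip(genome, target)) / GENOME_BITS

def mutate(genome, rate=0.02):
    """Flip each bit with a small probability."""
    return [1 - b if random.random() < rate else b for b in genome]

def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        ranked = sorted(pop, key=evaluate, reverse=True)
        elite = ranked[:POP_SIZE // 4]  # elitist selection: keep best quarter
        pop = elite + [mutate(random.choice(elite))
                       for _ in range(POP_SIZE - len(elite))]
    return max(pop, key=evaluate)

best = evolve()
print(f"best fitness: {evaluate(best):.2f}")
```

The key point the mock hides is that nothing in the loop knows or cares whether the genome implements a "sensible" digital circuit; selection only sees measured behaviour, which is exactly why evolution was free to exploit analogue quirks of the silicon.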


s4mbh4|3 months ago

Said paper: https://gwern.net/doc/ai/1997-thompson.pdf

Answering another commenter's question: yes, the final result was temperature-dependent. The author did try using it at different temperatures; it was only able to operate in the temperature range it was trained at.

Fig. 8 goes into detail.

rcxdude|3 months ago

Though the unreplicable nature of it certainly limited its usefulness. I'd also suspect it would be quite sensitive to temperature.

junto|3 months ago

I’d argue that this was a limitation of the GA fitness function, not of the concept.

Now that we have vastly faster compute, open FPGA bitstream access, on-chip monitoring, plus cheap and dense temperature/voltage sensing, reinforcement learning + evolution hybrids, it becomes possible to select explicitly for robustness and generality, not just for functional correctness.

The fact that human engineers could not understand how this worked in 1996 made researchers incredibly uncomfortable, and the same remains true today, but now we have vastly better tooling than back then.

paulgerhardt|3 months ago

That unreplicability between chips is actually a very, very desirable property when fingerprinting chips (sometimes known as ChipDNA) to implement unique keys for each chip. You use precisely this property (plus a lot of magic to control for temperature as you point out) to give each chip its own physically unclonable key. This has wonderfully interesting properties.

karolinepauls|3 months ago

I wonder what would happen if someone evolved a circuit on a large number of FPGAs from different batches. Each of the FPGAs would receive the same input in each iteration, but the output function would be biased to expose the worst-behaving units (maybe the bias should be raised in later iterations, when most units behave well).
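That aggregation idea could be sketched as follows. This is purely a hypothetical illustration of the suggestion above (the function name, the linear ramp schedule, and the mean/worst blend are all invented, not from Thompson's work): score each candidate circuit on every device, then blend the mean score with the worst device's score, shifting weight toward the worst case as iterations progress.

```python
def aggregate_fitness(per_device_scores, iteration, total_iterations):
    """Blend mean fitness with worst-case fitness across FPGAs.

    Early on, the mean dominates so evolution can make initial
    progress; later, the weight shifts toward the worst-behaving
    device, so the final circuit must work on every chip rather
    than exploiting the quirks of one. Hypothetical sketch.
    """
    worst = min(per_device_scores)
    mean = sum(per_device_scores) / len(per_device_scores)
    # bias ramps linearly from 0 (pure mean) to 1 (pure worst-case)
    bias = iteration / total_iterations
    return (1 - bias) * mean + bias * worst
```

For example, with device scores `[0.9, 0.5, 0.8]`, the aggregate starts at the mean early on and converges to 0.5 (the worst device) by the final iteration, so a genome that fails on one chip cannot win late in the run.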

mmastrac|3 months ago

Either it would generate a more robust (and likely more recognizable) solution, or it would fail to converge.

You may need to train on a smaller number of FPGAs and gradually increase the set. Genetic algorithms are finicky to get right, and you might find that more devices massively increase the iteration count.