top | item 46856209


TimByte | 27 days ago

From a debugging point of view, the author's conclusion was still completely reasonable given the evidence they had



constantcrying | 27 days ago

No it wasn't. A hardware defect disastrous enough to corrupt floating-point computation on the neural engine, yet so minor that it does not affect any other software on the device using that hardware, is exceedingly improbable.

The conclusion that it was not the fault of the developer was correct, but assuming anything other than a problem somewhere in the software stack is unreasonable.

Dylan16807 | 27 days ago

> yet so minor that it does not affect any of the software on the device utilizing that hardware

You're being unfair here. The showpiece software that uses that hardware wouldn't install, and almost all other software ignores it.

callmeal | 27 days ago

> The conclusion that it was not the fault of the developer was correct, but assuming anything other than a problem somewhere in the software stack is unreasonable.

Aah, the old "you're holding it wrong" defense.

ACCount37 | 27 days ago

Nah.

All neural accelerator hardware models and all neural accelerator software stacks output slightly different results. That is a truth of the world.

The same is true for GPUs and 3d rendering stacks too.

We don't usually notice that, because the tasks themselves tolerate those minor errors. You can't easily tell the difference between an LLM that had 0.00001% of its least significant bits perturbed one way and one that had them perturbed the other.

But you could absolutely construct a degenerate edge case that causes those tiny perturbations to fuck with everything fiercely. And very rarely, this kind of thing might happen naturally.
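One way such a degenerate case can arise: a decision made on a near-tie, where a one-ulp difference between stacks flips the result. A hedged sketch with hypothetical, hand-picked logit values:

```python
import math

def argmax(xs):
    """Index of the largest element (first one wins on exact ties)."""
    return max(range(len(xs)), key=xs.__getitem__)

# Two classes in a near-tie: class 0 wins by exactly one ulp.
logits = [1.0, math.nextafter(1.0, 0.0)]
print(argmax(logits))  # 0

# Nudge the loser up by two ulps -- far smaller than any error you'd
# notice in normal use -- and the decision flips.
nudged = math.nextafter(math.nextafter(logits[1], 2.0), 2.0)
print(argmax([logits[0], nudged]))  # 1
```

Real inputs almost never land this close to the boundary, which is why the divergence between stacks stays invisible most of the time.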