Incorrect Pytorch gradients with Apple MPS backend...
Yep this kind of thing can happen. I found and reported incorrect gradients for Apple's Metal-backed tensorflow conv2d in 2021 [1].
(Pretty sure I've seen incorrect gradients with another Pytorch backend, but that was a few years ago and I don't seem to have raised an issue to refer to... )
One might think this class of errors would be caught by a test suite. Autodiff can be tested quite comprehensively against numerical differentiation [2]. (Although this example is from a much simpler lib than Pytorch, so I could be missing something.)
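For illustration, a minimal sketch of that kind of check in plain numpy: compare a hand-derived gradient against central finite differences for a toy scalar function (not a real Pytorch op, just the idea):

```python
import numpy as np

def f(x):
    # toy scalar "loss": sum of squared tanh activations
    return np.sum(np.tanh(x) ** 2)

def analytic_grad(x):
    # hand-derived gradient: d/dx tanh(x)^2 = 2*tanh(x)*(1 - tanh(x)^2)
    t = np.tanh(x)
    return 2.0 * t * (1.0 - t ** 2)

def numerical_grad(f, x, eps=1e-6):
    # central finite differences, one coordinate at a time
    g = np.zeros_like(x)
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp.flat[i] += eps
        xm.flat[i] -= eps
        g.flat[i] = (f(xp) - f(xm)) / (2 * eps)
    return g

x = np.random.default_rng(0).standard_normal(5)
assert np.allclose(analytic_grad(x), numerical_grad(f, x), atol=1e-5)
```

Pytorch itself ships a version of this idea as `torch.autograd.gradcheck`, which is what you'd reach for in practice.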
Yeah, luckily, you can unit test these and fix them. They are not concurrency bugs (again, luckily).
BTW, numeric differentiation can only be tested in a very limited way (due to the algorithmic cost once you're working with big matrices). It is much easier / more effective to test against multiple implementations.
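One way to picture the cross-implementation approach: compute the same quantity with two algorithms that share no code path, and assert they agree. Here a two-pass variance vs Welford's single-pass algorithm (a toy stand-in for, e.g., two backends of the same op):

```python
import numpy as np

def var_naive(xs):
    # two-pass textbook (population) variance
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def var_welford(xs):
    # Welford's single-pass algorithm: a completely different code path
    n, mean, m2 = 0, 0.0, 0.0
    for x in xs:
        n += 1
        d = x - mean
        mean += d / n
        m2 += d * (x - mean)
    return m2 / n

xs = np.random.default_rng(1).standard_normal(1000).tolist()
assert abs(var_naive(xs) - var_welford(xs)) < 1e-9
```

Disagreement beyond float tolerance flags a bug in one of the two, without needing any finite-difference machinery.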
Not that I understand much of what they say, but it appears there are a lot of correctness bugs in pytorch that are flying under the radar, probably having a measurable impact on model quality.
It would be interesting to see model weights comparison of the same model trained with the two to see if they exhibit meaningfully different behavior.
When we update Torch versions, we're required to run a test where the only change is the library change and compare the outputs. We saw a measurable improvement in accuracy by upgrading from torch 2.4.x to 2.7.x.
> Not that I understand much of what they say, but it appears there are a lot of correctness bugs in pytorch that are flying under the radar, probably having a measurable impact on model quality.
Do you have any links to public discussion about this? If it were true, it could mean a lot of research is invalidated, which would obviously make huge news.
Also feels like something that would be relatively easy to make reproducible test cases for, so it should be easy to prove whether that's true or not.
And finally, if something is easy to validate and would make huge news, I feel like someone would already have attempted to prove this, and if it were true, would have published something a long time ago.
That's why projects like nanochat are really cool: you can get around the limitations of such gigantic libraries while at the same time understanding the underlying architecture.
This is a great write-up and I'd love to see more like it. Debugging this sort of thing in the megatron->pytorch->CUDA stack is what my team spends more than half of its time on as an ML research team.
Only slightly related, but how common are bugs in GPUs and/or CUDA? I'm currently on day 5 of trying to debug why the GPT-OSS implementation (not using PyTorch) I've made from scratch isn't working correctly. While I have it somewhat working with some naive and slow methods, I'm now doing an implementation using the tensor cores and have been stuck for 2-3 days on a small numerical difference I can't explain.
Every day I'm getting closer to believing this is some sort of hardware bug in Blackwell or in CUDA itself, but as we know, the bug is (almost) never in the compiler or in the hardware. Until it is...
They exist, but they're not that common (give or take the "expected" numerical deviations based on the order of summation and whatnot, which can both be nontrivial and propagate error further).
Something I recommend doing, the best time being the start of the project and the second best time being now, is adding numerical gradient checking tests to all operations. You will make mistakes in your kernels from time to time, and it's valuable to know at a glance where those mistakes are.
Mind you, it's possible to write both the forward pass and the backward pass in a way that's wrong but compatible. An additional layer of checks I like to add is a dead-simple implementation of all algorithms -- no vectorization, no fancy blocking or re-orderings, nothing. Compare results to the simple implementation.
It sounds like a lot of work, but writing an optimized kernel is much slower than the numerical gradient checking and the simple kernel, and given how in numerical code it's basically impossible to identify the source of a bug without doing the equivalent of all of those checks, it only takes one bug in the whole project for the effort to pay off.
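A minimal sketch of that "dead-simple implementation" check: a naive triple-loop matmul as the reference, with numpy's own matmul standing in for the optimized kernel you'd actually be testing:

```python
import numpy as np

def matmul_naive(a, b):
    # dead-simple O(n^3) reference: no vectorization, no blocking, no re-ordering
    n, k = a.shape
    k2, m = b.shape
    assert k == k2
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            s = 0.0
            for t in range(k):
                s += a[i, t] * b[t, j]
            out[i, j] = s
    return out

rng = np.random.default_rng(0)
a, b = rng.standard_normal((4, 5)), rng.standard_normal((5, 3))
# here numpy's `@` plays the role of your optimized kernel
assert np.allclose(a @ b, matmul_naive(a, b))
```

The reference is slow and boring on purpose: there is almost nothing in it that can be subtly wrong, so a mismatch points at the fancy version.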
Consumer-visible hardware bugs are extremely uncommon nowadays. There's approximately 10x as many people working in design verification as actual hardware design.
I say "consumer-visible" because the bugs still exist and people who can catch them early get promoted quickly and paid a lot. It's very exciting work if you can get it, since you really have to understand the full GPU to break it.
Great work hunting the bug down the stack. The write-up is top notch. I wish I had documented some of the nastiest bugs I found in such detail.
Funnily, only a few days ago I was thinking about just how far the field has come since 2014 or so when you'd build a computational graph, initialize weights manually and so on, versus now, where you just have to use a library like Ultralytics or HuggingFace most of the time. Then I thought about just how many deep, undetected bugs there would be in this mountain of abstraction. Bugs that make the computation invalid.
I also had a very similar bug a while ago: broken gradients due to non-contiguous data for masked_select: https://github.com/pytorch/pytorch/issues/99638
In my case, it was easier to identify: I had another implementation of my loss function before that did not use masked_select. But then I thought I could be clever and use masked_select to take out the non-masked frames and calculate the loss only on those. But it wasn't working. Also, it only happened for some models, not for all. It turned out it was always happening when the data coming out of the model was non-contiguous.
I think bugs with non-contiguous data are not so uncommon. I wonder how many of them we still have.
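For anyone unfamiliar with what "non-contiguous" means here: the concept is the same in numpy, where a transpose is a strided view over the same memory rather than a fresh buffer (Pytorch's `.contiguous()` plays roughly the role of `np.ascontiguousarray` below):

```python
import numpy as np

a = np.arange(6, dtype=np.float32).reshape(2, 3)
b = a.T                      # transposed view: same memory, different strides
assert a.flags['C_CONTIGUOUS']
assert not b.flags['C_CONTIGUOUS']

c = np.ascontiguousarray(b)  # explicit copy into a contiguous layout
assert c.flags['C_CONTIGUOUS'] and (c == b).all()
```

Kernels that silently assume contiguous input but get handed a view like `b` are exactly the class of bug being discussed.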
Apple used to contribute to the PyTorch MPS backend, but decided to create their own framework (MLX) instead, fragmenting the ecosystem for very little gain. (MLX is basically PyTorch, but invented-at-apple)
Meta, the creator and main contributor to PyTorch, does not use Macs for their day-to-day ML work (they focus on GPUs and CPUs), so the MPS backend is sadly incomplete and has errors like the one you see here.
MLX and MPS are 2 completely different teams within Apple. It's more that the MPS team doesn't have control or visibility into the PyTorch roadmap and can only contribute so much from their side.
none of this is correct (except the part where FB doesn't use apple in prod).
EDIT: for the downvoters - i'll repeat, this is not a correct assessment of the relationship between Apple and PyTorch. but you can keep downvoting if you want <shrug>
Sounds like Placeholder should somehow be split into InputPlaceholder and OutputPlaceholder, based on the usage.
Even identical classes could help future folks know copying back is platform specific: “hm, we wrote to an OutputPlaceholder but didn’t read back from it, that seems wrong”.
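A toy sketch of that split (these class names are hypothetical, not anything in Pytorch; numpy arrays stand in for tensors). The point is that the output variant owns the copy-back, so no call site can forget it:

```python
import numpy as np

class InputPlaceholder:
    """Read-only: the kernel consumes a private staging copy."""
    def __init__(self, tensor):
        self.staging = tensor.copy()  # stand-in for a contiguous staging buffer

class OutputPlaceholder(InputPlaceholder):
    """Writable: remembers the target and performs the copy-back itself."""
    def __init__(self, tensor):
        super().__init__(tensor)
        self.target = tensor

    def sync_back(self):
        # the "copy back if we staged a temporary" logic lives here, in one place
        self.target[...] = self.staging

out = np.zeros((2, 2))
ph = OutputPlaceholder(out)
ph.staging += 1.0   # the "kernel" writes into the staging buffer
ph.sync_back()      # the abstraction, not the caller, copies back
assert (out == 1.0).all()
```

Even without the copy-back being automatic, merely having two names makes the bug class greppable.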
Reminds me of the largest AJAX app I worked on, back when jquery was still hot and IE6 still existed as a problem.
The landing page in our app used jqueryUI’s drag and drop support, back around the time they declared bankruptcy on the confusing buggy code and wouldn’t even accept bug fixes because they were replacing it component by component (which was taking almost 3x as long as predicted). We had columns you could drag items between but they had a max height and scroll bars and it turned out jqueryUI would let you drag items into different rows if the overflow area for adjacent drag targets overlapped your row.
The person who found it couldn’t fix it. The other fixer couldn’t fix it. I diagnosed it but the spaghetti code was a recursive mess and I could not find a spot where I could fix it. Especially given I couldn’t send in a patch to them.
So I spent half of my free time on the last day of every (2-week) sprint for almost six months before I finally found a small function I could monkey patch to wrap it in a short-circuit check for the clipping region. I spent maybe 20-30 hours on this, a lot of it just getting back to the same situation to debug. But it felt like it took forever to fix it.
The short circuit also made drag and drop faster; the slowness had been just on the edge of distracting, particularly on a crowded page.
I remember many similar cycles of having different browsers open side-by-side, and trying to pinpoint (without the developer tools we know and love today) the exact reason why one border was one pixel in one browser, and two pixels in the other, throwing the whole layout off.
Also remembering when Firebug for Firefox appeared and made so many things so much easier. Suddenly things that had taken days took hours; it made such a difference once you had some introspection tools.
Kudos to Elana for a) such a thorough deep dive and b) a great write-up of it. I understand very little about ML libraries, but was able to follow this easily :)
Great write-up, but I admit that I found the interweaving of human and AI-written content/headlines/summaries pretty distracting. I kept on wanting to scroll past, but had to keep on backtracking to find the human thread again.
I think if you want to give your reader a quick intro to, e.g., what is the Adam optimizer, a simple link to Wikipedia is fine. No need to copy-paste an AI tutorial on Adam into the blog post.
To be fair, you can easily click to hide those expanded sections. I found it a neat compromise between linking to (usually) obtuse Wikipedia articles, which aren't written for laypersons, and forcing me to read through stuff I already know about. I just hid the sections I already understood but found value in the others.
Just read the article and it instantly brought back memories of when I spent days trying to fix a broken loss in a PyTorch model. Turned out I had passed the wrong optimizer parameters. I ended up digging all the way from the model to the CUDA kernel. Debugging took longer than training.
Is this why I cannot seem to fine-tune YOLO models on an Apple M4? The loss hits NaN after a few batches. The same code is fine on a Windows PC and on Google Colab CPU and GPU...
haha oops yeah the other comment is correct- that was just a mistake
I originally wrote "vanilla" there but didn't want to repeat that word twice in a row so swapped it for "standard" without realizing it now looked like the SGD acronym
just fixed that to avoid confusion- thanks for pointing it out!
I think z-order is used to increase the speed of loading textures from RAM. But this is not an issue in ML. You usually have all your model weights loaded directly into GPU memory, and you do not need caching for your inputs. At the same time, the entire ML stack is already heavily optimized for other memory layouts.
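For readers who haven't met z-order (Morton order): it linearizes a 2-D grid by interleaving the bits of the coordinates, so spatially nearby texels land near each other in memory. A minimal sketch (16-bit coordinates assumed, for simplicity):

```python
def morton_encode(x, y):
    # interleave the bits of x and y: x bits at even positions, y at odd
    code = 0
    for i in range(16):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

# the four texels of a 2x2 block are consecutive in the 1-D ordering
assert morton_encode(0, 0) == 0
assert morton_encode(1, 0) == 1
assert morton_encode(0, 1) == 2
assert morton_encode(1, 1) == 3
```

That 2x2 locality is what helps texture caches; row-major ML tensors, as the comment says, don't benefit the same way.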
If I understand correctly, the root cause of the bug was improper use of object-oriented programming. A `Placeholder` object behaves differently depending on how it was created, and requires the user to have this awareness. The check `if is_contiguous` should only ever exist inside the code of the `Placeholder` class.
This is a minor quibble but I don't really like the author calling Placeholder a leaky abstraction. It's just straight up an incomplete abstraction that only handles inputs but not outputs. As the author says, Placeholder should know about the difference and do the copy-back itself.
Another reason people use Nvidia. You know that Nvidia is the most used backend and the most likely to have this kind of bug found and fixed before you encounter it.
[1] https://github.com/apple/tensorflow_macos/issues/230
[2] https://github.com/sradc/SmallPebble/blob/2cd915c4ba72bf2d92...
dapperdrake|4 months ago
TLDR: Python gevent compiled with -Ofast messes up x87 floating point unit state. Bad for PyTorch.
doctorpangloss|4 months ago
> The exact same float32 code updates weights on CPU but fails on MPS
It's MPS... Exactly zero research is being impacted. Why doesn't the $3.9T corporation contribute more to torch?
QuadmasterXLII|4 months ago
E(loss).cuda() <= E(loss.cuda())
Rileyen|4 months ago
What’s the trickiest bug you’ve ever run into?