ihattendorf | 2 years ago
But if it's possible for someone to muck with the file contents and lie about the number of fields which would cause a bounds error, that's exactly what bounds checking is supposed to avoid. So either bounds checks will be removed, or they're necessary.
bayindirh | 2 years ago
> But if it's possible for someone to muck with the file contents and lie about the number of fields.
You can't. You can say you'll have 7, but provide 8. But as soon as I encounter the 8th one during parsing, everything aborts. Same for saying 7 and providing 6: if the file ends after parsing the 6th one, I report an error in your file and abort. Everything has to check out and be sane before the run can start. Otherwise you'll get file format errors all day.
The rest of the pipeline is completely unattended. It's bona fide number crunching (material simulation, to be exact), so speed is of the essence. We're talking about >1.5 million iterations per second per core.
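The strict-count parsing described above can be sketched in Rust. The names here (`parse_record`, `FormatError`) are illustrative, not from the original pipeline; the point is that declaring 7 fields but providing 8 (or 6) aborts before any number crunching starts:

```rust
// Hypothetical strict parser: the declared field count must match exactly.
#[derive(Debug, PartialEq)]
enum FormatError {
    TooManyFields { expected: usize },
    TooFewFields { expected: usize, got: usize },
}

/// Parse one whitespace-separated record, failing unless the number of
/// fields matches `declared` exactly.
fn parse_record(line: &str, declared: usize) -> Result<Vec<f64>, FormatError> {
    let mut fields = Vec::with_capacity(declared);
    for tok in line.split_whitespace() {
        if fields.len() == declared {
            // Said 7 but provided 8: abort on the extra field.
            return Err(FormatError::TooManyFields { expected: declared });
        }
        fields.push(tok.parse().unwrap_or(0.0));
    }
    if fields.len() < declared {
        // Said 7 but provided 6: abort when the line ends early.
        return Err(FormatError::TooFewFields {
            expected: declared,
            got: fields.len(),
        });
    }
    Ok(fields)
}

fn main() {
    assert!(parse_record("1 2 3 4 5 6 7", 7).is_ok());
    assert_eq!(
        parse_record("1 2 3 4 5 6 7 8", 7),
        Err(FormatError::TooManyFields { expected: 7 })
    );
    assert_eq!(
        parse_record("1 2 3 4 5 6", 7),
        Err(FormatError::TooFewFields { expected: 7, got: 6 })
    );
    println!("ok");
}
```

Because every record is validated to have exactly the declared length up front, later stages can index into it without a lying field count ever producing an out-of-range access.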
aw1621107 | 2 years ago
Strictly speaking I don't think the distance between creation and consumption matters. It all comes down to what the compiler is able to prove at the site where the bounds check may go.
For example, if you're iterating over a Vec using `for i in 0..vec.len() { ... }` then the amount of code between the creation and consumption of that Vec doesn't matter, as the compiler has all the information it needs to eliminate the bounds check right there.
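A minimal sketch of that pattern (function names here are illustrative). In `sum_indexed`, the loop bound is `vec.len()` itself, so the compiler can prove `i < vec.len()` at the indexing site and drop the check; whether it actually does still depends on the optimization level, and iterating with `vec.iter()` sidesteps the question entirely:

```rust
// Indexed loop: the bound `i < vec.len()` is provable at `vec[i]`,
// so the optimizer can eliminate the bounds check, no matter how far
// away the Vec was originally created.
fn sum_indexed(vec: &[f64]) -> f64 {
    let mut total = 0.0;
    for i in 0..vec.len() {
        total += vec[i];
    }
    total
}

// Iterator version: no index, so no bounds check to eliminate at all.
fn sum_iter(vec: &[f64]) -> f64 {
    vec.iter().sum()
}

fn main() {
    let data = vec![1.0, 2.0, 3.0];
    assert_eq!(sum_indexed(&data), 6.0);
    assert_eq!(sum_iter(&data), 6.0);
    println!("ok");
}
```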