top | item 39539728

ihattendorf | 2 years ago

That sounds trivial enough that the compiler would remove the bounds checks, assuming I'm understanding correctly that you have a condition that validates the number of fields at some point before an invalid access would occur.

But if it's possible for someone to muck with the file contents and lie about the number of fields which would cause a bounds error, that's exactly what bounds checking is supposed to avoid. So either bounds checks will be removed, or they're necessary.
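The point above can be sketched in Rust. This is a hypothetical example (the function name and shape are mine, not from the thread): a single up-front validation of the field count gives the optimizer the fact it needs, so the per-access bounds checks inside the loop are provably redundant, while an invalid count is caught at the validation site instead of causing an out-of-bounds access later.

```rust
// Hypothetical sketch: validate the field count once, then index freely.
fn sum_fields(fields: &[f64], expected: usize) -> Option<f64> {
    // A lying file (wrong number of fields) is rejected here, up front.
    if fields.len() != expected {
        return None;
    }
    // After the check above, fields.len() == expected, so every index in
    // 0..expected is provably in bounds and the checks can be optimized out.
    let mut total = 0.0;
    for i in 0..expected {
        total += fields[i];
    }
    Some(total)
}

fn main() {
    assert_eq!(sum_fields(&[1.0, 2.0, 3.0], 3), Some(6.0));
    assert_eq!(sum_fields(&[1.0, 2.0], 3), None); // wrong count: rejected
    println!("ok");
}
```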

bayindirh | 2 years ago

I think it won't be able to, because the creation of these data structures and their consumption are 3 files apart.

> But if it's possible for someone to muck with the file contents and lie about the number of fields.

You can't. You can say you'll have 7 but provide 8; as soon as I encounter the 8th one during parsing, everything aborts. Same for saying 7 and providing 6: if the file ends after parsing the 6th, I report an error in the file and abort. Everything has to check out and be sane before anything can start. Otherwise you'll get file format errors all day.
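A minimal sketch of the "say 7, provide 8" check described above (the record format and function name here are my invention, purely for illustration): the declared count is read first, and parsing fails fast whenever the body has more or fewer values than declared.

```rust
// Hypothetical record format: "<count> <value> <value> ...".
// Parsing aborts with an error if the declared count disagrees with
// the number of values actually present.
fn parse_record(line: &str) -> Result<Vec<f64>, String> {
    let mut parts = line.split_whitespace();
    let declared: usize = parts
        .next()
        .ok_or("empty record")?
        .parse()
        .map_err(|e| format!("bad count: {e}"))?;
    let values: Vec<f64> = parts
        .map(|p| p.parse().map_err(|e| format!("bad value: {e}")))
        .collect::<Result<_, _>>()?;
    if values.len() != declared {
        return Err(format!(
            "file format error: declared {declared} fields, found {}",
            values.len()
        ));
    }
    Ok(values)
}

fn main() {
    assert!(parse_record("3 1.0 2.0 3.0").is_ok());
    assert!(parse_record("3 1.0 2.0").is_err()); // says 3, provides 2
    assert!(parse_record("2 1.0 2.0 3.0").is_err()); // says 2, provides 3
    println!("ok");
}
```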

The rest of the pipeline is completely unattended. It's bona fide number crunching (material simulation, to be exact), so speed is of the essence. We're talking about >1.5 million iterations per second per core.

aw1621107 | 2 years ago

> I think it won't be able to because the creation of these data structures and consuming them is 3 files apart.

Strictly speaking, I don't think the distance between creation and consumption matters. It all comes down to what the compiler is able to prove at the site where the bounds check would go.

For example, if you're iterating over a Vec using `for i in 0..vec.len() { ... }` then the amount of code between the creation and consumption of that Vec doesn't matter, as the compiler has all the information it needs to eliminate the bounds check right there.
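For what it's worth, the indexed loop described above can be compared with the plain iterator form (function names here are mine, for illustration only): in the indexed version the check `i < vec.len()` makes each access provably in bounds, while the iterator version never indexes at all, so there is no bounds check to eliminate in the first place.

```rust
// Indexed form: the loop bound i < v.len() lets the compiler prove
// every v[i] access is in bounds and drop the runtime checks.
fn sum_indexed(v: &[f64]) -> f64 {
    let mut s = 0.0;
    for i in 0..v.len() {
        s += v[i];
    }
    s
}

// Iterator form: no indexing at all, hence no bounds checks to begin with.
fn sum_iter(v: &[f64]) -> f64 {
    v.iter().sum()
}

fn main() {
    let v = vec![1.0, 2.0, 3.0];
    assert_eq!(sum_indexed(&v), 6.0);
    assert_eq!(sum_indexed(&v), sum_iter(&v));
    println!("ok");
}
```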