top | item 13282841


andikleen2 | 9 years ago

This would only be useful for "good" page faults that fault something in, but not for "bad" ones (like a NULL pointer dereference). If a bad page fault were taken inside a transaction, it would allow the transaction to crash the program, which wouldn't be very atomic.

The transaction mechanism doesn't know in advance if it's a good or a bad page fault.

You would need to tell the operating system kernel that the page fault happened in a transaction, and let it ignore it if it was a bad page fault. That would be much more complicated than current TSX.

Also there are other cases where retries will not succeed; the page fault was just an example. Another common case is the dynamic linker when a library function is first executed.


loeg | 9 years ago

> If a bad page fault were taken inside a transaction, it would allow the transaction to crash the program, which wouldn't be very atomic.

It would allow bad page faults to crash the program, i.e., ordinary behavior. No? Why do programs need this protection for HTM transactions?

> You would need to tell the operating system kernel that the page fault happened in a transaction, and let it ignore it if it was a bad page fault.

It wouldn't ignore it. It would fault the thread and probably tear down the process, as usual. No?

> Also there are other cases where retries will not succeed; the page fault was just an example. Another common case is the dynamic linker when a library function is first executed.

That would be an abort due to excessive memory use?

Thanks! I'm not as familiar with this stuff as I would like to be.

bonzini | 9 years ago

A bad page fault might arise just from reading a partially-updated data structure. For example you could write two locations in one thread and read them in another. If the read side assumes that "location 1 nonzero" implies "location 2 nonzero", and then dereferences location 2, an inconsistent read would cause such a bad page fault. The only correct way to handle this is to abort the transaction.