MrResearcher | 6 months ago
Here's an excerpt from the close(2) syscall description:
RETURN VALUE
       close() returns zero on success.  On error, -1 is returned, and errno is set to indicate the error.

ERRORS
       EBADF  fd isn't a valid open file descriptor.

       EINTR  The close() call was interrupted by a signal; see signal(7).

       EIO    An I/O error occurred.

       ENOSPC, EDQUOT
              On NFS, these errors are not normally reported against the first write which exceeds the available storage space, but instead against a subsequent write(2), fsync(2), or close().

       See NOTES for a discussion of why close() should not be retried after an error.
It can obviously fail for a multitude of reasons.
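To make the excerpt concrete, here is a minimal sketch of checking close()'s return value (close_checked is a hypothetical helper name, not from the thread). Note that on Linux the descriptor is released even when close() fails, which is why the man page warns against retrying:

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Hypothetical helper: close fd exactly once and report any error via
 * the return value.  On Linux the descriptor is released even when
 * close() fails, so the call must not be retried. */
int close_checked(int fd) {
    if (close(fd) == -1)
        return errno;   /* e.g. EBADF, EINTR, EIO, ENOSPC, EDQUOT */
    return 0;
}
```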
AndyKelley | 6 months ago
MrResearcher | 6 months ago
And the moment you start flushing correctly, i.e. if (flush(...)) { abort(); }, close() becomes infallible from the program's point of view and can be safely invoked in destructors.
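A minimal C sketch of that pattern (the function names are mine, not from the thread): flush at every point where an I/O error can still be handled, and treat a flush failure as fatal, so the eventual close carries no buffered data:

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch: flush wherever an I/O error can still be handled, and treat
 * flush failure as fatal. */
void flush_or_die(FILE *f) {
    if (fflush(f) != 0) {   /* the write error surfaces HERE... */
        perror("fflush");
        abort();            /* ...so handle it (here: fatally) now */
    }
}

/* Write one record and flush immediately.  After this returns, no
 * user-space buffer holds the data, so the eventual fclose()/close()
 * in a destructor has nothing left to lose from the program's view.
 * (fflush() only hands data to the kernel; durability on disk would
 * additionally need fsync(2).) */
void write_record(FILE *f, const char *line) {
    fputs(line, f);
    fputc('\n', f);
    flush_or_die(f);
}
```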
File closure operations, on the other hand, do have legitimate reasons to fail. In one of my previous adventures, we asked the operator to put the archival tape back and then re-issued the close() syscall; the driver, running in kernel space, checked that the tape was inserted and passed control to the mechanical arm for further positioning of the tape. The program actually had to retry close() syscalls, and kept asking the operator to handle the tape (there were multiple scenarios for the operator on how to proceed).
jcalvinowens|6 months ago
How could one fix that, though? It seems pretty unavoidable to me, because write() is more or less asynchronous with respect to the actual disk I/O.
You could add a finalize() that is distinct from close(), but IMHO that's even more confusing.
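The usual mitigation, short of a new syscall, is to force write-back with fsync(2) before closing, so deferred errors like EIO/ENOSPC/EDQUOT surface while the caller can still react. A sketch (finish_file is my naming, not a proposed API):

```c
#include <fcntl.h>
#include <unistd.h>

/* Sketch: surface deferred write-back errors with fsync(2) before
 * close(2).  After a successful fsync, the following close() rarely
 * has anything new to report. */
int finish_file(int fd) {
    if (fsync(fd) == -1)
        return -1;   /* deferred write error caught before close */
    return close(fd);
}
```

This does not make close() infallible in general, but it moves the failures to a call site where retrying is actually well-defined.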