You all joke that this doesn’t happen in practice, but something like this literally just bit me and it took me a few too many minutes to figure out what was going on.
I use a bash script as my BROWSER which calls another bash script to launch or communicate with my browser that I run inside a container. The script that my BROWSER script calls has some debug output that it prints to stderr.
I use mutt as my email client and urlscan [0] to open URLs inside emails. Urlscan looks at my BROWSER environment variable and thus calls my script to open whatever URL I target. Some time recently, the urlscan author decided to improve the UX by hiding stderr so that it wouldn’t pollute the view, and so attempted to pipe it to `/dev/null`. I guess their original code to do this wasn’t quite correct and it ended up closing the child processes’ stderr.*
I generally use `set -e` (errexit) because I want my scripts to fail if any command fails (I consider that after an unhandled failure the script’s behavior is undefined, some other people disagree and say you should never use `set -e` outside of development, but I digress). My BROWSER scripts are no exception.
While my scripts handle non-zero returns for most things that can go wrong, I never considered that writing log messages to stdout or stderr might fail. But it did, which caused the script to die before it was able to launch my browser. For a few weeks I wasn’t able to use urlscan to open links. I was too lazy to figure out what was wrong, and when I did it took me a while because I looked into every possibility except this one.
Luckily this wasn’t a production app. But I know now it could just as easily happen in production, too.
I opened an issue[1] and it was fixed very quickly. I love open source!
*No disrespect to urlscan, it’s an awesome tool and bugs happen to all of us!
It sounds like our sensibilities are similar regarding CLI and tool usage. This is a side note, but as someone who used to use "Bash strict mode" in all my scripts, I'm now a bit bearish on `set -e`, mainly due to its subtle caveats. If you're interested, the link below has a nice (and long) list of potentially surprising errexit gotchas:
I’m disappointed. I expected some obscure edge case (like “Main is usually a function…” [1]) but instead it’s about scope handling, contract design, and responsibility shifting.
The “Hello world” method simply makes a call to a text-interface API. It uses a simple call to a simple interface that is expected to always be present. I don’t find any bug there. It won’t work if such an interface isn’t available, is blocked, or doesn’t exist. It won’t work on my coffee grinder or on my screwdriver. It won’t work on my Arduino either, because there is no text interface there.
Of course, one could argue that a user might expect you to handle that error. That’s all about contracts and expectations. How should I deal with it? Is the “Hello world” message so important that the most escalated failure scenario should be painted on the sky? I can imagine an awkward social game where we throw obscure challenges at each other and call them bugs.
It’s nitpicking to point out that even such simple code might fail, and I get it. It will also fail on OOM, on faulty hardware, or if the number of processes on the machine hits the limit. Maybe some joker replaced the bindings and it went straight to a 3D printer that’s out of material? _My expectations_ were higher, based on the title.
Now allow me to excuse myself, I need to write an e-mail to my keyboard manufacturer because it seems like it has a bug which prevents it from working when slightly covered in liquid coffee.
I also had higher expectations after reading the title and was disappointed when I realized it was about failure to handle all possible system call results. I thought it was gonna be a bug in the C standard library or something.
I still agree with the author though. This is a serious matter and it seems most of the time the vast amount of complexity that exists in seemingly simple functionality is ignored.
Hello world is not "simply" calling a text interface API. It is asking the operating system to write data somewhere. I/O is exactly where "simple" programs meet the real world where useful things happen and it's also where things often get ugly.
Here's all the stuff people need to think about in order to handle the many possible results of a single write system call on Linux:
/* Note: through libc, write() returns -1 on error and sets errno;
   it does not return negative errno values directly. Also, EAGAIN
   and EWOULDBLOCK are the same value on Linux, so they can't be
   separate case labels. */
ssize_t result = write(1, "Hello", sizeof("Hello") - 1);
if (result == -1) {
    switch (errno) {
    case EAGAIN: /* == EWOULDBLOCK on Linux */
        /* Occurs only if opened with O_NONBLOCK. */
        break;
    case EBADF:
        /* File descriptor is invalid or wasn't opened for writing. */
        break;
    case EDQUOT:
        /* User's disk quota reached. */
        break;
    case EFAULT:
        /* Buffer points outside accessible address space. */
        break;
    case EFBIG:
        /* Maximum file size reached. */
        break;
    case EINTR:
        /* Write interrupted by a signal before any byte was written. */
        break;
    case EINVAL:
        /* File descriptor unsuitable for writing. */
        break;
    case EIO:
        /* General output error. */
        break;
    case ENOSPC:
        /* No space available on device. */
        break;
    case EPERM:
        /* File seal prevented the file from being written. */
        break;
    case EPIPE:
        /* The pipe or socket being written to was closed
           (SIGPIPE is also raised unless blocked/ignored). */
        break;
    }
} else if ((size_t)result < sizeof("Hello") - 1) {
    /* Partial write: fewer bytes were written than requested. */
}
Some of these are unlikely. Some of these are irrelevant. Some of these are very important. Virtually all of them seem to be routinely ignored, especially in text APIs.
I found it interesting. If you generalize a bit, the question is "Will a naively written stdio program handle IO errors?".
The fact that for several popular languages the answer is "no" is disappointing.
It’s not about handling the error, it’s about propagating an unexpected error, because most errors are exactly that: unexpected.
Modern languages do this by default, using exceptions, or force you to check return values using Result<> or similar.
Even in C, compiled under a sufficiently strict linter, this would be flagged, because an intentionally ignored return value should be cast to (void).
In either case, I think the main takeaway from the article is that a language where even hello world has such pitfalls isn’t suitable, given the many better options today.
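As a sketch (not from the thread itself) of what that strict-lint style looks like in C, assuming a MISRA-style rule that no return value may be implicitly discarded:

```c
#include <stdio.h>

/* Under strict linting rules, discarding an error must be spelled
   out with an explicit cast to void: */
static void greet_ignoring_errors(void) {
    (void)printf("Hello, world!\n");   /* error explicitly discarded */
}

/* The same rule nudges you toward actually checking instead: */
static int greet_checked(void) {
    if (printf("Hello, world!\n") < 0)
        return 1;                      /* propagate the write failure */
    return 0;
}
```

The point of the (void) cast is not that it handles anything, but that ignoring the error becomes a visible, reviewable decision rather than a silent default.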
My initial take was the same as yours. However, I would be of the opinion that the program would definitely be better if it returned non-zero on failure, so the question for me is whether it rises to the level of "bug" or not. In retrospect I can't think of a case where I'd consider a program silently failing not to be a bug (unless it's specifically designed to fail silently), so I've come around to agreeing with the article.
stdio is a program's interface to its user, and the user should be informed about I/O failures. That said, hello world is usually demonstrated far away from real I/O concerns, which perhaps explains the negligence. But to argue that it's behaving correctly here is a stretch.
An enjoyable read for sure, but I think the question of whether this constitutes a bug is open to interpretation.
IMHO, it doesn't.
hello.c is written in a way that makes it very clear that the program doesn't care about error conditions in any way, shape, or form: the return value of printf is ignored, the output isn't flushed, the return value of flushing isn't checked, no one looks at errno; so anything that goes wrong will go unreported, unless it's something the OS can see (segfault, permissions, etc.).
If I expect a program to do something (e.g. handle I/O errors) that its code very clearly says it doesn't do, that's not the program's fault.
This is a strange definition of 'buggy' to me. Surely it shouldn't depend on anything to do with the source code, otherwise closed-source programs are all 'neither-buggy-nor-not-buggy' and that can't be the case...
So the question boils down to: Is hello world a program that is supposed to write hello world or is it a program that is supposed to (compile and) start? For me it's usually the latter.
To those perplexed by the behaviour of Java's Hello World, as Java is otherwise very careful with error handling, this is because System.out is a java.io.PrintStream, and that's its documented behaviour:[1]
> Unlike other output streams, a PrintStream never throws an IOException; instead, exceptional situations merely set an internal flag that can be tested via the checkError method.
So the correct Hello World would be:
System.out.println("Hello World!");
if (System.out.checkError()) throw new IOException("write to standard output failed");
(with main declared as throws IOException, since IOException is a checked exception)
While the behaviour of PrintStream cannot be changed (it goes back to Java 1.0, and I'm guessing that the intention was not to require handling exceptions when writing messages to the standard output), adding a method to obtain the underlying, unwrapped OutputStream might be an idea worth considering, as it would allow writing to the standard output just like to any file.
It's a behavioural side-effect of checked exceptions. Because IOException is a checked exception, throwing it for console output would cause a lot of pain for printf debugging.
Imo this is because the responsibility is not clearly defined and can be argued upon.
If my program writes to the standard output, but you choose to redirect the pipe to a different location, is it my program’s responsibility to check what happens to the bytes AFTER the pipe?
After all: my program did output everything as expected. The part which fucked up was not part of my program.
I can see why some projects decide to not handle this bug.
> but you choose to redirect the pipe to a different location
The output doesn't go into a pipe however, the output goes to /dev/full. Redirection happens before the process is started, so the program is, in fact, writing directly to a file descriptor that returns an error upon write.
In this scenario you didn’t write any bytes though. You made a call to write to standard out (your process’s file descriptor 1) and it didn’t succeed; you didn’t handle the possible error condition, you just silently ignored it.
I think this is pretty cut and dried - the failure is inside your process’s address space and the programmer error is that you haven’t handled a reported error.
>> what happens to the bytes AFTER the pipe?
There isn’t a pipe involved here; when your process was created, its stdout was connected to /dev/full, and then your program began executing.
I don’t personally agree with that judgement. While the failure condition is at the OS level, it’s still affecting the function of the program in an unexpected way.
Plus, the whole point of stdout is that it is a file. So it shouldn’t change the developer’s mental model whether that file happens to be a pseudo-TTY, a pipe, or a traditional persistent file system object. This flexibility is one of the core principles of UNIX, and it’s what makes the POSIX command line as flexible as it is.
I feel like this is misrepresenting the article's point which isn't literally "hello world is buggy if it returns success on failure" but more "you should do error-handling". In this very specific case, you can argue that it's irrelevant. But if your program writes to a log file, or writes to a data file that it later reads from, it had better include some error-handling.
The fact that there's redirection is a ... misdirection. The redirection is only used to proxy a real-life case that can happen even when no redirection is taking place.
I thought about this too a bit just now. But I think it's not the shell setting up stuff outside your process that then fails. Rather you already get handles to the "full" file system at process creation and then it's your problem. And traditionally, the behaviour you get from all the standard streams is very unpredictable, depending on where they point.
Your program has a bug because it can write nothing, or only part of its output, and will still always return a zero exit code. E.g. think about using your program as part of a bash script, where you often rely on process exit codes.
> is it my program’s responsibility to check what happens to the bytes AFTER the pipe?
No, but it's not "after". Rather, it's your responsibility to handle backpressure by ensuring the bytes were written to the pipe successfully in the first place.
This isn't just about the filesystem being full btw. If you imagine a command like ./foo.py | head -n 10, it only makes sense for the 'head' command to close the pipe when it's done, and foo.py should be able to detect this and stop printing any more output. (This is especially important if you consider that foo.py might produce infinite lines of output, like the 'yes' program.)
I would argue this is not necessarily even an error from a user standpoint, so the return code from foo.py should still be zero in many cases: a pipe-is-closed error just means the consumer simply didn't want the rest of the output, which is fine [1], whereas an out-of-disk-space error probably really is an error. Handling these robustly is actually difficult though, because (a) you'd need to figure out why printf() failed (so that you can treat different failures differently, which is painful), and (b) you need to make sure any side effects in the program flow up to the printf() are semantically correct "prefixes" of the overall side effect, meaning that you'd need to pay careful attention to where you printf(). (Practically speaking, this makes it difficult to even have side effects that respect this, but that's an inherent problem/limitation of the pipeline model...)
FWIW, I would be very curious if anyone has formalized all of these nuances of the pipeline model and come up with a robust & systematic way to handle them. It seems like a complicated problem to me. To give just one example of a problem that I'm thinking of: should stderr and stdout behave the same way with respect to "pipe is closed"? e.g. should the program terminate if either is closed, or if both are closed? The answer is probably "it depends", but on what exactly? What if they're redirected externally? What if they're redirected internally? Is there a pattern you can follow to get it right most of the time? There's a lot of room for analysis of the issues that can come up, especially when you throw buffering/threading/etc. into the mix...
[1] Or maybe it isn't. Maybe the output (say, some archive format like ZIP) has a footer that needs to be read first, and it would be corrupt otherwise. Or maybe that's fine anyway, because the consumer should already understand you're outputting a ZIP, and it's on them if they want partial output. As always, "it depends". But I think a premature stdout closure is usually best treated as not-an-error.
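A sketch of the write loop such a producer might use, treating a closed pipe as a benign end-of-output condition. `write_all` is a hypothetical helper, and it assumes the caller has ignored SIGPIPE so that write() reports EPIPE instead of the signal killing the process:

```c
#include <errno.h>
#include <unistd.h>

/* Write a buffer fully to fd. Returns 0 on success, 1 if the reader
   closed the pipe (often benign), -1 on any other error (errno set). */
static int write_all(int fd, const char *buf, size_t len) {
    while (len > 0) {
        ssize_t n = write(fd, buf, len);
        if (n == -1) {
            if (errno == EINTR)
                continue;   /* interrupted before writing: just retry */
            if (errno == EPIPE)
                return 1;   /* consumer closed the pipe */
            return -1;      /* ENOSPC, EIO, ...: real errors */
        }
        buf += n;           /* handle partial writes by advancing */
        len -= (size_t)n;
    }
    return 0;
}
```

A main() could then exit 0 in the EPIPE case and nonzero in the others, which makes `./foo | head -n 10` behave the way users usually expect.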
> If my program writes to the standard output, but you choose to redirect the pipe to a different location, is it my program’s responsibility to check what happens to the bytes AFTER the pipe?
The pipe is your standard output. Your very program is created with the pipe as its stdout.
> After all: my program did output everything as expected. The part which fucked up was not part of my program.
But you are wrong: your program did not output everything as expected, and it failed to report that information.
Well that's precisely the mindset of C/C++. You have to think by yourself about everything that can go wrong with your code. And, man, lots of things can go wrong.
I find more modern languages so much less exhausting to use to write correct code.
Modern languages do catch more programmer errors than C/C++, but the more general point is that there are "edge cases" (redirecting to a file isn't an edge case) that developers need to consider that aren't magically caught, and understanding the language you use well enough so as not to write those bugs is important.
The more experience I get as a dev the more I've come to understand that building the functionality required in a feature is actually a very small part of the job. The "happy path" where things go right is often trivial to code. The complexity and effort lies in making sure things don't break when the code is used in a way I didn't anticipate. Essentially experience means anticipating more ways things can go wrong. This article is a good example of that.
Yes. First you learn to 'code stuff' and as your experience progresses in that language you merely learn more and more of the ways it can be wrong, and then you worry more, and have to overthink every little thing.
The hidden costs are enormous and to this day still not very well accounted for.
C/C++ basically do only exactly what you tell them and nothing more, which is why they're so much faster than other languages.
There's no garbage collection/reference counting/etc. going on in the background. Objects aren't going to be moved around unless you explicitly move them around (Enjoy your heap fragmentation!). In C, you don't even get exceptions.
Of course, this creates TONS of foot-guns. Buffer overflows, unchecked errors, memory leaks, etc. A modern language won't have these, except for memory leaks, but they're much less likely to happen in trivial to moderate complexity apps.
A modern language could automatically throw an exception if the string cannot be completely written to standard output.
But that has not necessarily helped. The program now has a surprising hidden behavior; it has a way of terminating with a failed status that is not immediately obvious.
If it is used in a script, that could bite someone.
In Unix, there is such an exception mechanism for disconnected pipes: the SIGPIPE signal. It can be a nuisance and gets disabled in some programs.
A real world example of catching (some, but certainly not all) fflush(), ferror() etc. cases is what "git" does at the end of its execution; the first highlighted line is where it's returning from whatever function implements a built-in ("status", "pull", "log", etc.): https://github.com/git/git/blob/v2.35.0/git.c#L464-L483
Doing something similar would be a good addition to any non-trivial C program that emits output on stdout and stderr.
In practice I haven't really seen a reason to exhaustively check every write to stdout/stderr as long as standard IO is used, and fflush() etc. is checked.
A much more common pitfall is dealing with file I/O and forgetting to check the return value of close(). In my experience it's the most common case where code that tries to get it right actually gets it wrong; I've even seen code that checked the return values of open(), write(), and fsync(), but forgot about the return value of the close() after that fsync(). A close() can fail, e.g., if the disk is full.
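A hypothetical helper illustrating the full check sequence (open, write, fsync, and, crucially, close, whose return value can carry deferred write errors on some filesystems):

```c
#include <fcntl.h>
#include <unistd.h>

/* Write data to path, checking every step, including close(). */
static int save_file(const char *path, const char *data, size_t len) {
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1)
        return -1;
    ssize_t n = write(fd, data, len);
    if (n == -1 || (size_t)n != len) {
        close(fd);           /* best effort; we're already failing */
        return -1;
    }
    if (fsync(fd) == -1) {   /* push the data toward the device */
        close(fd);
        return -1;
    }
    if (close(fd) == -1)     /* the commonly forgotten check */
        return -1;
    return 0;
}
```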
I work as a sysadmin and only write the odd program/script (Python, Perl, Bash). In the past, I’ve run into the problem of not being able to write to a log file (disk full or insufficient permissions) so I now check for these situations when opening or writing to a file.
A while ago, I started learning C in my personal time and am curious about this issue. If `close()` fails, I’m guessing there’s not much else the program can do – other than print a message to inform the user (as in the highlighted git code). Also, I would have thought that calling `fsync()` on a file descriptor would also return an error status if the filesystem/block device is full.
I couldn't see GNU Hello mentioned in the article or comments so far. I wonder how it fares bug-wise.
The GNU Hello program produces a familiar, friendly greeting. Yes, this is another implementation of the classic program that prints “Hello, world!” when you run it.
However, unlike the minimal version often seen, GNU Hello processes its argument list to modify its behavior, supports greetings in many languages, and so on. The primary purpose of GNU Hello is to demonstrate how to write other programs that do these things; it serves as a model for GNU coding standards and GNU maintainer practices.
/* Even exiting has subtleties. On exit, if any writes failed, change
the exit status. The /dev/full device on GNU/Linux can be used for
testing; for instance, hello >/dev/full should exit unsuccessfully.
This is implemented in the Gnulib module "closeout". */
It's a fun take, but hyperbole nonetheless. hello.c is supposed to be run from a terminal and write back to it: there's always space to write. It's not meant to be part of a shell script, so the error status is irrelevant.
It does show that we take such examples a bit too literally: our feeble minds don't consider what's missing, until it's too late. That's a didactic problem. It only matters to certain kinds of software, and when we teach many people to program, most of them won't go beyond a few small programs. But perhaps the "second programming course" should focus a bit less on OOP and introduce error handling.
It depends on whether you want your Hello World programs to reflect an actual program or just be an approximation.
I’d argue there is little benefit in the latter. Particularly these days where the Hello World of most imperative languages look vaguely similar. Maybe back when LISP, FORTRAN and ALGOL were common it was more useful showing a representation of the kind of syntax one should expect. But that isn’t the case any more.
Plus, given the risk of bugs becoming production issues or, worse, security vulnerabilities, and the ease and prevalence with which developers now copy and paste code, I think there is now a greater responsibility for examples to make fewer assumptions. Even if that example is just Hello World.
This is true; however, if we modify the program to print a 4096-byte string instead of just the "hello world" string, then it's again not sufficient. And of course, the number 4096 is system-dependent.
So to really do hello world in C right, in addition to fflush, you also need to check the return value from puts. I've never seen any C tutorial do that though.
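A sketch of what that more careful hello world might look like, wrapped in a function so the exit-status decision stays in main. It checks both the write and the flush, since buffered output may only reach the file descriptor at fflush() time:

```c
#include <stdio.h>

/* Print the greeting to `out`; return 0 on success, nonzero on failure. */
static int hello(FILE *out) {
    if (fputs("Hello, world!\n", out) == EOF)
        return 1;   /* the write itself failed */
    if (fflush(out) == EOF)
        return 1;   /* e.g. ENOSPC surfaces here when out is /dev/full */
    return 0;
}
```

main() can then be just `return hello(stdout);`, making `./hello > /dev/full` exit nonzero.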
Should "hello world" return error if it actually prints something but there was no person to read the output? Maybe the user was distracted and was not looking at the screen. Does a "hello world" program make sound if no one hears it?
Sounds like the program failed its objective, greeting the world. And thus imho shouldn't return 0.
The program’s objective was to output the information, not to ensure that a user read it. Redirecting to `/dev/null` is a valid and common use of a program warranting no warning, so is running a program, collecting its log, then ultimately discarding it having never looked at it (in fact it’s the norm of well-behaved and solid programs).
This raises an interesting question: is there any IO function that should return unit/void? Or equivalently are there any IO functions for which we can safely ignore the return value/ignore all exceptions?
It seems like every single IO thing I can think of can have a relevant error, regardless of whether it's file-system related, network, or anything else.
In C, and many other languages, the file stream's error state is saved after each operation, so you can skip error checking on every output line and only do a single check of `ferror()` (after a final `fflush()`) at the end.
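For example, in C that "check once at the end" pattern might look like this sketch, relying on stdio's sticky per-stream error flag:

```c
#include <stdio.h>

/* Print several lines without per-call checks, then inspect the
   stream's sticky error flag once at the end. */
static int report(FILE *out) {
    fprintf(out, "line 1\n");
    fprintf(out, "line 2\n");
    fprintf(out, "line 3\n");
    /* fflush() pushes any buffered data; ferror() reports whether any
       earlier operation on the stream failed. */
    if (fflush(out) == EOF || ferror(out))
        return 1;
    return 0;
}
```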
I think you can certainly return void, and you can ignore any I/O exceptions up to the top layer of the stack, but then you have to decide whether the exception should result in an error code to the user or not. Some (like "out of disk space") are usually errors, whereas others (like "no more data" or "pipe is closed") may not be.
Since the article is being pedantic, here's another pedantic complaint: what if printf() can't write all of its output, but manages to write some of it? printf() returns the number of characters written, but (and I'm sure someone will correct me if I'm wrong!) it isn't guaranteed to be atomic: it can't promise to either write everything or nothing. Imagine a complicated printf() call with lots of parameters and long strings: some of it might get written, then the next write() it makes fails due to lack of space. What does printf() do then?
The article cites an example of writing a YAML file and the dangers of it being half-written. Well, you could imagine outputting a file all in one printf() with lots of %s's in the format string. Some of it gets written, but not all. If printf() decides to return an error, retrying the printf() later on (after deleting another file, say) will corrupt the data, because you'll be duplicating some of the output. But if printf() just returned the number of bytes written, your program will silently miss the error.
So does 'Hello World\n' need to check that printf() succeeded, or does it actually need to go further and check that printf() returned 12 (or is it 13, with \r\n)? I don't think there's any way to really safely use the function in real life.
> So does 'Hello World\n' need to check that printf() succeeded, or does it actually need to go further and check that printf() returned 12?
No. According to fprintf(3), when the call succeeds it returns the number of characters printed. If it fails (for example, if it could only print part of the string), it returns a negative value.
The number of printed characters is useful for knowing how much space was used in the output file, not for checking success. Success is indicated by a non-negative return value.
> There's our "No space" error getting reported by the OS, but no matter, the program silently swallows it and returns 0, the code for success. That's a bug!
Bzzt, no. You can't say that without knowing what the program's requirements are.
Blindly "fixing" a program to indicate failure due to not being able to write to standard output could break something.
Maybe the output is just a diagnostic that's not important, but some other program reacts to the failed status, causing an issue.
Also, if a program produces output with a well-defined syntax, then the termination status may be superfluous; the truncation of the output can be detected by virtue of that syntax being incomplete.
E.g. JSON hello world fragment:
puts("{\"hello\":\"world\"}");
return 0;
if something is picking up the output and parsing it as JSON, it can deduce from a failed parse that the program didn't complete, rather than going by termination status.
> if something is picking up the output and parsing it as JSON, it can deduce from a failed parse that the program didn't complete, rather than going by termination status.
This is bad advice. Consider output that might be truncated but can't be detected (mentioned in the article).
The exit status is the only reliable way to detect failures (unless you have a separate communication channel and send a final success message).
> Also, if a program produces output with a well-defined syntax, then the termination status may be superfluous; the truncation of the output can be detected by virtue of that syntax being incomplete.
The author covers this (or rather, the possibility that truncation can not be detected).
There is more nuance to this, which is that we cannot detect all modes of failure just because we have written to a file object, and successfully flushed and closed it.
In the case of file I/O, we do not know that the bits have actually gone to the storage device. A military-grade hello world has to perform a fsync. I think that also requires the right storage hardware to be entirely reliable.
If stdout happens to be a TCP socket, then all we know from a successful flush and close is that the data has gone into the network stack, not that the other side has received it. We need an end-to-end application level ack. (Even just a two-way orderly shutdown: after writing hello, half-close the socket. Then read from it until EOF. If the read fails, the connection was broken and it cannot be assumed that the hello had been received.)
This issue is just a facet of a more general problem: if the goal of the hello world program is to communicate its message to some destination, the only way to be sure is to obtain an acknowledgement from that destination: communication must be validated end-to-end, in other words. If you rely on any success signal of an intermediate agent, you don't have end-to-end validation of success.
The super-robust requirements for hello world therefore call for a protocol: something like this:
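One minimal interpretation of that protocol (entirely hypothetical; the original comment elides the actual sketch): deliver the greeting, then demand an explicit acknowledgement from the other end before declaring success.

```c
#include <stdio.h>

/* Greet via `out`, then wait for any input line on `in` as an ack.
   Returns 0 only if the greeting was written AND acknowledged. */
static int hello_with_ack(FILE *out, FILE *in) {
    if (fputs("hello, world\nPress enter if you can read this: ", out) == EOF)
        return 1;
    if (fflush(out) == EOF)
        return 1;
    char buf[16];
    if (fgets(buf, sizeof buf, in) == NULL)
        return 1;   /* EOF or read error: nobody acknowledged */
    return 0;       /* any input counts as an acknowledgement */
}
```

With stdin as `in` and stdout as `out`, redirecting input from /dev/null makes the program fail (no ack), while piping in `yes` "lies" to it, exactly as described below.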
Now we can detect failures such as there being no user present at the console reading the message, or their monitor not working so that they can't read the prompt.
We can now correctly detect this case of not being able to deliver hello, world, converting it to a failed status:
$ ./hello < /dev/null > /dev/null
We can still be lied to, but there is strong justification in regarding that as not our problem:
$ yes | ./hello > /dev/null
We cannot get away from requiring syntax, because the presence of a protocol gives rise to it; the destination has to be able to tell somehow when it has received all of the data, so it can acknowledge it.
A super reliable hello world also must not take data integrity for granted; the message should include some kind of checksum to reduce the likelihood of corrupt communication going undetected.
This is good. What all the people questioning the spec need to realize is that handling the error should be opt-out, not opt-in.
#[must_use] in Rust is the right idea: Rust doesn't automatically do anything --- there is no policy foisted upon the programmer --- but it will reliably force the programmer to do something about the error explicitly.
It would be more interesting if the post showed how to detect that error. (And how the other-language examples look; at least on mobile I don't see them.)
Even Golang de facto suffers from this. I can't name a time I saw someone check the return value of fmt.Print or log.Print. Not checking the return value still seems to be the "right" thing to do.
I do. But then I’m writing a shell (like Bash/Fish/etc but more DevOps focused) so if I don’t handle all types of errors then the entire UX falls apart.
I was expecting Free Pascal not to have the bug, as Pascal generally fails with a visible runtime error, as it does I/O checking by default. However, it seems not to do it when WriteLn goes to the standard output... (even if it is then piped to /dev/full). So the basic
begin
WriteLn('Hello World!');
end.
definitely has the bug, at least with the fpc implementation. On the other hand, explicitly trying to write to /dev/full from the Pascal source triggers a beautiful message:
Do not forget the notes on dup2(). It's about the automatic closing of newfd before it gets replaced. I've bumped into this situation several times, that is why I'm mentioning it.
SYNOPSIS
int dup2(int oldfd, int newfd);
NOTES:
If newfd was open, any errors that would have been reported at close(2) time are lost.
If this is of concern, then the correct approach is not to close newfd before calling dup2(),
because of the race condition described above.
Instead, code something like the following could be used:
/* Obtain a duplicate of 'newfd' that can subsequently
be used to check for close() errors; an EBADF error
means that 'newfd' was not open. */
tmpfd = dup(newfd);
if (tmpfd == -1 && errno != EBADF) {
/* Handle unexpected dup() error */
}
/* Atomically duplicate 'oldfd' on 'newfd' */
if (dup2(oldfd, newfd) == -1) {
/* Handle dup2() error */
}
/* Now check for close() errors on the file originally
referred to by 'newfd' */
if (tmpfd != -1) {
if (close(tmpfd) == -1) {
/* Handle errors from close */
}
}
I think it's clearly main() that "owns" that error, since it's the one that swallowed it. It would be impossible for the shell to own it since it's impossible for the shell to even detect it, given this program's buggy behavior.
I find the argument that the code obviously ignores the error so that's obviously the program's intent to be completely spurious. The code "obviously" intends to print the string, too, and yet in some cases, it doesn't actually do that. It's clearly a bug. I don't think it's particularly useful to harp on this bug in the most introductory program ever, but it's definitely a bug.
Definitely thought-provoking. A few responses here on HN disagree with calling this a bug, so maybe the user owns the error. This is all related to what kind of contract we have in mind when creating and using such a program.
Since macOS does not have /dev/full, I think what is actually happening here is your bash shell fails to create a file named "full" in "/dev" and so the bash shell exits with an error; this has nothing to do with node.js.
> 2. Node.js is not a language. JavaScript is a language,
This criticism is the wrong way around. All of the author's "languages" are actually language implementations like NodeJS. You can tell because he produced the results by running the code, rather than by reading a spec.
Julia (1.7) behaves pretty similar to the Python 2 one, a printed error related to closing the stream, and a 0 error code.
~ >>> julia -e 'print("Hello world")' > /dev/full
error in running finalizer: Base.SystemError(prefix="close", errnum=28, extrainfo=nothing)
#systemerror#69 at ./error.jl:174
systemerror##kw at ./error.jl:174
systemerror##kw at ./error.jl:174
#systemerror#68 at ./error.jl:173 [inlined]
systemerror at ./error.jl:173 [inlined]
close at ./iostream.jl:63
⋮
~ >>> echo $?
0
`errnum=28` apparently refers to the ENOSPC error: "No space left on device" as defined by POSIX.1; so the information is there, even if not in the most presentable form.
In the end, it states that the language C has the bug. But this is wrong. In C, there are no exceptions, i.e. all error checking has to be explicit. This is just the language. So when you now ignore the error, this is not a bug of the language but just a bug in your code. The only thing you could argue is that this is a bad language design.
Or maybe this is about the global stdout object. With buffering enabled (by default), printf will not throw any error. The fflush would. But a final fflush would be done implicitly at the end. But this is again all well documented, so still, this is not really a bug but maybe just bad language design.
I'm not exactly sure what C++ code was used. If this was just the same C code, then the same thing applies. And iostream just behaves exactly as documented.
Still confused. It seems some people think there is nothing to fix, some think the programmer needs to act to prevent it, some think ANSI and other creators of the affected languages would need to act to prevent it.
If we accept the idea that the function (non-coding use of the word) of a language's indication of success should be to indicate success (or its absence) of a piece of code, then surely the creators of the languages should make it do just that. That's their job, right? What am I missing?
It's not necessarily an error to print less than you intended though. The consumer might have simply decided that they didn't need the rest of the input. Whether or not it's an error depends on why the write failed to occur. Usually out-of-space is an error, whereas pipe-is-closed/has-reached-EOF is not.
If the requirements were:
"Print Hello World and indicate if it succeeded or not"
then it's a bug. If the requirements were:
"Print Hello World, then return 0"
it's working as intended.
I'd even go so far as to say that print(); return 0; should always return 0, it would be weird for such a program to ever return anything other than 0 (where would that return come from?).
In your pseudocode, "Print Hello World" doesn't come with any caveats, like "unless there is an error, in which case silently don't print Hello World". If an error might occur, your description is incomplete if you don't describe the policy that should be taken.
Your second point might be fine, except that it doesn't describe the API that languages actually use to print. For sure, it's trivial to implement the policy you describe, but suggesting that everyone always needs that policy is rather limiting and makes light of the real bugs that failure to handle errors actually results in.
The requirement of a "Hello World" program is always, by nature, to print "Hello World".
If my program calls your Hello World program, it expects it to print Hello World. That's basically the point of the program.
If your program doesn't print Hello World for whatever reason, of course you don't need to manage the error if it wasn't specified. But it's probably a bad thing (call it a bug or not) to exit 0, which the caller will interpret as "Hello World has just been printed successfully"; I can go on and print ", John".
I agree it's probably not going to be in the requirements, and the world will probably not collapse if you don't manage the error, but it's no doubt an idiom most OSes expect programs to follow in order to work normally.
You can also create orphan processes if it's needed by your requirements, but it's probably a bug or a hole in your requirements. Because at some point, non idiomatic programs will be used in situations where they will be creating issues. And we are talking about issues that are very hard to even spot.
Those "non requirements" are exactly how you lately discover that you have no logs from the last two weeks or that your backups aren't complete.
It's not a requirement; it's just hygiene.
To be fair, I'm arguing about what an ideal world should be, and I have probably written these sorts of bugs myself. Writing idiomatic code is hard and no one is to blame for not doing it perfectly. I just think it's an ideal to aim for.
The interesting case for me is if the requirements were "print Hello World". I'd argue that the one with an explicit return value is incorrect in that case, because the extra line of code leads you to believe an extra requirement exists which is to indicate success.
On the other hand, it's kind of depressing that I can't even write to stdout without needing to check for errors. And what are you going to do if that fails? Write to stderr? What if that fails because the program was run with `2>&1` ?
I don’t think this is a bug. The program writes to stdout which is guaranteed by *ix to be there. If it is full, it would block until the kernel serviced it. In your examples, the user asked the shell to redirect to a full file and it reported the error.
puts() returns 13 with no pipe and with pipe to /dev/full, which I just learned is due to buffering.
What worked for me initially was the POSIX write() function:
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int status;

    status = write(1, "Hello World!\n", 13);
    if (status < 0) { return EXIT_FAILURE; }
    return EXIT_SUCCESS;
}
-----
As someone else commented, fflush() gives the desired error response.
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int status;

    puts("Hello World!");
    status = fflush(stdout);
    if (status < 0) { return EXIT_FAILURE; }
    return EXIT_SUCCESS;
}
-----
andreyv probably has the best alternative[1], which is checking fflush() and ferror() at the program's end and calling perror(). It's better because it outputs an actual error message on the current terminal, and you don't need to write a special error checking wrapper.
puts(), like printf() and all the C-standardised "stdio" functions, uses buffered writes. So that is also buggy, because the buffer won't be flushed until after main() returns. You need to call and check the return value of "fflush(stdout)" manually to get the correct result.
printf returns the “number of characters transmitted to the output stream or negative value if an output error or an encoding error (for string and character conversion specifiers) occurred”, so it won’t ever return zero for that call (https://en.cppreference.com/w/c/io/fprintf)
I also think this optimally should do something like
int x = printf("Hello world!\n");
if (x < 0) return x; // maybe fflush here, too, ignoring errors?
return fflush(stdout);
Logging an error message to stderr should be considered, too. I would ignore any errors from that, but attempting to syslog those or to write them to the console could be a better choice.
The author missed another bug, which many others do. You need a comma after Hello, as in "Hello, World!" because it's a direct address. Very, and I mean VERY few books get this right.
HellsMaddy|4 years ago
[0]: https://github.com/firecat53/urlscan
[1]: https://github.com/firecat53/urlscan/issues/122
underdeserver|4 years ago
I'm not sure return codes are the source of your troubles...
xelxebar|4 years ago
https://mywiki.wooledge.org/BashFAQ/105
(The list begins below the anecdote.)
shikoba|4 years ago
I'm really interested. What are their arguments? And how do they handle errors?
necrotic_comp|4 years ago
xlii|4 years ago
The “Hello world” method simply calls an API to a text interface. It makes a simple call to a simple interface that is expected to be ever-present. I don’t find any bug there. It won’t work if such an interface isn’t available, is blocked or doesn’t exist. It won’t work on my coffee grinder nor on my screwdriver. It won’t work on my Arduino either, because there is no text interface there.
Of course, one could argue that the user might expect you to handle that error. That’s all about contracts and expectations. How should I deal with that? Is the “Hello world” message so important that the most escalated scenario should be painted on the sky? I can imagine an awkward social game where we throw each other obscure challenges and call it a bug.
It’s nitpicking that even such simple code might fail, and I get it. It will also fail on OOM, faulty hardware, or if the number of processes on the machine hits the limit. Maybe some joker replaced bindings and it went straight to a 3D printer which is out of material? _My expectations_ were higher based on the title.
Now allow me to excuse myself, I need to write an e-mail to my keyboard manufacturer because it seems like it has a bug which prevents it from working when slightly covered in liquid coffee.
[1]: http://jroweboy.github.io/c/asm/2015/01/26/when-is-main-not-...
matheusmoreira|4 years ago
I still agree with the author though. This is a serious matter and it seems most of the time the vast amount of complexity that exists in seemingly simple functionality is ignored.
Hello world is not "simply" calling a text interface API. It is asking the operating system to write data somewhere. I/O is exactly where "simple" programs meet the real world where useful things happen and it's also where things often get ugly.
Here's all the stuff people need to think about in order to handle the many possible results of a single write system call on Linux:
Some of these are unlikely. Some of these are irrelevant. Some of these are very important. Virtually all of them seem to be routinely ignored, especially in text APIs.
boloust|4 years ago
pdw|4 years ago
Too|4 years ago
Modern languages do this by default, using exceptions, or force you to check return values using Result<> or alike.
Even in C, when compiled with a stricter linter, this would fail, because an ignored return value should be cast to (void).
In either case I think the main takeaway from the article is that a language where even hello world has such pitfalls, isn’t suitable, given the many other better options today.
nebulous1|4 years ago
LadyCailin|4 years ago
unknown|4 years ago
[deleted]
yawboakye|4 years ago
usrbinbash|4 years ago
IMHO, it doesn't.
hello.c is written in a way that makes it very clear that the program doesn't care about error conditions in any way, shape or form; the return value of printf is ignored, the output isn't flushed, the return of flushing isn't checked, no one looks at errno; so anything that could go wrong will go unreported, unless it's something the OS can see (segfault, permission, etc.)
If I expect a program to do something (e.g. handle IO errors) that its code says very clearly that it doesn't, that's not the program's fault.
stonemetal12|4 years ago
Is there no such thing as a bug then? The program does what the code says so every "misbehavior" and crash is expected behavior.
oneeyedpigeon|4 years ago
rplnt|4 years ago
l33t2328|4 years ago
layer8|4 years ago
pron|4 years ago
> Unlike other output streams, a PrintStream never throws an IOException; instead, exceptional situations merely set an internal flag that can be tested via the checkError method.
So the correct Hello World would be:
While the behaviour of PrintStream cannot be changed (it goes back to Java 1.0, and I'm guessing that the intention was not to require handling exceptions when writing messages to the standard output), adding a method to obtain the underlying, unwrapped OutputStream might be an idea worth considering, as it would allow writing to the standard output just like to any file.
[1]: https://docs.oracle.com/en/java/javase/17/docs/api/java.base...
barrkel|4 years ago
mccorrinall|4 years ago
If my program writes to the standard output, but you choose to redirect the pipe to a different location, is it my program’s responsibility to check what happens to the bytes AFTER the pipe?
After all: my program did output everything as expected. The part which fucked up was not part of my program.
I can see why some projects decide to not handle this bug.
usrbinbash|4 years ago
The output doesn't go into a pipe however, the output goes to /dev/full. Redirection happens before the process is started, so the program is, in fact, writing directly to a file descriptor that returns an error upon write.
CraigJPerry|4 years ago
I think this is pretty cut and dried - the failure is inside your process’s address space and the programmer error is that you haven’t handled a reported error.
>> what happens to the bytes AFTER the pipe?
There isn’t a pipe involved here; when your process was created, its stdout was connected to /dev/full, then your program began executing.
hnlmorg|4 years ago
Plus the whole point of STDOUT is that it is a file. So it shouldn’t change the developers mental model if that file happens to be a pseudo TTY, a pipe or a traditional persistent file system object. This flexibility is one of the core principles of UNIX and it’s what makes the POSIX command line as flexible as it is.
oneeyedpigeon|4 years ago
The fact that there's redirection is a ... misdirection. The redirection is only used to proxy a real-life case that can happen even when no redirection is taking place.
ygra|4 years ago
mirekrusin|4 years ago
dataflow|4 years ago
No, but it's not "after". Rather, it's your responsibility to handle backpressure by ensuring the bytes were written to the pipe successfully in the first place.
This isn't just about the filesystem being full btw. If you imagine a command like ./foo.py | head -n 10, it only makes sense for the 'head' command to close the pipe when it's done, and foo.py should be able to detect this and stop printing any more output. (This is especially important if you consider that foo.py might produce infinite lines of output, like the 'yes' program.)
I would argue this is not necessarily even an error from a user standpoint, so the return code from foo.py should still be zero in many cases—a pipe-is-closed error just means the consumer simply didn't want the rest of the output, which is fine [1], whereas an out-of-disk-space error is probably really an error. Handling these robustly is actually difficult though, because (a) you'd need to figure out why printf() failed (so that you can treat different failures differently—but it's painful), and (b) you need to make sure any side effects in the program flow up to the printf() are semantically correct "prefixes" of the overall side effect, meaning that you'd need to pay careful attention to where you printf(). (Practically speaking, this makes it difficult to even have side effects that respect this, but that's an inherent problem/limitation of the pipeline model...)
FWIW, I would be very curious if anyone has formalized all of these nuances of the pipeline model and come up with a robust & systematic way to handle them. It seems like a complicated problem to me. To give just one example of a problem that I'm thinking of: should stderr and stdout behave the same way with respect to "pipe is closed"? e.g. should the program terminate if either is closed, or if both are closed? The answer is probably "it depends", but on what exactly? What if they're redirected externally? What if they're redirected internally? Is there a pattern you can follow to get it right most of the time? There's a lot of room for analysis of the issues that can come up, especially when you throw buffering/threading/etc. into the mix...
[1] Or maybe it isn't. Maybe the output (say, some archive format like ZIP) has a footer that needs to be read first, and it would be corrupt otherwise. Or maybe that's fine anyway, because the consumer should already understand you're outputting a ZIP, and it's on them if they want partial output. As always, "it depends". But I think a premature stdout closure is usually best treated as not-an-error.
masklinn|4 years ago
The pipe is your standard output. Your very program is created with the pipe as its stdout.
> After all: my program did output everything as expected. The part which fucked up was not part of my program.
But you are wrong, your program did not output everything as expected, and it failed to report that information.
bestouff|4 years ago
I find more modern languages so much less exhausting to use to write correct code.
onion2k|4 years ago
Modern languages do catch more programmer errors than C/C++, but the more general point is that there are "edge cases" (redirecting to a file isn't an edge case) that developers need to consider that aren't magically caught, and understanding the language you use well enough so as not to write those bugs is important.
The more experience I get as a dev the more I've come to understand that building the functionality required in a feature is actually a very small part of the job. The "happy path" where things go right is often trivial to code. The complexity and effort lies in making sure things don't break when the code is used in a way I didn't anticipate. Essentially experience means anticipating more ways things can go wrong. This article is a good example of that.
jcelerier|4 years ago
jollybean|4 years ago
The hidden costs are enormous and to this day still not very well accounted for.
Sohcahtoa82|4 years ago
There's no garbage collection/reference counting/etc. going on in the background. Objects aren't going to be moved around unless you explicitly move them around (Enjoy your heap fragmentation!). In C, you don't even get exceptions.
Of course, this creates TONS of foot-guns. Buffer overflows, unchecked errors, memory leaks, etc. A modern language won't have these, except for memory leaks, but they're much less likely to happen in trivial to moderate complexity apps.
kazinator|4 years ago
A modern language could automatically throw an exception if the string cannot be completely written to standard output.
But that has not necessarily helped. The program now has a surprising hidden behavior; it has a way of terminating with a failed status that is not immediately obvious.
If it is used in a script, that could bite someone.
In Unix, there is such an exception mechanism for disconnected pipes: the SIGPIPE signal. That can be a nuisance and gets disabled in some programs.
shikoba|4 years ago
em3rgent0rdr|4 years ago
hiccuphippo|4 years ago
[0] https://zig.news/kristoff/where-is-print-in-zig-57e9
abainbridge|4 years ago
avar|4 years ago
Doing something similar would be a good addition to any non-trivial C program that emits output on stdout and stderr.
In practice I haven't really seen a reason to exhaustively check every write to stdout/stderr as long as standard IO is used, and fflush() etc. is checked.
A much more common pitfall is when dealing with file I/O and forgetting to check the return value of close(). In my experience it's the most common case where code that tries to get it right actually gets it wrong; I've even seen code that checked the return value of open(), write() and fsync(), but forgot about the return value of the close() after that fsync(). A close() will fail e.g. if the disk is full.
Anthony-G|4 years ago
A while ago, I started learning C in my personal time and am curious about this issue. If `close()` fails, I’m guessing there’s not much else the program can do – other than print a message to inform the user (as in the highlighted git code). Also, I would have thought that calling `fsync()` on a file descriptor would also return an error status if the filesystem/block device is full.
yesenadam|4 years ago
The GNU Hello program produces a familiar, friendly greeting. Yes, this is another implementation of the classic program that prints “Hello, world!” when you run it.
However, unlike the minimal version often seen, GNU Hello processes its argument list to modify its behavior, supports greetings in many languages, and so on. The primary purpose of GNU Hello is to demonstrate how to write other programs that do these things; it serves as a model for GNU coding standards and GNU maintainer practices.
https://www.gnu.org/software/hello/
mcbrit|4 years ago
https://git.savannah.gnu.org/cgit/hello.git/tree/src/hello.c
Here's the comment:
tgv|4 years ago
It does show that we take such examples a bit too literally: our feeble minds don't consider what's missing, until it's too late. That's a didactic problem. It only matters to certain kinds of software, and when we teach many people to program, most of them won't go beyond a few small programs. But perhaps the "second programming course" should focus a bit less on OOP and introduce error handling.
hnlmorg|4 years ago
I’d argue there is little benefit in the latter. Particularly these days where the Hello World of most imperative languages look vaguely similar. Maybe back when LISP, FORTRAN and ALGOL were common it was more useful showing a representation of the kind of syntax one should expect. But that isn’t the case any more.
Plus given the risk of bugs becoming production issues or, worse, security vulnerabilities and the ease and prevalence of which developers now copy and paste code, I think there is now a greater responsibility for examples to make fewer assumptions. Even if that example is just Hello World.
knorker|4 years ago
It's not, though.
This helloworld is not safe to use as part of something bigger. Like:
That will upload a partial file to prod, if there's any write error.
> It's not meant to be part of a shell script
You don't know that. And brittle pieces like this is absolutely not an uncommon source of bugs.
unknown|4 years ago
[deleted]
Reventlov|4 years ago
sunfish|4 years ago
So to really do hello world in C right, in addition to fflush, you also need to check the return value from puts. I've never seen any C tutorial do that though.
shultays|4 years ago
Sounds like the program failed its objective, greeting the world. And thus imho shouldn't return 0.
masklinn|4 years ago
dwohnitmok|4 years ago
It seems like every single IO thing I can think of can have a relevant error, regardless of whether it's file-system related, network, or anything else.
andreyv|4 years ago
In GNU programs you can use atexit(close_stdout) to do this automatically.
dataflow|4 years ago
joosters|4 years ago
The article cites an example of writing a YAML file and the dangers of it being half-written. Well, you could imagine outputting a file all in one printf() with lots of %s's in the format string. Some get written, but not all. If printf() decides to return an error message, retrying the printf() later on (after deleting another file, say), will corrupt the data because you'll be duplicating some of the output. But if printf() just returned the number of bytes written, your program will silently miss the error.
So does 'Hello World\n' need to check that printf() succeeded, or does it actually need to go further and check that printf() returned 12? (or is it 13, for \r\n ?) I don't think there's any way to really safely use the function in real life.
shikoba|4 years ago
> a negative value if an output error occurred
So in your case that's an error and printf returns a negative value. But yes, how many bytes were written is lost information.
enriquto|4 years ago
No. According to fprintf(3), when the call succeeds it returns the number of printed characters. If it fails (for example, if it could only print part of the string) then it returns a negative value.
The number of printed characters is useful to know how much space was used on the output file, not to check for success. Success is indicated by a non-negative return value.
kazinator|4 years ago
Bzzt, no. You can't say that without knowing what the program's requirements are.
Blindly "fixing" a program to indicate failure due to not being able to write to standard output could break something.
Maybe the output is just a diagnostic that's not important, but some other program will react to the failed status, causing an issue.
Also, if a program produces output with a well-defined syntax, then the termination status may be superfluous; the truncation of the output can be detected by virtue of that syntax being incomplete.
E.g. JSON hello world fragment:
if something is picking up the output and parsing it as JSON, it can deduce from a failed parse that the program didn't complete, rather than going by termination status.
jjnoakes|4 years ago
This is bad advice. Consider output that might be truncated but can't be detected (mentioned in the article).
The exit status is the only reliable way to detect failures (unless you have a separate communication channel and send a final success message).
MauranKilom|4 years ago
The author covers this (or rather, the possibility that truncation can not be detected).
kazinator|4 years ago
In the case of file I/O, we do not know that the bits have actually gone to the storage device. A military-grade hello world has to perform a fsync. I think that also requires the right storage hardware to be entirely reliable.
If stdout happens to be a TCP socket, then all we know from a successful flush and close is that the data has gone into the network stack, not that the other side has received it. We need an end-to-end application level ack. (Even just a two-way orderly shutdown: after writing hello, half-close the socket. Then read from it until EOF. If the read fails, the connection was broken and it cannot be assumed that the hello had been received.)
This issue is just a facet of a more general problem: if the goal of the hello world program is to communicate its message to some destination, the only way to be sure is to obtain an acknowledgement from that destination: communication must be validated end-to-end, in other words. If you rely on any success signal of an intermediate agent, you don't have end-to-end validation of success.
The super-robust requirements for hello world therefore call for a protocol: something like this:
Now we can detect failures such as there being no user present at the console reading the message, or their monitor not working, so they can't read the question. We can now correctly detect this case of not being able to deliver hello, world, converting it to a failed status:
We can still be lied to, but there is strong justification in regarding that as not our problem. We cannot get away from requiring syntax, because the presence of a protocol gives rise to it; the destination has to be able to tell somehow when it has received all of the data, so it can acknowledge it.
A super-reliable hello world also must not take data integrity for granted; the message should include some kind of checksum to reduce the likelihood of corrupt communication going undetected.
ComradePhil|4 years ago
The "program's requirements" can in theory be "to be buggy unusable piece of shit". But when we speak, we don't need to consider that use case.
Ericson2314|4 years ago
#[must_use] in Rust is the right idea: Rust doesn't automatically do anything --- there is no policy foisted upon the programmer --- but it will reliably force the programmer to do something about the error explicitly.
pixelbeat__|4 years ago
https://www.gnu.org/ghm/2011/paris/slides/jim-meyering-goodb...
andi999|4 years ago
moltenguardian|4 years ago
silisili|4 years ago
Most Go in the wild is doing way more than a typical *nix binary, so the use case differs.
If you want a resilient system, you don't die on print and log failures.
Thaxll|4 years ago
Checking the result of log and print is very tedious and not useful most of the time.
hnlmorg|4 years ago
Tobu|4 years ago
https://gobyexample.com/hello-world
ale42|4 years ago
pickledcods|4 years ago
bandrami|4 years ago
heleninboodler|4 years ago
b-zee|4 years ago
If `puts` were to be used for debug messages, it might be right not to fail so as to not disturb the rest of the program. If the primary purpose is to greet the world, then we might expect it to signal the failure. But each creator or user might have their own expected behaviors.
If a user expects different behavior, then perhaps it is a feature request:
> There's no difference between a bug and a feature request from the user's perspective. (https://blog.codinghorror.com/thats-not-a-bug-its-a-feature-...)
The question is how the behavior can be made more explicit. I think it's a reasonable default to make programs fail often and early. If some failure can be safely ignored, it can always be implemented as an (explicit) feature.
paradite|4 years ago
1. The Node.js result is outdated. I ran the hello world code below on Node.js v14.15.1 on macOS and it correctly reported exit code 1:
2. Node.js is not a language. JavaScript is a language, and Node.js is a JavaScript runtime environment that runs on the V8 engine and executes JavaScript code outside a web browser.
3. The JavaScript result is missing from the table, even though it is the most popular language on GitHub: https://octoverse.github.com/#top-languages-over-the-years
0x0|4 years ago
dmurray|4 years ago
This criticism is the wrong way around. All of the author's "languages" are actually language implementations like NodeJS. You can tell because he produced the results by running the code, rather than by reading a spec.
ksbrooksjr|4 years ago
jwilk|4 years ago
sundarurfriend|4 years ago
albertzeyer|4 years ago
Or maybe this is about the global stdout object. With buffering enabled (the default), printf will not report any error; the fflush would. A final fflush is done implicitly at exit. But this is again all well documented, so this is still not really a bug, just arguably bad language design.
I'm not exactly sure what C++ code was used. If it was just the same C code, then the same thing applies. And iostream behaves exactly as documented.
andai|4 years ago
https://news.ycombinator.com/item?id=27504254
pretzelhands|4 years ago
oneeyedpigeon|4 years ago
Sporktacular|4 years ago
If we accept the idea that the function (non-coding use of the word) of a language's indication of success should be to indicate success (or its absence) of a piece of code, then surely the creators of the languages should make it do just that. That's their job, right? What am I missing?
enriquto|4 years ago
dataflow|4 years ago
jwilk|4 years ago
unknown|4 years ago
[deleted]
parker78|4 years ago
"Print Hello World and indicate if it succeed or not"
If the requirements were:
"Print Hello World, then return 0"
It's working as intended.
I'd even go so far as to say that print(); return 0; should always return 0, it would be weird for such a program to ever return anything other than 0 (where would that return come from?).
hgomersall|4 years ago
Your second point might be fine, except that it doesn't describe the API that languages actually use to print. For sure, it's trivial to implement the policy you describe, but suggesting that everyone always needs that policy is rather limiting and makes light of the real bugs that failure to handle errors actually results in.
pjerem|4 years ago
If my program calls your Hello World program, it expects it to print Hello World. That's basically the point of the program.
If your program doesn't print Hello World for whatever reason, of course you don't need to manage the error if it wasn't specified. But it's probably a bad thing (call it a bug or not) to exit 0, which the caller will interpret as "Hello World has just been printed successfully; I can go on and print ', John'".
I agree it's probably not going to be in the requirements, and the world will probably not collapse if you don't manage the error, but it's without doubt an idiom required by most OSes to ensure programs are working normally.
You can also create orphan processes if it's needed by your requirements, but it's probably a bug or a hole in your requirements. Because at some point, non-idiomatic programs will be used in situations where they create issues. And we are talking about issues that are very hard to even spot.
Those "non-requirements" are exactly how you later discover that you have no logs from the last two weeks or that your backups aren't complete.
It's not a requirement, but it's just hygiene.
To be fair, I'm arguing for what an ideal world should be, and I have probably written these sorts of bugs myself. Writing idiomatic code is hard and no one is to blame for not doing it perfectly. I just think it's an ideal to aim for.
dmurray|4 years ago
prewett|4 years ago
On the other hand, it's kind of depressing that I can't even write to stdout without needing to check for errors. And what are you going to do if that fails? Write to stderr? What if that fails because the program was run with `2>&1` ?
bor0|4 years ago
cyborgx7|4 years ago
Yes it is. And it specifies the OS as well.
PennRobotics|4 years ago
https://gist.github.com/koral--/12a6cdda22ffbd82f28ecc93e0b5...
oneeyedpigeon|4 years ago
hwinked|4 years ago
skolskoly|4 years ago
lifeisstillgood|4 years ago
Unit testing verifies it does what it is supposed to do ideally, and all other tests verify it can do it in non-ideal environments.
s_ariga|4 years ago
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        if (puts("Hello, World!") != EOF) {
            return EXIT_SUCCESS;
        } else {
            return EXIT_FAILURE;
        }
    }
PennRobotics|4 years ago
What worked for me initially was the POSIX write() function:
As someone else commented, fflush() gives the desired error response.
andreyv probably has the best alternative[1], which is checking fflush() and ferror() at the program's end and calling perror(). It's better because it outputs an actual error message on the current terminal, and you don't need to write a special error-checking wrapper.
[1] https://news.ycombinator.com/item?id=30611924
unwind|4 years ago
On my test system (Ubuntu 21.10 on x86_64) the puts() call never fails.
I switched to a raw write() and that successfully catches it, by returning -1 when output is redirected to /dev/full.
Quite interesting, actually.
Karellen|4 years ago
jwilk|4 years ago
(And silently returning non-zero would be bad anyway.)
unknown|4 years ago
[deleted]
asojfdowgh|4 years ago
I'm going to have to go back over all the print statements I've ever written now
unixbane|4 years ago
RcouF1uZ4gsC|4 years ago
Would any of the languages report an error?
Maybe they all have bugs.
kaycebasques|4 years ago
Too|4 years ago
fouronnes3|4 years ago
boloust|4 years ago
edejong|4 years ago
Someone|4 years ago
I also think this optimally should do something like
Logging an error message to stderr should be considered, too. I would ignore any errors from that, but attempting to syslog those or to write them to the console could be a better choice.
darkerside|4 years ago
Could have used this knowledge in the past...
incanus77|4 years ago
sedatk|4 years ago
8n4vidtmkvmk|4 years ago
steerablesafe|4 years ago
edit: https://cigix.me/c17#7.21.7.9.p3
bradwood|4 years ago
10 PRINT "Hell world"
unknown|4 years ago
[deleted]
steerablesafe|4 years ago
thedatamonger|4 years ago
inopinatus|4 years ago
(also, printf is buffered, so close or flush your output)
hwinked|4 years ago
[deleted]
unixbane|4 years ago
[deleted]
karolist|4 years ago
0. https://www.grammar-monster.com/lessons/commas_with_vocative...
dylan604|4 years ago