top | item 8630564

joelgwebber | 11 years ago

Sorry, I wasn't entirely clear -- I was referring to the original basis for the complaint about verbose error handling. I'm intentionally separating the handling of runtime errors (e.g., memcache request fail) from validation errors (garbage data causes a map to have a nil entry) and unexpected errors (whoops, that shouldn't have been nil).

For runtime errors, as I've stated above, I prefer explicit handling close to the error site. I think this is the right tradeoff for server code, though I'm less certain for client code.
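
A minimal Go sketch of that style, with a hypothetical fetchFromCache standing in for a memcache client (the name and error are my own, not from any real library):

```go
package main

import (
	"errors"
	"fmt"
)

// errCacheMiss and fetchFromCache are stand-ins for a memcache client.
var errCacheMiss = errors.New("cache miss")

func fetchFromCache(key string) (string, error) {
	// Simulate a failed lookup so the error path below runs.
	return "", errCacheMiss
}

func main() {
	// The (result, err) pair is handled right at the call site: the
	// failure case is visible in the control flow, not hidden in an
	// exception handler several frames away.
	val, err := fetchFromCache("user:42")
	if err != nil {
		fmt.Println("falling back to database:", err)
		return
	}
	fmt.Println("cache hit:", val)
}
```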

For validation errors, I prefer, when possible, to have a parser that either succeeds or fails as a whole, and whose output I can then rely upon to be properly constructed.
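
One way to sketch that parse-then-trust shape in Go (Order and its fields are hypothetical; the point is that the function returns either a fully validated value or an error, never a half-constructed one):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// Order is a hypothetical input type.
type Order struct {
	ID    string `json:"id"`
	Count int    `json:"count"`
}

// parseOrder succeeds or fails as a whole: garbage data is rejected
// up front instead of producing nil entries downstream.
func parseOrder(data []byte) (*Order, error) {
	var o Order
	if err := json.Unmarshal(data, &o); err != nil {
		return nil, err
	}
	if o.ID == "" {
		return nil, errors.New("order: missing id")
	}
	if o.Count <= 0 {
		return nil, errors.New("order: count must be positive")
	}
	return &o, nil
}

func main() {
	o, err := parseOrder([]byte(`{"id":"a1","count":2}`))
	if err != nil {
		fmt.Println("rejected:", err)
		return
	}
	// Past this point the rest of the program can rely on o being
	// properly constructed -- no per-field nil checks needed.
	fmt.Println("accepted:", o.ID, o.Count)
}
```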

For unexpected nils, I do like the option-type/monad approach, though in most cases (e.g., when writing Go, Java, C++, or Javascript) I just let them NPE/segfault.

Note that only one of these cases -- runtime errors -- results in a (result, err) pair in Go. The other two are just about result checking.

twic | 11 years ago

> For unexpected nils, I do like the option-type/monad approach, though in most cases (e.g., when writing Go, Java, C++, or Javascript) I just let them NPE/segfault.

... segfault some unknown distance from the place they originated. I mostly do Java, with a substantial dash of JavaScript and Ruby, and in all of those languages, i waste time tracking nulls back to their source. A language which blew up as soon as an unacceptable null arose would be a huge win.

Actually, having moved from entirely Java to only mostly Java, one thing i've noticed is that in dynamic languages, wrongly-typed objects can propagate as freely as nulls. I spent an unforgivable fraction of today tracking down a bug in a JavaScript app where a framework was passing the wrong type of model to a view. In a strongly-typed language, that would have failed as soon as the framework did that (or perhaps even at compile time), but in JS, it failed in a cryptic way hundreds of statements later.

For extra comedy value, it turned out that the reason the framework was doing that was because i had passed a null into one of its configuration properties - because i'd innocently written something like:

  Initech.Commerce.CartView = Backbone.Marionette.CompositeView.extend({
    childView: Initech.Commerce.ItemView
  });

And since the views are defined in files cartView.js and itemView.js, and since the files are loaded in alphabetical order (because we're just throwing them out of Rails and not using RequireJS or similar, i know, i know), at the point at which CartView is defined, Initech.Commerce.ItemView is null!

Basically, JavaScript is a language only a Dwarf Fortress fan could love. Whereas i see Go as more suitable for Minecraft fans.

Dewie | 11 years ago

> I'm intentionally separating the handling of runtime errors (e.g., memcache request fail) from validation errors (garbage data causes a map to have a nil entry) and unexpected errors (whoops, that shouldn't have been nil).

I think that the original poster (the one you responded to originally) was talking about errors in the first two senses; things that you should/want to handle yourself. I guess the last thing should be handled with a panic?

> For unexpected nils, I do like the option-type/monad approach, though in most cases (e.g., when writing Go, Java, C++, or Javascript) I just let them NPE/segfault.

It doesn't seem that you understand the idiomatic use of option types. They're used for things that legitimately, as a part of the normal operation of the program, can be "null" -- not for things that really should not be null. At least I assume that letting things NPE/segfault is not typical for values that might legitimately be null (like a Map returning null when the key is absent). So option types are not used to "hide" null pointers that shouldn't be null to begin with.

For cases where a nullable type, or Option[T] if you will, really should not be null, it would be more idiomatic to "forcefully" extract the value. In other words, use a function that returns the value, or throws an exception if there really isn't a value (it is null); throwing an exception here would be an indication of a bug. But even this use is usually thought of as unidiomatic in languages like Haskell: you should rather arrange things so that you don't have to "forcefully extract" values like that in the first place.
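
Go has no Option[T], but the same "forceful extraction" shape can be sketched with a pointer and a Must-style helper that panics when the value is absent (mustGet is my own name, not a standard-library function):

```go
package main

import "fmt"

// mustGet forcefully extracts a value the caller believes cannot be
// nil. A panic here signals a bug, not a condition to handle.
func mustGet(p *int) int {
	if p == nil {
		panic("mustGet: unexpected nil")
	}
	return *p
}

func main() {
	n := 7
	fmt.Println(mustGet(&n)) // fine: the value is present

	defer func() {
		if r := recover(); r != nil {
			fmt.Println("bug detected:", r)
		}
	}()
	mustGet(nil) // a nil here is a programming error
}
```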

joelgwebber | 11 years ago

> I think that the original poster (the one you responded to originally) was talking about errors in the first two senses; things that you should/want to handle yourself. I guess the last thing should be handled with a panic?

Right -- I'm only saying that I want to handle the first case -- runtime errors -- explicitly at the call site. This is where I explicitly like Go's error style. Other approaches to this problem (e.g., pattern matching) have been discussed elsewhere in this thread, and that's fine for languages that want to go down that route, but the extra language complexity is a tradeoff. I can see coming down on either side of said tradeoff, but it's not cut-and-dried.

> It doesn't seem that you understand the idiomatic use of option types [...]

Sorry, that came out wrong. Where the option-type/elvis-operator approach comes up, it seems, is when you need to dig a few levels deep through possibly-nil references, as in cromwellian's .getOrElse(foo).getOrElse(bar) example. I certainly understand how that can be useful, but in my experience it seems to come up most often when dealing with unvalidated inputs (I'm sure there are other cases I'm not thinking of; this is just my personal experience). Whenever possible, I tend to prefer having the inputs parsed, validated, and either accepted or rejected by a validating parser of some kind. Then I really can just assume they won't segfault as I read through these chained methods/fields.