> Part of the intended contract for error reporting in Go
> is that functions include relevant available context,
> including the operation being attempted (such as the
> function name and its arguments).
I know the Go folks don't like exceptions, but this is an example of them learning the hard way about one useful thing they lost by deciding not to do exceptions.
Exceptions give you stack traces automatically. All of that context (and more) is there without library authors having to manually weave it in at every level of calls.
> Today, there are newer attempts to learn from as well,
> including Dart, Midori, Rust, and Swift.
For what it's worth, we are making significant changes to Dart's generics story and type system in general [1]. We added generic methods, which probably should have been there the entire time.
Generics are still always covariant, which has some plusses but also some real minuses. It's not clear if the current behavior is sufficient.
Our ahead-of-time compilation story for generics is still not fully proven either. We don't do any specialization, so we may be sacrificing more performance than we'd like, though we don't have a lot of benchmark numbers yet to measure it. This also interacts with a lot of other language features in deep ways, like nullability, whether primitive types are objects, how lists are implemented, etc.
Having just spent the last two months writing Go code, exceptions are the thing I miss most (well, besides the ternary operator and map/reduce operations). Not only are errors painful to debug without stack traces, but every single method call is followed by three lines of `if err != nil {`. I am amazed that folks tolerate the sheer amount of repetitive typing the language requires.
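For concreteness, here is a sketch of the repetition being described. All the function names are invented stand-ins for typical fallible calls; the point is that each one forces the same three-line check:

```go
package main

import (
	"errors"
	"fmt"
)

// loadConfig, openDB, and runQuery are hypothetical fallible calls,
// each returning (value, error) in the usual Go style.
func loadConfig() (string, error) { return "cfg", nil }

func openDB(cfg string) (string, error) {
	if cfg == "" {
		return "", errors.New("empty config")
	}
	return "db", nil
}

func runQuery(db string) (int, error) { return 42, nil }

// report chains the three calls; note the identical check after each one.
func report() (int, error) {
	cfg, err := loadConfig()
	if err != nil {
		return 0, err
	}
	db, err := openDB(cfg)
	if err != nil {
		return 0, err
	}
	n, err := runQuery(db)
	if err != nil {
		return 0, err
	}
	return n, nil
}

func main() {
	n, err := report()
	fmt.Println(n, err)
}
```

Three calls, nine lines of error plumbing; with exceptions the happy path would be three lines total.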
> I know the Go folks don't like exceptions, but this is an example of them learning the hard way that about one useful thing they lost by deciding to not do exceptions.
The first thing I do in a Go project, it seems, is reimplement exceptions. It isn't even so much the stack trace as that the _cause_ can be chained on. Often one error causes another, and the error interface in Go is too weak to capture that.
Rob Pike has his blog post about errors being values, but it's basically useless because the standard library hardly uses the more advanced error types. Pretty much every library returns error instead of MoreAdvancedError, which means you are doomed to speaking the lowest common denominator.
(for the record, promising in the documentation an error will always be some type is pretty weak).
There was a proposal at some point to integrate github.com/pkg/errors into the stdlib, since it's pretty much a drop-in replacement to the current errors package, with extras. One of them is that errors contain stack traces that can be printed out if needed. Pretty useful, and still not an exception.
For errors with stacks in golang today, you could try the Meep [1] library.
It's a library for More Expressive Error Patterns. You can declare error types like this:
type ErrFrobnozMalformed struct {
    Frob *Frobnoz
    meep.TraitTraceable
}
... and any type you compose with a meep trait like that gets superpowers, like automatically attached stacks.
I'm the author. I don't think it's perfect -- in particular you really can't avoid a certain amount of boilerplate :( -- but with meep you get stacks, and you get custom error types, and that's worth a lot to me.
Whether or not you use this code, the idea that might be a useful takeaway is that the stack-capturing behaviors (and others) are a trait that you can "mix in". Whether a stack is appropriate depends on the situation. Some errors are fairly regular (e.g. certain kinds of IO halt) and putting a stack on them is not useful (and is CPU-costly). This doesn't necessarily follow any sort of direct inheritance tree. (Java has started doing something similar by adding Even More parameters to exception constructors, e.g. 'capturestack=false'.) I think this is an important point: errors often should have stacks, but not always.
I'm still hugely looking forward to seeing what the Go authors do in the future to make errors smarter. Doing the right thing should be easy, and it's almost impossible to strap something this essential on with a library: special syntax and compiler support for informative errors is warranted.
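The "stacks as an opt-in behavior" idea can be approximated in plain stdlib Go. A rough sketch, not meep's actual implementation, with all names invented:

```go
package main

import (
	"fmt"
	"runtime"
	"strings"
)

// tracedError captures the call stack at construction time,
// roughly what a mixed-in "traceable" trait would provide.
type tracedError struct {
	msg string
	pcs []uintptr
}

// newTracedError records up to 32 frames above the constructor.
func newTracedError(msg string) *tracedError {
	pcs := make([]uintptr, 32)
	n := runtime.Callers(2, pcs) // skip runtime.Callers and this constructor
	return &tracedError{msg: msg, pcs: pcs[:n]}
}

func (e *tracedError) Error() string { return e.msg }

// Stack renders the captured frames as "func (file:line)" lines.
func (e *tracedError) Stack() string {
	frames := runtime.CallersFrames(e.pcs)
	var b strings.Builder
	for {
		f, more := frames.Next()
		fmt.Fprintf(&b, "%s (%s:%d)\n", f.Function, f.File, f.Line)
		if !more {
			break
		}
	}
	return b.String()
}

func main() {
	err := newTracedError("frobnoz malformed")
	fmt.Println(err.Error())
	fmt.Print(err.Stack())
}
```

The capture itself is the CPU-costly part mentioned above, which is why it makes sense as an opt-in trait rather than a property of every error.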
I think Swift got almost everything right, except maybe for not having a GC. Especially everything related to exceptions: the combination of "throws", "defer" and "try"/"try!" gets rid of most of the classic problems with exceptions.
> Generics are still always covariant, which has some plusses but also some real minuses.
How is this even possible? Does the compiler just fail if you try to put a type parameter in contravariant position? Or does it allow it and then blow up at run time?
> Exceptions give you stack traces automatically. All of that context (and more) is there without library authors having to manually weave it in at every level of calls.
I find exceptions really useful, but I also think that with exceptions people tend to lose the very useful panic/error dichotomy. I've seen projects without a single "panic" in the code. All those projects gravitated toward dumb error handling mechanisms, aka "log all errors and continue".
Exceptions don't always give you useful stack traces in a concurrent situation, because the current stack may only reflect a goroutine that's processing data on behalf of another. The real execution context may involve many more.
Along with generics, they should probably also reconsider algebraic data types, such as enums with values. This is the best feature Swift brings to the table, hands down, and it seems to me that it's pretty orthogonal to the rest of the language (although it carries a lot of other features with it, such as pattern matching).
They wrote that they considered it to be redundant with interface programming, but I really don't understand why. Interfaces are about behavior, not data. An int doesn't "behave" like one, it is one. And something that's either an int or an array of strings doesn't "behave" like anything you'd want to describe with an interface...
As an example, one should see how protobuf "oneof" messages are dealt with in Go: a switch on arbitrary types followed by manual typecasting. That's just gross...
It's redundant because Go already has type-switches and type-assertions. Your comments about ints vs arrays vs behavior miss an important fact: an object of any type can be promoted to interface{} (aka "dynamic") and then "pattern matched" on via `x.(type)`. Sure, it's pretty crummy pattern matching to only be able to dispatch on a single tag, but there are some fundamental problems with traditional algebraic data types and pattern matching:
1) Algebraic data types encourage closed systems. You may view this as a positive: it enables exhaustiveness checks. But I view it as a way to make your program more fragile. Go's type asserts let you convert to interface types, so you can add new interface implementers later and not have to go fix up old type-assertions.
2) First-to-match pattern matching complects order with dispatch. Each match clause has an implicit dependency on _all_ of the clauses before it, since you can match Foo{x, 1} and if that fails match Foo{x, y} where you know now that y != 1. This is sometimes useful, but as your patterns grow larger, it's simpler to just match Foo{x, y} and then branch on y == 1. A series of type-asserts with if statements has a little bit of order dependency on it: interface clauses and named wrapper types are first-to-match, but type-switch on struct clauses are completely order independent because there can only be one concrete representation underlying an interface{}.
3) Relying on positional fields causes two classes of problems: A) it's harder to grow your system later, since every pattern match needs to mention the new field you added, and B) you can't _not_ mention a field by name (or at least give it a placeholder name) at every use as well. This is the same issue as the Foo{x, y} vs Foo{X: x, Y: y} notation. It's considered good practice in Go to use the latter, since it's more future-proof, i.e. Foo may grow a Z field and it will be initialized to zero.
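To make points 1 and 2 concrete, here is the type-switch style being described, with invented types. Dispatch is on the single concrete type behind an interface{}, and further discrimination is an ordinary branch on a named field:

```go
package main

import "fmt"

// Two unrelated concrete types, promoted to interface{} and matched.
type Foo struct{ X, Y int }
type Bar struct{ S string }

func describe(v interface{}) string {
	switch v := v.(type) {
	case Foo:
		// Instead of a second pattern Foo{x, 1}, branch on the field.
		if v.Y == 1 {
			return fmt.Sprintf("Foo with Y=1, X=%d", v.X)
		}
		return fmt.Sprintf("Foo{%d, %d}", v.X, v.Y)
	case Bar:
		return "Bar: " + v.S
	default:
		// No exhaustiveness check: the system stays open to new types.
		return "unknown"
	}
}

func main() {
	fmt.Println(describe(Foo{2, 1}))
	fmt.Println(describe(Bar{"hi"}))
}
```

The concrete-type cases are order-independent, as noted above, since only one concrete type can underlie the interface value.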
I think they thought of the standard OOP solution to algebraic data types. Rather than pattern match on the value and then do the work, you dispatch on the value through an interface and do the work. I don't know go so here's some pseudo code:
type Foo = { x : int }
type Bar = { y : string }

interface DoStuff { doStuff() -> void }

function Foo.doStuff() {
    ...do stuff with Foo.x
}

function Bar.doStuff() {
    ...do stuff with Bar.y
}

function main() {
    value := getSomethingThatImplementsDoStuff()
    value.doStuff()
}
Compare this to ML:
type DoStuff = Foo of int | Bar of string

let main =
    let value = getSomethingThatReturnsDoStuff () in
    match value with
    | Foo x -> ...do stuff with x
    | Bar y -> ...do stuff with y
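For reference, the interface-dispatch pseudocode above translates to real Go roughly like this (function and type names kept from the sketch; `getSomethingThatImplementsDoStuff` is still a stand-in):

```go
package main

import "fmt"

type Foo struct{ x int }
type Bar struct{ y string }

// DoStuff is satisfied implicitly by any type with a doStuff method.
type DoStuff interface {
	doStuff() string
}

func (f Foo) doStuff() string { return fmt.Sprintf("did stuff with %d", f.x) }
func (b Bar) doStuff() string { return "did stuff with " + b.y }

// getSomethingThatImplementsDoStuff is a stand-in, as in the pseudocode.
func getSomethingThatImplementsDoStuff() DoStuff { return Foo{x: 7} }

func main() {
	value := getSomethingThatImplementsDoStuff()
	fmt.Println(value.doStuff())
}
```

The trade against the ML version is the one discussed above: each variant carries its own behavior, and adding a variant needs no central match to be updated, but there is no exhaustiveness check and the data cannot be inspected without dispatch or a type switch.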
I've always wanted to like Go, but every time I get ~1,500 lines in a project, I remember my pain points. I totally see why other people like the current version of Go, but as it stands, it's not an ideal match for my brain.
Dependency management is a big pain point for me. I'm really glad to see several of my pain points on the list for this year, including another look at generics.
Generics are genuinely tricky: They allow you to write many kinds of useful functions in a type-safe manner, but every known approach for implementing them adds complexity to the language. C#, Java and Rust all bit the bullet and accepted (some) of this complexity. Maybe Go will find a sweet spot, preserving its simplicity but adding a bit of expressiveness?
Anyway, it pleases me to see that the Go team is thinking hard about this stuff. At the bare minimum, I'm going to be contributing code to other people's open source Go projects for the foreseeable future. :-)
> C#, Java and Rust all bit the bullet and accepted (some) of this complexity. Maybe Go will find a sweet spot, preserving its simplicity but adding a bit of expressiveness?
That's exactly my hope. When I switched from Rust back to Go, I breathed a sigh of relief: I was able to prototype quickly and easily, and my cognitive load felt much lower.
Strangely enough, this felt very similar to switching from NodeJS to Go. In Node, I was constantly worried about what was async or sync, and its dynamic nature made my code feel like the wild west. Both Rust and NodeJS put a lot of mental burden on me, in different ways; Go was definitely a sweet spot in both correctness and ease of use, and I hope they achieve that with generics as well.
NOTE: My Rust programs felt vastly more secure than my Go ones, and I miss that; that part was less cognitive load in favor of Rust. The struggle was mainly at the design phase: I just wanted to mock up some code, and types & borrowing posed many refactoring issues. I hope future Rust tooling will make refactoring a breeze.
> Generics are genuinely tricky: They allow you to write many kinds of useful functions in a type-safe manner, but every known approach for implementing them adds complexity to the language. C#, Java and Rust all bit the bullet and accepted (some) of this complexity. Maybe Go will find a sweet spot, preserving its simplicity but adding a bit of expressiveness?
I seriously doubt Go and the people who maintain it are going to do groundbreaking work in this area. It's an extremely developed area of language design (and still developing way ahead of where Go would ever go).
What "complexity" does simple parametric polymorphism (i.e. forall a) bring? The only thing I can think of is some extra syntax, which is far less complex than a codebase built upon a lack of parametricity. Hell, don't even allow user-defined parametric types and force everyone to stay with Go's parametric builtins but allow programmers to abstract over them. Seems like a no-brainer to me.
I think it is no surprise that one goal of Go is coding in the large, where big teams are involved. For single-person projects like I think you are doing, many people want an intellectually stimulating language, where Go may fall short.
Posts like this really restore my confidence in the future of Go the language.
I very much wish Go to succeed, it's built on a few nice ideas, but where it currently is it has a number of usability impairments that stop me from wanting to work with it.
But I see that these impairments are seen as problems by key developers, and work is underway to eventually fix these problems. (And this is besides the "routine", incremental but very important improvements, such as GC or stdlib.)
> But I see that these impairments are seen as problems by key developers, and work is underway to eventually fix these problems.
What will inevitably happen is that Pike et al. will argue that such things are problems merely because "you're doing it wrong", or that "there's no way to do this without tradeoffs" (generics), and ultimately very little will change.
> Not enough Go code adds context like os.Remove does. Too much code does only
Well, the error interface is { Error() string }, and gophers were told to use errors as values, not errors as types, because supposedly "exceptions are bad". By providing context you are just reinventing your own mediocre exception system. Why use errors as values in the first place if you need context? Just put exceptions in Go, so that people don't need a third-party library to wrap errors in order to trace the execution context.
> Not enough Go code adds context like os.Remove does. Too much code does only
if err != nil {
    return err
}
Is anyone else surprised that forcing programmers to do the tedious, repetitive, and boring work of being a manual exception handler overwhelmingly results in people doing the least amount of effort to make it work?
I feel like so many of the headaches of go could have been avoided had the developers spent any time whatsoever thinking about the programmers using it.
I think the Go team worked very hard to make something that programmers actually like using. Yes, it is annoying to write a lot of `if err != nil` checks, but at the same time, exception handling can be esoteric, whilst it's trivial to see what your example does.
I also feel like a lot of emphasis has been put on keeping the APIs consistent; inconsistent APIs are something a lot of developers will tell you make PHP a nightmare sometimes.
Regarding error context: I'd advocate simple error-chaining using a linked list. If a function fails, it returns an error wrapping the underlying error as the cause, and so on up the stack. The top of the stack can inspect or print the error chain ("A failed because B failed because ..."), or pinpoint the error that was the root cause.
I would love for Go to include something like this:
> In the long-term, if we could statically eliminate the possibility of races, that would eliminate the need for most of the memory model. That may well be an impossible dream, but again I’d like to understand the solution space better.
Unless I'm mistaken, this is an impossible dream as long as shared memory exists. It's the core tradeoff that distinguishes the Erlang runtime from the Go runtime (there are others, but they all stem from this).
Your goals are either memory isolation for better distribution/concurrency/clustering/fault tolerance/garbage collection or shared memory for ability to work with large datasets more efficiently.
It's one of those details where changing it would essentially create a new language. You'd have code, packages and libraries that either worked that way or wouldn't.
IMO, this is an area where Go gets into the dangerous territory of trying to be all things to all people. Be great at what you're good at, which is the "good enough, fast enough, portable enough, concurrent enough, stable enough" solution for backend services in most standard web architectures.
If people need distributed, fault-tolerant, isolated, race-proof, immutable runtimes that aren't quite as top-end fast and aren't ideal for giant in-RAM data structures... there's already a well-established solution by the name of Erlang (and Elixir). They made the tradeoffs already so you don't have to reinvent them.
> Your goals are either memory isolation for better distribution/concurrency/clustering/fault tolerance/garbage collection or shared memory for ability to work with large datasets more efficiently.
The isolation is not physical, but logical. Implementations are free to use zero-copying and make everything just as efficient. Theoretically, the compiler could even optimize message-passing overhead away on shared-memory systems in some cases. The opposite is also true: shared memory is also logical, and there is a lot of room for clever things, like eliminating races.
Excellent read. Official package management looks more like a question of 'when' than 'if' now. As someone in the Java world who never graduated to Maven/Gradle and stuck with Ant, I hope it will be minimalistic and immediately useful to Go users.
> all the way up the call stack, discarding useful context that should be reported (like remove /tmp/nonexist: above).
It's simple. With exceptions, we got used to "errors" that are, by default, debuggable. But Go got rid of debuggable-by-default errors, and programmers are lazy.
The problem isn't the methodology. It's the library support. I (for my employer) wrote a library a few years ago (~Go 1.1) with "errs.New" and "errs.Append(err, ..work like fmt..)" that generates errors that look like:
The go vet integration with go test looks interesting. I'm currently using github.com/surullabs/lint [1] to run vet and a few other lint tools as part of go test. It provides a nice increase in productivity for my dev cycle. Having vet + other lint tools integrated into my dev+test cycle has caught a number of bugs before they hit CI.
I would really like to see best practices in the documentation on how to include the right amount of error context, as mentioned in the article.
Also, what to put and not put in context objects is really important to document, as it could easily snowball into a catch-all construct and be totally misused after a while.
> Test results should be cached too: if none of the inputs to a test have changed, then usually there is no need to rerun the test. This will make it very cheap to run “all tests” when little or nothing has changed.
I'll be curious to see how this pans out, because it sounds like a very deep rabbit hole. Is there any precedent for this in other language toolchains? I've seen some mondo test suites in Java that could desperately use it.
Gradle does it, although only at the task level. In fact, Gradle does this for all tasks, not just running tests, so re-running a command is always as cheap as possible.
Well, with some caveats. Firstly, tasks have to be written to support this mechanism, and although the built-in tasks are, not all third-party ones are, and those will always be re-run. Secondly, if a test task fails, it will be re-run. That makes it easy to re-run tests which failed for extraneous reasons.
In your mondo case, to take advantage of this, you'd want to break your test suite up into multiple tasks. You already get a task per subproject, but you could easily define multiple tasks per subproject. I've often had separate tasks for unit tests, integration tests, and browser tests.
Make also does this if you describe tests as a rule to create a test report.
Well, for one thing, the Go compiler itself already only recompiles things that have changed, at least as long as you are doing `go build -i` and such. Only testing what you had to recompile seems like a pretty easy thing to do.
Happy to see that being able to not use GOPATH is at last being considered seriously! For years, the Go people wanted to force people to work their way. We can still see this state of mind in the associated bug report: https://github.com/golang/go/issues/17271.
> it would be nice to retroactively define that string is a named type (or type alias) for immutable []byte
Perhaps an array is better than a slice, so `immutable [...]byte`. Also, the for-range loop would have to behave differently, so I guess it's a version 2 change. And if semantics are changing anyway, I'd prefer a `mutable` keyword to an `immutable` one.
> Perhaps an array is better than a slice, so `immutable [...]byte`
Slice is the right choice, I think. Were they to choose array as the alias, the language would need to "bubble up" the "array length" type parameter to the string type, which would make strings of different byte lengths incompatible types. But hey, Go is already pretty clearly inspired by Pascal, so maybe we'll see that after all :-)
[1]: https://github.com/dart-lang/dev_compiler/blob/master/STRONG...
[1]: https://godoc.org/github.com/polydawn/meep
xyzzy_plugh|9 years ago
Switch on arbitrary types, followed by typecasting? That's the Go way. No surprises. Explicit instead of implicit behavior.
stewbrew|9 years ago
Sometimes one wishes people from the Go team had spent some time hacking on ML-derived languages.
dilap|9 years ago
There's a whole lot of space between "include useful context in errors" and "exceptions".
(And FWIW, Go does have exceptions, it just calls them panics, and has a culture of not using them for "known knowns" error conditions.)
[+] [-] stouset|9 years ago|reply
I feel like so many of the headaches of go could have been avoided had the developers spent any time whatsoever thinking about the programmers using it.
[+] [-] tombert|9 years ago|reply
I also feel like there has been a lot of emphasis put on keeping the APIs consistent, which is something that a lot of developers will tell you makes PHP a nightmare sometimes.
[+] [-] davekeck|9 years ago|reply
I would love for Go to include something like this:
[+] [-] bjacokes|9 years ago|reply
[+] [-] tptacek|9 years ago|reply
[+] [-] brightball|9 years ago|reply
Unless I'm mistaken, this is an impossible dream as long as shared memory exists. It's the core tradeoff that distinguishes the Erlang runtime from the Go runtime (there are others, but they all stem from this).
Your goals are either memory isolation for better distribution/concurrency/clustering/fault tolerance/garbage collection or shared memory for ability to work with large datasets more efficiently.
It's one of those details that changing it would essentially create a new language. You'd have code, packages and libraries that either worked that way or they wouldn't.
IMO, this is an area where Go gets into the dangerous territory of trying to be all things to all people. Be great at what you're good at, which is the "good enough, fast enough, portable enough, concurrent enough, stable enough" solution for backend services in most standard web architectures.
If people need distributed, fault-tolerant, isolated, race-proof, immutable runtimes that aren't quite as fast at the top end and aren't ideal for giant in-RAM data structures... there's already a well-established solution by the name of Erlang (and Elixir). They made the tradeoffs already so you don't have to reinvent them.
[+] [-] zzzcpan|9 years ago|reply
The isolation is not physical but logical. Implementations are free to use zero-copying and make everything just as efficient. Theoretically, the compiler could even optimize the message-passing overhead away on shared-memory systems in some cases. The opposite is also true: shared memory is also logical, and there is room for a lot of clever things, like eliminating races.
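A small Go sketch of that logical isolation: goroutines send results over a channel instead of mutating shared state, so no locks are needed, while the runtime remains free to pass pointers under the hood. The function and names here are made up for illustration:

```go
package main

import "fmt"

// sumSquares fans work out to goroutines and collects the results
// over a channel. Workers send values rather than updating a shared
// counter, so only one goroutine ever touches the accumulator.
func sumSquares(n int) int {
	results := make(chan int)
	for i := 1; i <= n; i++ {
		go func(v int) {
			results <- v * v // send the result; don't share state
		}(i)
	}
	sum := 0
	for i := 0; i < n; i++ {
		sum += <-results // only this goroutine accumulates
	}
	return sum
}

func main() {
	fmt.Println(sumSquares(3)) // 14 (1 + 4 + 9)
}
```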
[+] [-] geodel|9 years ago|reply
[+] [-] Traubenfuchs|9 years ago|reply
[+] [-] sbov|9 years ago|reply
It's simple. With exceptions, we got used to "errors" that are debuggable by default. But Go got rid of debuggable-by-default errors, and programmers are lazy.
[+] [-] voidlogic|9 years ago|reply
[+] [-] zyxzkz|9 years ago|reply
I know there probably won't be immediate fixes, but it gives me confidence in Go's future.
[+] [-] jimbokun|9 years ago|reply
I think that's true, but I do think it's been said by a number of Go users and advocates, which is where the perception comes from.
[+] [-] kasey_junk|9 years ago|reply
The builtin generics are a mess. At this point I don't trust the go team to implement any more complicated generic system.
[+] [-] vendakka|9 years ago|reply
[1] https://github.com/surullabs/lint
Disclaimer: I'm the author of the above library.
[+] [-] maxekman|9 years ago|reply
Also, what to put and what not to put in context objects is really important to document, as it could easily snowball into a catch-all construct and be totally misused after a while.
[+] [-] kibwen|9 years ago|reply
I'll be curious to see how this pans out, because it sounds like a very deep rabbit hole. Is there any precedent for this in other language toolchains? I've seen some mondo test suites in Java that could desperately use it.
[+] [-] twic|9 years ago|reply
Well, with some caveats. Firstly, tasks have to be written to support this mechanism, and although the built-in tasks are, not all third-party ones are, and those will always be re-run. Secondly, if a test task fails, it will be re-run. That makes it easy to re-run tests which failed for extraneous reasons.
In your mondo case, to take advantage of this, you'd want to break your test suite up into multiple tasks. You already get a task per subproject, but you could easily define multiple tasks per subproject. I've often had separate tasks for unit tests, integration tests, and browser tests.
Make also does this if you describe tests as a rule to create a test report.
[+] [-] secure|9 years ago|reply
[+] [-] Vendan|9 years ago|reply
[+] [-] vbernat|9 years ago|reply
[+] [-] vorg|9 years ago|reply
Perhaps an array is better than a slice, so `immutable [...]byte`. Also, the for-range loop would have to behave differently, so I guess it's a version 2 change. And if semantics are changing anyway, I'd prefer a `mutable` keyword to an `immutable` one.
[+] [-] tomjakubowski|9 years ago|reply
Slice is the right choice, I think. Were they to choose array as the alias, the language would need to "bubble up" the "array length" type parameter to the string type, which would make strings of different byte lengths incompatible types. But hey, Go is already pretty clearly inspired by Pascal, so maybe we'll see that after all :-)
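A quick sketch of why that length parameter matters; the variable names are made up, but the type behavior is standard Go:

```go
package main

import "fmt"

// byteLen accepts any []byte: a slice erases the length from the
// type, so one signature covers strings of every size. Arrays
// can't do this, because the length is part of an array's type.
func byteLen(b []byte) int { return len(b) }

func main() {
	hello := [5]byte{'h', 'e', 'l', 'l', 'o'}
	hi := [2]byte{'h', 'i'}
	// hello = hi // compile error: cannot use hi ([2]byte) as [5]byte

	fmt.Println(byteLen(hello[:])) // 5
	fmt.Println(byteLen(hi[:]))    // 2
}
```

If strings were aliased to arrays, "hello" and "hi" would have incompatible types, and every function taking a string would need the length as a type parameter.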