The math/rand package now automatically seeds the global random number
generator (used by top-level functions like Float64 and Int) with a random
value, and the top-level Seed function has been deprecated. Programs that
need a reproducible sequence of random numbers should prefer to allocate
their own random source, using rand.New(rand.NewSource(seed)).
We've had some truly nasty bugs from people who weren't familiar with the prior behavior of having a default seed of zero for the global random-number generator. This is going to save many people so much heartache.
> We've had some truly nasty bugs from people who weren't familiar with the prior behavior of having a default seed of zero for the global random-number generator. This is going to save many people so much heartache.
Agreed, but worth noting that this does not make it cryptographically secure. The output can still be predicted. For cryptographic security, you need crypto/rand: https://pkg.go.dev/crypto/rand
In general, my advice is to use the cryptographically secure RNG unless you specifically know you need reproducibility (e.g. scientific simulation, or map generation in video games). With non-secure RNGs, it's very easy to accidentally expose yourself to problems without realizing it.
Hear me out, I think in its deprecated-but-not-removed state it is actually more dangerous.
Projects that have been seeding the random generator like they should will suddenly think “oh, I don’t need to do that anymore” and get rid of their manual seeding.

Then a compromised or rogue library decides to seed the global generator itself to a hard-coded value in an `init()`, meaning that merely importing the library makes the seed static again.

It would look pretty innocuous and non-obvious in code AND be potentially pretty difficult to notice happening in a lot of use cases. For bonus points/to make it slightly harder to detect, they could even rotate through a small set of seeds.

The right answer, probably just generally anyway, is to never use the global generator and always create your own instance. Global state is a danger once again.
I remember when I started programming this was one of the first quirks that really surprised me, I can just hear myself exclaiming "This is meant to be random! Why do I keep getting the same result?!"
How does the seed value get created randomly? I presume it doesn't use the same global random number generator, since that would still make it deterministic?
Hopefully they accept the slice version as well. Both of them contain very helpful functions that would be nice to have in the standard library instead of rewriting them everywhere.
> Go 1.20 supports collecting code coverage profiles for programs (applications and integration tests), as opposed to just unit tests.
> To collect coverage data for a program, build it with go build's -cover flag,
This is absolutely awesome! I'm looking forward to trying this on some server binaries!
> The vet tool now reports use of the time format 2006-02-01 (yyyy-dd-mm) with Time.Format and time.Parse. This format does not appear in common date standards, but is frequently used by mistake when attempting to use the ISO 8601 date format (yyyy-mm-dd).
It's frequently used by mistake because Go doesn't allow datetime layouts to use the standard YYYY, MM, DD, HH, MM, etc., which, ironically, is what they used for clarity in their own release notes.
I don't understand why Go still forces datetime formats to be specified using "magic numbers" from some time in 2006.
Oh, wow, thanks for pointing that out. I have to build HashiCorp Vault from source with CGO enabled because of the broken DNS resolver. Otherwise, it's completely unusable with a split-tunnel VPN.
> Go 1.20 is the last release that will run on any release of Windows 7, 8, Server 2008 and Server 2012. Go 1.21 will require at least Windows 10 or Server 2016.
This is interesting. I wonder what Go 1.21 will depend on that requires at least Windows 10?
Nothing specifically, AFAIK; it's just fewer platforms to test and support, making development easier overall. Microsoft ended extended support for Windows 7 in 2020, and the special enterprise security updates end this month. Windows 8 reaches the end of extended support in July this year (before Go 1.21 is released); I can't find anything about any further security updates; I think few people care, as Windows 8 is used even less than Windows 7 today.
I wouldn't be surprised if it's more a "we're not going to bother to keep hooking up new things or doing fixes in a way that works and is tested on old operating systems" than "there isn't a way to..." type thing. Some security stuff may break the mold on that though.
Nothing concrete as it seems. It means that new releases are no longer tested with the old versions of Windows on their builders, and if you open a bug report about a problem with an unsupported version of Windows, nobody will care.
If they wanted to, they could now use the `LOAD_LIBRARY_REQUIRE_SIGNED_TARGET` flag in LoadLibraryEx.
Aside from that, there are a broad swath of flags to LoadLibraryEx that are only supported on earlier platforms with a KB [1] from over a decade ago installed. My suspicion is that Go has decided that requiring a security KB (while good hygiene) isn't a supportable situation.
I would assume it's problems with Microsoft not maintaining those older windows sdk's in favor of their current monolithic windows sdk which only seems to target 10 and 11.
> The specification now defines that struct values are compared one field at a time, considering fields in the order they appear in the struct type definition, and stopping at the first mismatch.
This is interesting because in certain cases it can be a performance hit when comparing structs which have been declared in alignment order to save memory. Simple example:
    type t struct {
        a int64
        b int32
    }

    t1 := t{a: 1, b: 2}
    t2 := t{a: 1, b: 3}
    same := t1 == t2
When comparing `t1` and `t2`, the runtime first compares the `a` fields, which are equal, and then the `b` fields, which differ. Only after doing both comparisons does it figure out that the values are different: every field declared before the mismatching one has to be traversed before the answer is known.

Of course this is a trivial example; in real-world cases structs can have many more fields with much larger contents. The point is that the optimal ordering for alignment and the optimal ordering for comparison seem to be different.
That is just the language definition. It is fine for an implementation to actually compare both at the same time, as long as you cannot observe within the language that this happened. If we can't tell whether the read of `b` happened together with `a` or before `a` (hello Spectre), then it should be fine for the implementation to have already done the comparison.
This is more of a constraint if the struct contains a comparison that can panic. The panic must happen in order, or not at all, depending on how the fields are listed.
    type t struct {
        a int64
        b any
    }
The comparison should not panic on `b` if the `a` values already differ.
It is great that Go 1.20 improves compile times again after they regressed when generics were added in Go 1.18 [1]!

Overall, I think adding generics to Go was a big mistake. It brings the following drawbacks:

- Slower compile times, even if generics aren't used. This slows down development pace in Go.

- Reduced code readability if generics are used. This slows down development pace in Go.

- Increased complexity of the Go compiler. This slows down Go compiler development and increases the chances for bugs.

- Very low adoption of generics in practice, since they aren't useful in most Go code bases. A year after their release in Go 1.18, generics are actively used only in a few freaky packages.

The only useful thing to come from Go generics is the syntactic sugar that allows replacing `interface{}` with `any`.
You forgot to list the most useful feature of adding generics: people on the internet can no longer say "lol no generics", drastically reducing the amount of garbage comments about Go.
At first I was on the fence too. I don't use it all the time, but when I need it, it works as expected, and it is much less of a hassle than I remember C++ templates being.
I have not seen a lot of comments complaining about the slower compile times. In my own experience it didn't really have an impact. But I agree the compiler should not become slower over time, so I appreciate the effort of the Go team to bring the compiler speed back.

I don't think code readability is impacted that much. The square brackets work well. I find the angle brackets from C++ harder to read, and there is the problem that >> is a token and cannot be used for two closing template angle brackets.

The increased complexity of the compiler is an issue, but it cannot be avoided if you want to support generics. They took the time to get it right, and as I said, it works for me.

I don't think there is low adoption. Using type parameters visibly in a public API breaks the API, which is why there are not a lot of uses in the standard library and in popular packages yet. But this will change when maps and slices are integrated into the standard library, since those provide completely new APIs. Yesterday I found a library for writing and reading parquet files that uses them quite extensively. But since I was simply checking which libraries exist to assess how well the file format is supported, I can't say much about whether that library's use of type parameters is a good one.
You acknowledge in the first sentence that compile times are back in line with Go 1.17 (i.e. pre generics), yet you claim that generics mean slower compile times.
I use generics all the time in my Go code, in particular with the exp/slices library and lo. I do not find it less readable. I think readability is subjective based on people's programming experience and familiarity with type systems.
I’m gonna be that guy, but do you have sources for any of this? That link shows that compiler performance is the same as before generics, for instance.
Are there more bugs in the compiler? Is readability reduced, and having an effect on pace? Especially if adoption is so low to begin with? Is adoption actually so low, or just rising?
I mean, here are the Go 1.18 release notes and what they have to say about the dev team's level of faith in the stability of their implementation: https://go.dev/doc/go1.18

Is it a surprise that uptake is low, given a discouragement like that?
Really glad to see the ability to convert seamlessly from slices to arrays. Not that I use it often, but it seemed like such a 'natural thing' to convert between.
>Comparable types (such as ordinary interfaces) may now satisfy comparable constraints, even if the type arguments are not strictly comparable (comparison may panic at runtime). This makes it possible to instantiate a type parameter constrained by comparable (e.g., a type parameter for a user-defined generic map key) with a non-strictly comparable type argument such as an interface type, or a composite type containing an interface type.
Wait, isn't that the whole point of the constraint in the first place, to keep you from using it with things that aren't comparable? Wouldn't it make more sense to have the constraint be a requirement in the interface itself, so that you can't create an interface value from a type that isn't comparable?
math/rand: The original behavior never bothered me and actually motivated deterministic tests. File this under "won't be bothered to read the docs", I guess.
"The directory $GOROOT/pkg no longer stores pre-compiled package archives for the standard library: go install no longer writes them, the go build no longer checks for them, and the Go distribution no longer ships them. Instead, packages in the standard library are built as needed and cached in the build cache, just like packages outside GOROOT. This change reduces the size of the Go distribution and also avoids C toolchain skew for packages that use cgo."
chimeracoder | 3 years ago
(Also, FYI, the default seed was 1, not 0).
remus | 3 years ago
I feel strangely vindicated seeing this change.
benatkin | 3 years ago
I wish I had found this little throwback (whether accidental or not) before it got fixed - though I definitely agree with the change.
donio | 3 years ago
The library contains generics-based helper functions for working with maps.
https://github.com/golang/go/issues/57436#issuecomment-14125...
https://pkg.go.dev/golang.org/x/exp/maps
https://cs.opensource.google/go/x/exp/+/master:maps/maps.go
latchkey | 3 years ago
I've been using this library for ages now...
https://github.com/cornelk/hashmap
Always entertains me to see developers write all the lock/unlock code when they could just use that.
gepoch | 3 years ago

    01/02 03:04:05PM '06 -0700

So that's why it's in 2006, since you asked.

https://pkg.go.dev/time#pkg-constants
arp242 | 3 years ago
https://github.com/golang/go/issues/57003
https://github.com/golang/go/issues/57004
[1] https://support.microsoft.com/en-us/topic/microsoft-security...
[1] https://go.dev/doc/go1.20#compiler
jen20 | 3 years ago

Speak for yourself: I prefer generics to the mess of copy-pasta or generated code that one had to use before.
12345hn6789 | 3 years ago

In fact, it can improve readability and maintainability in some cases, instead of having multiple structs copy-pasted everywhere.
tiffanyh | 3 years ago
Does anyone still actively use Usenet?
latchkey | 3 years ago
https://tomaszs2.medium.com/%EF%B8%8F-go-1-20-released-its-l...
einpoklum | 3 years ago
https://www.youtube.com/watch?v=C1hdAakECUM