> In general everything about it feels like it makes projects easy to work on for 5 days, abandon for 2 years, and then get back into writing code without a lot of problems.
To me this is one of the most underrated qualities of go code.
Go is a language that I started learning years ago, and it hasn't changed dramatically since. So my knowledge is still useful, even almost ten years later.
I've picked up some Go projects after no development for years, including some I didn't write myself as a contractor. It's typically been a fairly painless experience. Typically dependencies go from "1.3.1" to "1.7.5" or something, and generally it's a "read changelogs, nothing interesting, updating just works"-type experience.
On the frontend side it's typically been much more difficult. There are tons of dependencies, everything depends on everything else, there are typically many new major releases, and things can break in pretty non-obvious ways.
It's not so much the language itself, it's the ecosystem as a whole. There is nothing in JavaScript-the-language or npm-the-package-manager that says the npm experience needs to be so dreadful. Yet here we are.
there’s just 1 static binary, all I need to do to deploy it is copy the binary. If there are static files I can just embed them in the binary with embed.
As for sqlc, I really wanted to like it, but it had some major limitations and minor annoyances last time I tried it a few months ago. You might want to go through its list of issues[1] before adopting it.
Things like no support for dynamic queries[2], one-to-many relationships[3], embedded CTEs[4], composite types[5], etc.
It might work fine if you only have simple needs, but if you ever want to do something slightly sophisticated, you'll have to fall back to the manual approach. It's partly understandable, though: it can't realistically support every feature of every DBMS, and it's explicitly not an ORM. But I still decided to stick with the manual approach for everything, instead of wondering whether something is or isn't supported by sqlc.
One tip/gotcha I recently ran into: if you run Go within containers, you should set GOMAXPROCS appropriately to avoid CPU throttling. Good explanation here[6], and solution here[7].
I agree that sqlc has limits, but for me it is great because it takes care of 98% of the queries (made up number) and keeps them simple to write. I can still write manual queries for the rest of them so it's still a net win.
It gets mentioned a lot in the context of database/sql and sqlc, but Jet has been a great alternative so far, most notably because dynamic query support is a non-issue there.
That loop function should really have a Context so it can be cancelled; that's future work. But the idea stands -- it should be considered normal for transactions to fail, so you should always have a retry loop around them.
It's controversial for many good reasons. You make the general claim that retrying a db transaction should be the rule, when most experts agree that it should be the exception. Just in the context of web development, it can be disputed on the grounds that a db transaction is just one part of a bigger contract that includes a user at the other end of a network, a request, a session, and a slew of other possibly connected services. If one thing shows signs of being unstable, everything should fail. That's the general wisdom.
More specific to the code that you linked to, the retry happens in only two specific cases. Even then, I personally don't find what it's doing to be such great engineering. It hacks its way around something that should really be fixed by properly setting the db engine. By encroaching like this, it effectively hides the deeper problem that SQLite has been badly configured, which may come to bite you later.
Failing transactions would raise a stink earlier. Upon inquiry, you'd find the actual remedy, resulting in tremendous performance. Instead, this magic loop is trying to help SQLite be a database and it does this in Go! So you end up with these smart transactions that know to wait in a queue for their turn. And for some time, nobody in the dev team may be aware that this can become a problem, as everything seems to be working fine. The response time just gets slightly longer and longer as the load increases.
Code that tries to save failing things at all cost like this also tends to do this kind of glue and duct tape micromanaging of dependencies. Usually with worse results than simply adjusting some settings in the dependencies themselves. You end up with hard to diagnose issues. The code itself becomes hard to reason about as it's peppered with complicated ifs and buts to cover these strange cases.
Transactions are hard, and in reality there's a shit-ton of things people do that have no right to be anywhere near a transaction (but still are); transactions were a good imperative kludge at the time that has just warped into a monster people have kinda accepted over the years.
A loop is a bad construct imho. Something I like far better is the Mnesia approach, which simply decides that transactional updates are self-contained functional blocks and lets the database manage the transactional issues (yes, this eschews the regular SQL interfaces and the db/application separation, but it could probably be emulated to a certain degree).
You'll just end up looping until your retry limit is reached. SQLite just isn't very good at upgrading read locks to write locks, so the appropriate fix really is to prevent that from happening.
OK, a bunch of the replies here seem to be misunderstanding #1. In particular, the assumption is that the only reason a transaction might fail is that the database is too busy.
I come from the field of operating systems, and specifically Xen, where we extensively use lockless concurrency primitives. One prime example is a "compare-exchange loop", where you do something like this:
y = shared_state_var;
do {
    oldx = y;
    newx = f(oldx); // f may be arbitrarily complicated
} while ((y = cmpxchg(&shared_state_var, oldx, newx)) != oldx);
Basically this reads oldx, mutates it into newx (using perhaps a quite complicated set of logic). Then the compare exchange will atomically:
- Read shared_state_var
- If and only if this value is equal to oldx, set it to newx
- In either case, return the value that was read
In the common case, when there's no contention, you read the old value, see that it hasn't changed, and then write the new value. In the uncommon case, you notice that someone else has changed the value, and so you'd better re-run the calculations.
From my perspective, database transactions are the same thing: you start a transaction, read some old values, and make some changes based on those values. When you commit the transaction, if some of the things you've read have been changed in the meantime, the transaction will fail and you start over again.
That's what I mean when I say "database transactions are designed to fail". Of course the transaction may fail because you have a connection issue, or a disk issue, or something like that; that's not really what I'm talking about. I'm saying specifically that there may be a data race due to concurrent accesses. Whenever more than one thing is accessing the database, there is always a chance of this happening, regardless of how busy the system is -- even if in an entire week you only have two transactions, there's still a chance (no matter how small) that they'll be interleaved such that one transaction reads something which is then written to before the transaction is done.
Now SQLite can't actually have this sort of conflict, because it's always single-writer. But essentially what that means is that there's a conflict every time there are two concurrent writes, not only when some data was overwritten by another process. Something that happens at a very very low rate when you're using a proper RDBMS like Postgres now happens all the time. But the problem isn't with SQLite, it's with your code, which has assumed that transactions will never fail due to concurrency issues.
I always see SQLite recommended, but every time I look into it there are some non-obvious subtleties around transaction locking, retry behavior, and WAL mode. By default, if you don't tweak things right, frequent SQLITE_BUSY errors seem to occur at non-trivial QPS.
Is there a place that documents what the set-and-forget setting should be?
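Not an official one as far as I know, but the settings that come up again and again in community write-ups are below. Treat this as a baseline to verify against your driver (some let you set these in the DSN) rather than gospel:

```sql
PRAGMA journal_mode = WAL;   -- readers no longer block the writer
PRAGMA busy_timeout = 5000;  -- wait up to 5s for a lock instead of failing immediately with SQLITE_BUSY
PRAGMA synchronous = NORMAL; -- the usual pairing with WAL
-- Start write transactions with BEGIN IMMEDIATE so the write lock is taken
-- up front, avoiding the read-to-write lock upgrade that causes SQLITE_BUSY.
```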
You shouldn't blindly retry things that fail as a default, and you should really not default into making the decision of what to do on a server that is just on the middle between the actual user and the database.
Handling errors on the middle is a dangerous optimization.
Others have said much about a transaction loop, but I also don't think that database transactions are necessarily designed to fail in the sense that the failure is a normal mode of operation. Failing transactions are still considered exceptional; their sole goal is to provide logical atomicity.
GOMEMLIMIT has really cut down on the amount of time I’ve had to spend worrying about the GC. I’d recommend it. Plus, if you’re using kubernetes or docker, you can automatically set it to the orchestrator-managed memory limit using something like https://github.com/KimMachineGun/automemlimit — no need to add any manual config at all.
stdlib templates are a bit idiosyncratic and probably not the easiest to start with, but they do work and don't have "weird issues" AFAIK. What issues did you encounter?
I am just trying Templ. I like what I am seeing for the most part. There are some tooling ergonomics to work out: lots of "suddenly the editor thinks everything is an error and nothing will autoimport or format" before it goes back to mostly working. Go-to-definition goes to the autogenerated code instead of the templ file. A couple of things like that. But soooooooooo much better to deal with code gen than html/template. That thing is a pita.
Good to see the author's mention of routing. I've been mentally stuck with mux for a long time and didn't pay attention to the new release features. Happy that I always find things like these on HN.
What I love about Go is its simplicity and lack of framework dependency. Go is popular because it has no dominating framework. Nothing wrong with frameworks when they fit the use case, but I feel we have become over-dependent on them, and Go brings that freshness of just using the standard library, plus some decent battle-tested 3rd-party libraries, to create something decent.
I personally love "library over framework" mindset and I found Go to do that best.
Also, whether you want to build a web app or cli tool, Go wins there (for me at least). And I work a lot with PHP and .NET as well and love all 3 overall.
Not to mention how easy it was for someone like me, who had never written Go before, to get up and running with it quickly. Oh, did I mention that I personally love the explicit error handling which gets a lot of hate (I never understood why)? I can do if err != nil all day.
I like Go for this reason as well. In Python I found the Flask framework to be suitably unobtrusive enough to be nice to use (never liked Django), but deploying python is a hassle. Go is much better in that area. The error handling never bothered me either.
I think if Go shipped better support for auth/sessions in the standard library more people would use it. Having to write that code yourself (actually not very hard, but intimidating if you've never done it before) deters people and ironically the ease of creating Go packages makes it unclear which you should use if you're not going to implement it yourself.
I recently put together a "stack" of libraries to use for a new webapp project in Go; here's what I ended up using:
- go-chi for routing
- pgx for Postgres driver
- pressly/goose for migrations (I like how it can embed migrations into the binary as long as they are in SQL/not Go)
- go-jet (for type-safe SQL; sort of - it lets you write 100% Go that looks like 98% SQL)
- templ
- htmx (not Go-specific, but feels like a match made in heaven for templ)
- authboss for auth
I'm very happy with all of these choices, except maybe authboss - it's a powerful auth framework, but it took a bit of figuring out since the documentation is not very comprehensive; but it worked out in the end.
Also sometimes if I have two tables where I know I’ll never need to do a JOIN between them, I’ll just put them in separate databases so that I can connect to them independently.
Are there any "fronty" back-end (or straight client/desktop) jobs using Go? I.e., I'd like to use Go on the job, but all I see is AWS/Kubernetes/mix-of-DevOps kind of positions.
For application development, Go is underrated. And heavily so. I say this coming from Python, which is really great, but I like Go's damn simplicity more, and it's reflected everywhere.
What makes me happy is that lots of critical infrastructure tooling is also in Go, from databases to web servers and cluster orchestrators.
I've been using go for a month now in a new job and hate it. It feels like they learned nothing from the past 20 years of language development.
Just one huge problem is that they REPEATED Java's million/billion dollar mistake with nulls. The usual way to get HTTP headers in Go cannot distinguish between an empty header value and no header at all because the method returns "nil" for both these cases. They could've adopted option types, but instead we are back to this 90s bullshit of conflating error types with valid values. If you're programming defensively, every single object reference anywhere has to be checked for nil or risk panicking. Why, after we literally named this a billion-dollar mistake in Java, would anyone fucking do this again?
We have helper methods in our codebase just to do this:
ctx.get(thing).map(|x| x == Thing.A).unwrap_or(false)
In any sane language that's one line. In Go, we have to make helper methods for the simplest things because the simplest 1-liner becomes 4 lines with the nil/error check. We have 100 helpers that do some variation of that, because everything is so verbose that the code would become unreadable without them.
I hate that they made and popularized this backwards dumpster fire of a language when we should know much better by now.
I don't think your example is very compelling but I completely agree with your general point.
I read the Go book by Donovan and Kernighan and I have been working full-time in Go for the last year (my work is otherwise interesting so this is tolerable). It is painfully obvious that the authors are stuck in 1986 in terms of language design. Go is C with modernized tooling (in some ways it's worse...).
It's a horrible idea that has been extremely well executed. And the idea is essentially to make a language as easy as possible for people with imperative language brain damage to learn, make it as simple as possible and then make it simpler than that.
A good example is that despite taking almost everything verbatim from C, the authors decided that the ability to specify that some variable is read-only (i.e. `const`) is "not useful", so one of the few redeeming qualities of C is simply absent from Go.
Go as a language is not fun at all. Nor very good. It has a weaker type system and fewer productivity- and readability-enhancing language features than C#, Java, Kotlin, and TypeScript, and no null checks.
Go as a runtime is outstanding.
Go's tooling, stability and governance are very good.
Sounds like you are storing interfaces in the context. I wouldn't do that. I do hit the occasional nil reference error, but it is usually very rare. If you deal with concrete types, not interfaces, you don't have to worry about that very weird nil but not nil thing. And always use constructors.
I fully agree that Go's error handling needs improvement, but you have gone into a rage about the language while fully forgetting the VERY basic Go "comma ok" idiom used everywhere.
> I hate that they made and popularized this backwards dumpster fire ..
Well, I think you should hate the fact that other language authors, despite using better programming paradigms and the advancements of the last two decades, have not been able to popularize their efforts enough to displace Go.
Go devs did what they did and made it open source. And from what I see they did not do any relentless marketing to make it popular.
voigt | 1 year ago
arp242 | 1 year ago
yegle | 1 year ago
Having a true single binary bundling your static resources is so convenient.
linhns | 1 year ago
https://github.com/golang/pkgsite
avinassh | 1 year ago
dullcrisp | 1 year ago
okibry | 1 year ago
imiric | 1 year ago
[1]: https://github.com/sqlc-dev/sqlc/issues/
[2]: https://github.com/sqlc-dev/sqlc/issues/3414
[3]: https://github.com/sqlc-dev/sqlc/issues/3394
[4]: https://github.com/sqlc-dev/sqlc/issues/3128
[5]: https://github.com/sqlc-dev/sqlc/issues/2760
[6]: https://kanishk.io/posts/cpu-throttling-in-containerized-go-...
[7]: https://github.com/uber-go/automaxprocs
bornfreddy | 1 year ago
0x_rs | 1 year ago
https://github.com/go-jet/jet/
gwd | 1 year ago
OK, here's a potentially controversial opinion from someone coming into the web + DB field from writing operating systems:
1. Database transactions are designed to fail
Therefore
2. All database transactions should be done in a transaction loop
Basically something like this:
https://gitlab.com/martyros/sqlutil/-/blob/master/txutil/txu...
mekoka | 1 year ago
whizzter | 1 year ago
https://www.erlang.org/doc/apps/mnesia/mnesia_chap4.html
tedunangst | 1 year ago
Thaxll | 1 year ago
gwd | 1 year ago
krackers | 1 year ago
marcosdumay | 1 year ago
lifthrasiir | 1 year ago
returningfory2 | 1 year ago
rad_gruchalski | 1 year ago
physicles | 1 year ago
nickzelei | 1 year ago
arccy | 1 year ago
trustno2 | 1 year ago
Sooner or later you will hit html/template, and realize it's actually very weird and has a lot of weird issues.
Don't use html/template.
I grew to like Templ instead
emmanueloga_ | 1 year ago
Another go mod that helps a lot when massaging JSON (something most web servers end up doing sooner or later) is GJSON [2].
--
1: https://github.com/a-h/templ
2: https://github.com/tidwall/gjson
arp242 | 1 year ago
sethammons | 1 year ago
srameshc | 1 year ago
JodieBenitez | 1 year ago
codegeek | 1 year ago
A big Go fan.
jeffreyrogers | 1 year ago
geoka9 | 1 year ago
ncruces | 1 year ago
> Also sometimes if I have two tables where I know I’ll never need to do a JOIN between them, I’ll just put them in separate databases so that I can connect to them independently.
If this data belongs together and you're just doing this to improve concurrency, this may be a case where BEGIN CONCURRENT helps: https://sqlite.org/src/doc/begin-concurrent/doc/begin_concur...
If you want to experiment with BEGIN CONCURRENT in Go you could do worse than try my SQLite driver: https://github.com/ncruces/go-sqlite3
Import this package to get the version with BEGIN CONCURRENT: https://github.com/ncruces/go-sqlite3/tree/main/embed/bcw2
kristianp | 1 year ago
eliben | 1 year ago
https://eli.thegreenplace.net/2021/go-https-servers-with-tls...
rmac | 1 year ago
go build ./... goes where?
coffeeindex | 1 year ago
go test ./... tests all packages under the current directory, so I assume build does something similar.
zerr | 1 year ago
wg0 | 1 year ago
zaptheimpaler | 1 year ago
cle | 1 year ago
HTTP headers in Go are maps, which have a built-in mechanism for checking key existence that distinguishes between empty and missing. No nils involved.
cultureswitch | 1 year ago
WuxiFingerHold | 1 year ago
Nothing is perfect. Pick your compromise.
sethammons | 1 year ago
lenkite | 1 year ago
Please read Effective Go https://go.dev/doc/effective_go before making production software.
Mikushi | 1 year ago
tomerbd | 1 year ago
geodel | 1 year ago
unknown | 1 year ago
[deleted]