1. The type is merely a weak hint that you should check .Valid before you use the value; there's no enforcement: https://go.dev/play/p/nS8RxGujMBk
It's much better when the struct members are private and the only way to access the value is through a method that returns (value, bool):
if value, ok := optional.Get(); ok {
// value is valid
} else {
// value is invalid
}
This is a strong hint that you should check the bool before using the value. It's also a common Go pattern: checking for the existence of a key in a map is done the same way.
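A minimal sketch of such a wrapper, with illustrative names not taken from any library:

```go
package main

import "fmt"

// optionalString keeps its fields private, so the only way to reach
// the value is through Get, which forces the caller to see the flag.
type optionalString struct {
	value string
	valid bool
}

// someString wraps a present value.
func someString(v string) optionalString {
	return optionalString{value: v, valid: true}
}

// Get returns the value and whether it is valid, mirroring the
// "comma ok" idiom of map lookups.
func (o optionalString) Get() (string, bool) {
	return o.value, o.valid
}

func main() {
	for _, opt := range []optionalString{someString("hello"), {}} {
		if value, ok := opt.Get(); ok {
			fmt.Println("valid:", value)
		} else {
			fmt.Println("invalid")
		}
	}
}
```

The zero value of the struct is naturally the "invalid" case, so callers can't accidentally construct a valid-looking wrapper without going through the constructor.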
Your comment is helpful, so this sarcasm isn't directed at you. However, this looks like an extremely cumbersome way to wish your language had the Result monad.
In some languages, the concept of "may or may not be there" is cleanly separated from "exists in a separate block of memory". There are plenty of pointers that will never be nil; checking whether they're nil at the entry and exit points of every function is line noise.
If something may or may not be there, that should be a clear part of its type, and the compiler should check that you handled it (even if your handling is just deciding to panic, because sometimes that's the right thing to do). Then there can be a separate type for just the thing, with no hidden possibility of absence to check for, and almost all code can be written against the type that is guaranteed to be there, with only a little code dealing with possibly-there values.
After using NULLs this way at first, I noticed it's generally much, much easier for me to just avoid nullable SQL columns wherever possible (it has always been possible so far). Most of the time there is a much easier way to say a value is empty: '' for strings, for example.
This seriously made everything so much easier. Not necessarily anything to do with Go, though.
That's the approach I've taken as well. We use COALESCE to ensure queries never return null, and we handle the zero value or a sentinel value specially. Since our API layer is protobufs, which don't have nil, the zero value is correct the majority of the time.
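As a sketch of that pattern (table and column names invented for illustration), COALESCE maps NULL to the zero value before it ever reaches application code:

```sql
-- nickname is nullable, but this query can never return NULL for it,
-- so the Go side can scan the column into a plain string.
SELECT id, COALESCE(nickname, '') AS nickname
FROM users;
```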
You should disallow nulls as much as possible, but "" is just as valid a string value as "John Smith"; if the field is actually semantically nullable, reflect that in the type - don't use arbitrary "blank" values to denote nulls.
So, to avoid checking for null, you'll check `NullString.Valid` now? The string pointer is part of the language. You can pass it around, expose it as part of a library, etc. And it conveys the intent perfectly.
The nice thing about the Null* types is that they can reduce the number of allocations on the heap, and thus also reduce the total GC work your program needs to do.
I think the idea is `sql.NullString` can be used to have a SQL NULL still be an empty string in Go (just avoid checking the `Valid` field, or cast to string -- no check necessary).
It seems like the intent of a string pointer vs. `sql.NullString` is their goal with this type anyways[1]?
In practice I've used a string pointer when I need a nullable value, or enforce `NOT NULL` in the DB if we can't/don't want to handle null values. Use what works for you, and keep the DB and code's expectations in-sync.
Yeah, I ran into this the first time I wrote much Go code dealing with SQL databases. After being initially annoyed, I realized how much more correct it is for the SQL interface code to check .Valid and write proper, meaningful special-case/null-object handling for the domain types.
You could do that in Go, but it's not what database/sql's API supports. You read columns out of each row with "row.Scan(&col1, &col2, ...)". The types of col1 and col2 are declared at compile time, and they don't have to be able to represent the concept of null. So there would be no way to store the state that represents that something was null.
You could of course have an API that just returns a slice of "any", and conditionally check whether a value is of type "string" or "mylibrary.NullValue" after the fact. This isn't clearly better to me than the Scan API. You are going to have to eventually cast "any" to a real type in order to use it; with Scan the library does that for you.
Your own types can implement sql.Scanner to control exactly how you want to handle something. (Indeed, your "Scan" method receives something with type "any", and you need to check what the type is and convert it to your internal representation.)
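For instance, a custom type implementing sql.Scanner might look like this (the Status type is invented for illustration):

```go
package main

import "fmt"

// Status is a hypothetical domain type that models SQL NULL as an
// explicit "unknown" state rather than as an empty string.
type Status struct {
	Name  string
	Known bool
}

// Scan implements sql.Scanner: database/sql hands us an "any" and we
// decide how each incoming type maps to our representation.
func (s *Status) Scan(src any) error {
	switch v := src.(type) {
	case nil: // NULL column
		*s = Status{}
		return nil
	case string:
		*s = Status{Name: v, Known: true}
		return nil
	case []byte: // many drivers deliver text columns as []byte
		*s = Status{Name: string(v), Known: true}
		return nil
	default:
		return fmt.Errorf("cannot scan %T into Status", src)
	}
}

func main() {
	var s Status
	if err := s.Scan("active"); err != nil {
		panic(err)
	}
	fmt.Println(s.Name, s.Known) // Name is "active", Known is true

	if err := s.Scan(nil); err != nil {
		panic(err)
	}
	fmt.Println(s.Name, s.Known) // Name is "", Known is false
}
```

A pointer to a Status can then be passed straight to row.Scan, and the null handling lives in one place instead of at every call site.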
Also wanted to throw this out here; you don't have to be satisfied with lossy versions of your database's built in types. Libraries like https://pkg.go.dev/github.com/jackc/[email protected] will give you a 1:1 mapping to Postgres's types. (I'm sure other database systems have similar libraries.)
Mawr|3 years ago|reply
2. We have generics now. Why use type- and SQL-specific wrappers when you could use a generic Option? Example implementation: https://gist.github.com/MawrBF2/0a60da26f66b82ee87b98b03336e....
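A generic Option along those lines could be sketched as follows (this is a sketch, not the gist's actual code):

```go
package main

import "fmt"

// Option holds a value of any type plus a validity flag. The fields
// are private, so the only access path is Get.
type Option[T any] struct {
	value T
	valid bool
}

// Some wraps a present value.
func Some[T any](v T) Option[T] { return Option[T]{value: v, valid: true} }

// None returns the empty Option.
func None[T any]() Option[T] { return Option[T]{} }

// Get returns the value and whether it was set.
func (o Option[T]) Get() (T, bool) { return o.value, o.valid }

func main() {
	if v, ok := Some("Ada").Get(); ok {
		fmt.Println("got:", v)
	}
	if _, ok := None[int]().Get(); !ok {
		fmt.Println("empty")
	}
}
```

One Option[T] then replaces sql.NullString, sql.NullInt64, and the rest of the per-type wrappers.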
psadauskas|3 years ago|reply
tedunangst|3 years ago|reply
Well, yeah, if something may or may not be there, you have to check before you use it. How else do people want it to work?
solar-ice|3 years ago|reply
xdfgh1112|3 years ago|reply
unknown|3 years ago|reply
[deleted]
anoctopus|3 years ago|reply
davidkuennen|3 years ago|reply
sa46|3 years ago|reply
SoftTalker|3 years ago|reply
layer8|3 years ago|reply
Except inner joins and the like, I guess.
turboponyy|3 years ago|reply
hans_castorp|3 years ago|reply
I hear this advice quite often, but it doesn't relieve you of handling NULL values. E.g., a NOT NULL column can still be NULL in the result of an outer join.
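For example (schema invented for illustration), even with every column declared NOT NULL:

```sql
-- users.name and pets.name are both NOT NULL, yet the outer join
-- still yields NULL for pets.name when a user has no pet.
SELECT u.name, p.name
FROM users u
LEFT JOIN pets p ON p.owner_id = u.id;
```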
eknkc|3 years ago|reply
I have no idea what the issue is here.
psanford|3 years ago|reply
legorobot|3 years ago|reply
[1]: https://groups.google.com/g/golang-nuts/c/vOTFu2SMNeA/m/GB5v...
catlifeonmars|3 years ago|reply
jackielii|3 years ago|reply
brushyamoeba|3 years ago|reply
cratermoon|3 years ago|reply
AtNightWeCode|3 years ago|reply
jrockway|3 years ago|reply