top | item 31467077

Go nulls and SQL

67 points | mnvrth | 3 years ago | ente.io | reply

75 comments

[+] Mawr|3 years ago|reply
Great, except:

1. The type is merely a weak hint that you should check .Valid before you use the value, there's no enforcement: https://go.dev/play/p/nS8RxGujMBk

It's much better when the struct members are private and the only way to access the value is through a method that returns (value, bool):

  if value, ok := optional.Get(); ok {
    // value is valid
  } else {
    // value is invalid
  }
This is a strong hint that you should check the bool before using the value. It's also a common Go pattern: checking for the existence of a key in a map is done this way.

2. We have generics now. Why use type-and-sql-specific wrappers when you could use a generic Option? Example implementation: https://gist.github.com/MawrBF2/0a60da26f66b82ee87b98b03336e....
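A minimal sketch of such a generic Option (the gist URL above is truncated, so this is an illustrative implementation, not its contents):

```go
package main

import "fmt"

// Option holds a value that may be absent. The fields are private,
// so the only way to reach the value is through Get.
type Option[T any] struct {
	value T
	ok    bool
}

func Some[T any](v T) Option[T] { return Option[T]{value: v, ok: true} }
func None[T any]() Option[T]    { return Option[T]{} }

// Get returns the value and whether it is present, mirroring the
// comma-ok idiom used for map lookups.
func (o Option[T]) Get() (T, bool) { return o.value, o.ok }

func main() {
	name := Some("gopher")
	if v, ok := name.Get(); ok {
		fmt.Println("got:", v)
	}

	missing := None[int]()
	if _, ok := missing.Get(); !ok {
		fmt.Println("no value")
	}
}
```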

[+] psadauskas|3 years ago|reply
Your comment is helpful, so this sarcasm isn't directed at you. However, this looks like an extremely cumbersome way to wish your language had the Result monad.
[+] tedunangst|3 years ago|reply
> Pointers are easy first, but then you realize that you have to put nilness checks everywhere.

Well, yeah, if something may or may not be there, you have to check before you use it. How else do people want it to work?

[+] solar-ice|3 years ago|reply
In some languages, the concept of "may or may not be there" is cleanly separated from "exists in a separate block of memory". There's plenty of pointers which will never be nil - checking that they're nil at the entry and exit points of every function is line noise.
[+] xdfgh1112|3 years ago|reply
Sane languages have pointers which either can't be null or require you to check explicitly. Rather than leaving it to explode at runtime.
[+] anoctopus|3 years ago|reply
If something may or may not be there, that should be a clear part of its type, and the compiler should check that you handled it (even if handling it is just you deciding to panic, because sometimes that's the right thing to do). Then there can be a type for just the thing, with no hidden possibility of it not being there that you have to check for, and almost all code can be written against the type that is guaranteed to be there, with only a little code dealing with possibly-there values.
[+] davidkuennen|3 years ago|reply
Using go and postgres for my App's backend.

After using NULLs this way at first, I noticed it's generally much much easier for me to just avoid nullable SQL columns wherever possible (it was always possible so far). Most of the time there is a much easier way to say a value is empty. For strings '' for example.

This seriously made everything so much easier. Not necessarily anything to do with go tho.

[+] sa46|3 years ago|reply
That's the approach I've taken as well. We use coalesces to ensure queries never return null, and handle the zero value or a sentinel value specially. Since our API layer is protobufs, which don't have nil, the zero value is correct the majority of the time.
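SQL-side, that's `COALESCE(col, '')` in the select list, so Go only ever scans a plain string. The same idea expressed as a hypothetical Go helper (not part of `database/sql`):

```go
package main

import "fmt"

// coalesce mirrors SQL's COALESCE: return the value if present,
// otherwise a fallback. In SQL this would be e.g.
//   SELECT COALESCE(nickname, '') FROM users
// so the scanned column is never NULL.
func coalesce(s *string, fallback string) string {
	if s != nil {
		return *s
	}
	return fallback
}

func main() {
	var nick *string // simulates a NULL column
	fmt.Printf("%q\n", coalesce(nick, "")) // ""

	n := "gopher"
	fmt.Printf("%q\n", coalesce(&n, "")) // "gopher"
}
```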
[+] SoftTalker|3 years ago|reply
I think this is generally good, unless your database treats NULL and '' as the same thing, e.g. Oracle.
[+] layer8|3 years ago|reply
> This seriously made everything so much easier.

Except inner joins and the like, I guess.

[+] turboponyy|3 years ago|reply
You should disallow nulls as much as possible, but "" is just as valid of a string value as "John Smith"; if the field is actually semantically nullable, reflect that in the type - don't use arbitrary "blank" values to denote nulls.
[+] hans_castorp|3 years ago|reply
> by declaring our SQL columns as NOT NULL when possible.

I hear this advice quite often, but it doesn't relieve you of handling NULL values. E.g. a NOT NULL column can still be NULL in the result of an outer join.

[+] eknkc|3 years ago|reply
So, to avoid checking for null, you'll check for `NullString.Valid` now? The string pointer is a part of the language. You can pass it around, expose as part of a library etc. And it conveys the intent perfectly.

I have no idea what the issue here is.

[+] psanford|3 years ago|reply
The nice thing about the Null* type is they can reduce the number of allocations done on the heap and thus also reduce total GC your program needs to do.
[+] legorobot|3 years ago|reply
I think the idea is `sql.NullString` can be used to have a SQL NULL still be an empty string in Go (just avoid checking the `Valid` field, or cast to string -- no check necessary).

It seems like the string pointer vs. `sql.NullString` distinction was their goal with this type anyway[1]?

In practice I've used a string pointer when I need a nullable value, or enforce `NOT NULL` in the DB if we can't/don't want to handle null values. Use what works for you, and keep the DB and code's expectations in-sync.

[1]: https://groups.google.com/g/golang-nuts/c/vOTFu2SMNeA/m/GB5v...

[+] catlifeonmars|3 years ago|reply
Sometimes you want a non-nullable string pointer. In Go, nullability and dereferencing are coupled under the same language construct (pointers).
[+] jackielii|3 years ago|reply
I have an alternative solution to work with an existing struct that you can't change, e.g. protobuf generated code: https://jackieli.dev/posts/pointers-in-go-used-in-sql-scanne...

  // (uses "fmt" and "database/sql/driver")
  type nullString struct {
    s *string
  }
  
  func (ts nullString) Scan(value interface{}) error {
    if value == nil {
        *ts.s = "" // nil to empty
        return nil
    }
    switch t := value.(type) {
    case string:
        *ts.s = t
    default:
        return fmt.Errorf("expect string in sql scan, got: %T", value)
    }
    return nil
  }
  
  func (n *nullString) Value() (driver.Value, error) {
    if n.s == nil {
        return "", nil
    }
    return *n.s, nil
  }
Then use it:

  var node struct{ Name string }
  db.QueryRow("select name from node where id=?", id).Scan(nullString{s: &node.Name})
[+] brushyamoeba|3 years ago|reply
I'm surprised this doesn't discuss JSON marshalling
[+] cratermoon|3 years ago|reply
Yea I ran into this the first time I wrote much Go code to deal with SQL databases. After being initially annoyed I realized how much more correct it is for the SQL interface code to check .Valid and write proper, meaningful special case/null object handling for the domain types.
[+] AtNightWeCode|3 years ago|reply
Maybe I passed out among the nonsense but most langs provide a dbnull constant to check against, no?
[+] jrockway|3 years ago|reply
You could do that in Go, but it's not what database/sql's API supports. You read columns out of each row with "row.Scan(&col1, &col2, ...)". The types of col1 and col2 are declared at compile time, and they don't have to be able to represent the concept of null. So there would be no way to store the state that represents that something was null.

You could of course have an API that just returns a slice of "any", and conditionally check whether a value is of type "string" or "mylibrary.NullValue" after the fact. This isn't clearly better to me than the Scan API. You are going to have to eventually cast "any" to a real type in order to use it; with Scan the library does that for you.

Your own types can implement sql.Scanner to control exactly how you want to handle something. (Indeed, your "Scan" method receives something with type "any", and you need to check what the type is and convert it to your internal representation.)
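For example, a domain type can implement `sql.Scanner` and decide its own NULL handling; `Scan` can be exercised directly with the `any` values a driver would pass (the type and constant names are invented for illustration):

```go
package main

import "fmt"

// Status is a domain type with its own notion of "unset".
type Status string

const StatusUnknown Status = "unknown"

// Scan implements sql.Scanner: the driver hands us an `any` and we
// decide how to map it, including the NULL (nil) case.
func (s *Status) Scan(value any) error {
	switch v := value.(type) {
	case nil:
		*s = StatusUnknown
	case string:
		*s = Status(v)
	case []byte:
		*s = Status(v)
	default:
		return fmt.Errorf("unsupported type %T for Status", value)
	}
	return nil
}

func main() {
	var s Status
	_ = s.Scan(nil)
	fmt.Println(s) // unknown

	_ = s.Scan("active")
	fmt.Println(s) // active
}
```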

Also wanted to throw this out here; you don't have to be satisfied with lossy versions of your database's built-in types. Libraries like https://pkg.go.dev/github.com/jackc/pgtype will give you a 1:1 mapping to Postgres's types. (I'm sure other database systems have similar libraries.)