
Structured logging with slog

275 points| spacey | 2 years ago |go.dev

168 comments

[+] mrweasel|2 years ago|reply
Admittedly I'm not a huge fan of having:

  slog.Info("hello, world", "user", os.Getenv("USER"))
It's a little magical that "user" is a key. So what if you have multiple key-value pairs? Arguably it's most likely going to be obvious which are the keys, but having every other argument be a key and the rest be values seems a little clumsy.

I really like Python's approach, where you can have user="value"; it makes things a bit clearer.

[+] jupp0r|2 years ago|reply
I think Uber chose a better approach for their Go logging library called Zap [1]

    logger.Info("failed to fetch URL",
      // Structured context as strongly typed Field values.
      zap.String("url", url),
      zap.Int("attempt", 3),
      zap.Duration("backoff", time.Second),
    )
They also have zap.Error(err) which generates the "error" key as a convention.

[1] https://github.com/uber-go/zap

[+] masklinn|2 years ago|reply
> It's a little magical that "user" is a key. So what if you have multiple key-value pairs?

You... add them afterwards? It's really just a plist (https://www.gnu.org/software/emacs/manual/html_node/elisp/Pr...), that's hardly novel. The method takes any number of parameters and pairs them up as key, value.

Or if you really hate yourself, you use LogAttrs with explicitly constructed Attr objects.

> I really like Pythons approach where you can have user="value" it makes things a bit more clear.

The trouble is, Go doesn't have keyword parameters so that doesn't work.

[+] eska|2 years ago|reply
If you had read more than the introduction you’d have found a paragraph about this in the middle and multiple at the end.
[+] packetlost|2 years ago|reply
Yeah, I agree. Passing in an optional `map[string]string` or something would be better, but then you either have to pass in `nil` every time you don't have extra data, or you need an entirely different function for the with-map and without-map cases.
[+] baq|2 years ago|reply
this is not uncommon. this is what I've been dealing with earlier today in Postgres:

  json_build_object ( VARIADIC "any" ) → json

  jsonb_build_object ( VARIADIC "any" ) → jsonb

  Builds a JSON object out of a variadic argument list.
  By convention, the argument list consists of alternating keys and values.
  Key arguments are coerced to text; value arguments are converted as per to_json or to_jsonb.

      json_build_object('foo', 1, 2, row(3,'bar')) → {"foo" : 1, "2" : {"f1":3,"f2":"bar"}}
[+] taeric|2 years ago|reply
Plist or named arguments doesn't really make too big of a difference, to my eyes. I do recommend Mapped Diagnostic Context style approaches, if you can do it. Passing all arguments that you think may be useful to logs gets unwieldy quickly. Looks particularly absurd when folks start adding parameters to functions for the sole purpose of letting you log them.
[+] yencabulator|2 years ago|reply
Personally I only intend to ever use LogAttrs/AddAttrs/GroupValue, for this reason. I really don't want to count evens and odds.
[+] mlhpdx|2 years ago|reply
Indeed. Logging a map-like object seems like a pretty basic expectation. Separate keys and values as parameters seem very C-like.
[+] candiddevmike|2 years ago|reply
Today I'm pulling this out of a context object, can I still do that?
[+] hamandcheese|2 years ago|reply
The same pattern can be seen in Java using Map.of(...)
[+] corytheboyd|2 years ago|reply
Structured logging is such an easy to gain, massive improvement to observability. Assuming you can pay for the log processor to make sense of it all :)

I’ve been working on a side project to bring something like the DataDog log explorer to the local development environment. The prototype I made has already been extremely helpful in debugging issues in a very complex async ball of Rails code. Does something like that sound useful to other folks? Does it already exist and I just can’t find it?

[+] scottlamb|2 years ago|reply
> Does something like that sound useful to other folks?

Very much so!

One piece I'd like to have is a good output format from my program. Right now I have stuff outputting logs in JSON format to stderr/local file (then picked up by the Opentelemetry OTLP collector, Datadog agent, AWS CloudWatch, whatever) and the traces/spans actually sent from my process to the collector over the network. It baffles me why the traces/spans are done in that way rather than placed in a local file (and for compressibility, ideally the same file as the logs). The local file method would make it easier to have simple local-focused tools, would lower the requirements for outputting these records (no need to have async+http client set up, just the writer thread), and would better handle the collector being in a bad mood at the moment.

That's the most important piece, but if getting that done requires inventing a new format (it seems to?!?), there are some other details I'd like it to do well. You should be able to see all the stuff associated with each log entry's matching span, which means that spans and attributes should be recorded on open, not just on close. (Attributes can also be updated mid-span; the Rust tracing library allows this. OpenTelemetry notably only supports complete spans.) It could be more efficient by interning the keys. Some care is needed around how rotation is handled, how timestamps are handled, etc.

[+] Too|2 years ago|reply
The Grafana stack, including Loki for logs, can be started easily as a docker-compose project in your local environment. Not that Grafana's log viewer is best in class, but it's at least something. For terminal fans there is logcli.

If you are concerned with cost of the log processor, Loki again has your back by being very easy and lightweight to deploy. Giving you same tools in dev as in prod.

[+] merightnow|2 years ago|reply
Have you checked lnav?
[+] ar_lan|2 years ago|reply
I'm so happy this is a stdlib feature. This is good enough for me to not need to bring in external loggers (e.g. zerolog) which is nice, and I strongly think that structured logging should be the default logging format.

Logs are data.

[+] dgb23|2 years ago|reply
The rationale:

> With many structured logging packages to choose from, large programs will often end up including more than one through their dependencies. The main program might have to configure each of these logging packages so that the log output is consistent: it all goes to the same place, in the same format. By including structured logging in the standard library, we can provide a common framework that all the other structured logging packages can share.

This is IMO the right way of doing it. Provide an interface with simple defaults, usable out of the box. Those who need more can use a library that builds towards the interface.

So when evaluating any library, you can ask "How well does this integrate with interfaces in the standard library?". Discovering that some functionality is just a "Fooer" that pieces well together with existing stuff is calming. Not only do you already know how to "Foo", you also get a hidden stability benefit: There's an implied API surface contract here.

This is in stark contrast to the "builds on top of" approach, where you end up with competing, idiosyncratic interfaces. This is often necessary, but there is always an implied risk in terms of maintenance and compatibility.

[+] ljm|2 years ago|reply
Something like this would be a welcome addition to Ruby/Rails, where you have to pull in dependencies that patch the multiple independent loggers in the stack (some of which break messages onto multiple lines), not to mention the common case of inconsistent tagged and structured logging in your application code.

It’s a lot of effort when all you want is to log everything to STDOUT, in JSON, but you have to choose one of half a dozen logging libraries that all behave extremely differently.

[+] gdprrrr|2 years ago|reply
Rust seems to do fine with a de-facto logging library, and the Java ecosystem seems to have converged on a common API, though with a lot of effort, I think.
[+] baz00|2 years ago|reply
Now all we need is the 1,000,000 other components in the multiple ecosystems to log in the same format and I won't have a perpetual headache.

Good job Go though for being opinionated but rational.

[+] geodel|2 years ago|reply
With this, another much-requested feature is covered by Go. That leaves error handling and enum types, which users often ask for but which are not actively being worked on for now.
[+] nwsm|2 years ago|reply
It's nice to have this in the standard library, but it doesn't solve any existing pain points around structured log metadata and contexts. We use zap [0] and store a zap logger on the request context which allows different parts of the request pipeline to log with things like tenantId, traceId, and correlationId automatically appended. But getting a logger off the context is annoying, leads to inconsistent logging practices, and creates a logger dependency throughout most of our Go code.

[0] https://github.com/uber-go/zap

[+] zknill|2 years ago|reply
The log/slog package essentially delegates writing log messages to a "handler" interface. The key method is:

    Handle(context.Context, Record) error
This method has access to the context, which means you can get the logging handler to extract values from the context. Instead of storing the logger on the context, you can extract the traceId, etc values from the context and log those.

It's a little bit involved to write a whole logger from scratch, but you can 'wrap' the existing logger handlers and include values from the context relatively easily.

There are examples in this project (which aims to help solve your usecase): https://github.com/zknill/slogmw

[+] jasonhansel|2 years ago|reply
I must admit: I'm not a huge fan of structured logging, beyond simple use cases like tagging messages by the thread that produced them. If you want something machine-readable, use a dedicated metrics system, analytics database, or document store. If you want something human-readable, structured logging will only make things worse.
[+] zknill|2 years ago|reply
The new structured logging library is a great addition; it's nice to have structured logging in the standard lib.

It's easy to get started with log/slog and one of the built in handlers, but as soon as you want to change something the library design pushes you towards implementing an entire handler.

For example, if I want the built in JSON format, but with a different formatting of the Time field, that's not easy to do. It's not obvious how to change the built in handler.

I wrote slogmw[1] to solve this problem. It's a set of middleware and examples that make it easy to make small changes to the built in handlers without having to write a whole new handler from scratch.

[1] https://github.com/zknill/slogmw

[+] politician|2 years ago|reply
I wish there were a better approach to avoiding argument evaluation when the runtime log level is higher than the call site's level.

So,

    slog.Info("failed to frob", "thing", GetThing(1)) 
Still calls GetThing(1) when the log level is greater than Info. The only solution right now is to test the log level before making the logging call. It would be amazing if language designers could make the arguments late-bound, or used aspect-oriented programming approaches to guard each logging call site.
[+] Seb-C|2 years ago|reply
I guess it's nice to have a standard, but I wish the Golang developers stopped introducing stuff like "args ...any" all over the place in the standard library.

It's not the level of type-safety that I expect from a strongly typed language.

[+] kusha|2 years ago|reply
Oof. We just converted all of our logging to zap[0] to get structured JSON logging for downstream parsing. Wonder how the perf stacks up.

[0]: https://github.com/uber-go/zap

[+] tastysandwich|2 years ago|reply
I'm really glad they've introduced this; I just wish it also had the traditional formatting methods, e.g. Infof, Debugf, Errorf, for backwards compatibility.

I've got a few packages that accept a basic logger interface, e.g.:

  type debugLogger interface {
      Debugf(format string, args ...any)
  }

  type MyThing struct {
      logger debugLogger
  }

  func New(logger debugLogger) *MyThing {
      return &MyThing{logger}
  }
I'd love to switch to slog but I'll have to v2 these packages now.
[+] __loam|2 years ago|reply
Slog is an amazing name
[+] insanitybit|2 years ago|reply
Structured logging is a very sane default. Even if you end up with `{"msg": "blah blah blah"}` at least you have room to grow in the future.
[+] candiddevmike|2 years ago|reply
Looking for advice: logging for servers/services tends to differ from logging for CLI-based applications. How do folks differentiate them, or use slog for both in a generic way? Or does it make sense to have separate logging packages for CLI vs. services? CLI logging tends to be more verbose and procedural, while server/service logging is mostly errors-only unless debug is enabled.
[+] Femolo|2 years ago|reply
Log everything as JSON!

We need to start making JSON viewers, where we can look into log files without extra tooling, the default.

It's so much easier to consume JSON automatically and ship it to other systems out of the box.

Linux logging should do that too, not just containers in k8s.

[+] mi_lk|2 years ago|reply
Q: what do people use for structured logging in Java?
[+] bobbyi|2 years ago|reply
Walking past an eatery with outdoor seating, I overheard one diner say the phrase "process raw logs" and I thought, "wow, I guess that is one of those tricky problems that basically everyone ends up dealing with".

And then I heard "... with a chainsaw. It's a chainsaw mill" and realized I may have misunderstood the context.