structured, leveled logging #54763
Replies: 65 comments 294 replies
-
👋🏻 hclog developer here! Curious about seeing whether the stdlib could mostly contain the interface surface area for different implementations to step into. Were that route taken, I'd propose a surface area that looks something like this:

type Logger interface {
Log(level int, msg string, args ...any)
Debug(msg string, args ...any)
Info(msg string, args ...any)
Warn(msg string, args ...any)
Error(msg string, args ...any)
With(args ...any) Logger
}
var DefaultLogger Logger
func Debug(msg string, args ...any) {
DefaultLogger.Debug(msg, args...)
}
// Toplevel for Info, Warn, Error, and With
func InContext(ctx context.Context, log Logger) context.Context { ... }
func FromContext(ctx context.Context) Logger { ... } // returns DefaultLogger if none available
-
I have a use case where I would like to control the logging level based on context (e.g. enabling …). However, this context instrumentation needs to happen before any other (for example, the context may already have a … in it).
-
What would happen if …?
-
Looks promising. Some thoughts:
Why is this being proposed now? Why not 5–8 years ago? This seems to tout handlers as an innovation over the state of the art. Do none of the existing solutions have a comparable handler design? Will this effort to establish a common interface for the community extend to other problem domains, like audio?
In my opinion, the loose alternating key-value arguments are a concern. What happens if a value is missing?
What if there isn't a Go error value when logging an error condition? What value should be used?
Could this hide configuration or setup mistakes, where there should have been a logger, but there wasn't?
Why is DebugLevel 31, when the other levels increment by 10?
What is the exact intended initial format, whether or not you want to document it? What is the order of the fields? Are the "built-in" fields first? Are the pairs delimited by a space? Are strings containing whitespace quoted?
Is the JSON minimized? What is the order of the fields? Are the "built-in" fields first?
What are the default keys?
Nit: Consider using the names Field/Fields, like zap. "Attr" is a little more abstract than "field," and "WithFields" is a little clearer and reads a little better than "WithAttrs," in my humble opinion.
Why not include variants for all built-in types, like Int8, Rune or Complex64? Would generics help here?
Have you considered the potential demand for a stack trace option (as opposed to just the current file name and line)?
I don't see a Logger.WithAttrs variant. Is that because it wouldn't help avoid allocations? If so, why is that?
-
👋🏽 I'm glad to see this discussion reignited! I'm one of the original zap authors, and may have a little additional context to share. Points below notwithstanding, this proposal seems carefully thought-through and researched. Many thanks to the authors for all the work they've put in 😍

Previous art

Peter Bourgon and Chris Hines made a proposal with similar goals in 2017. The overall situation hasn't changed much since then - if anything, the proliferation of log-like APIs in distributed tracing packages has made it even worse. If the authors of this proposal haven't seen the previous doc, it's worth reviewing. I'm not sure whether Chris and Peter are still interested in logging, but their perspectives would be valuable here. I particularly liked the very small interface they proposed.

Namespacing

I notice that this proposal doesn't include support for namespacing key-value pairs. For example, we might want a gRPC interceptor to produce JSON records like this:
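For illustration, roughly what zap's Namespace enables, written as a hypothetical slog-style call (Namespace does not exist in the draft API; that gap is the point of this section):

```go
logger.With(
	slog.Namespace("grpc"), // hypothetical: subsequent keys nest under "grpc"
	slog.String("service", "users.v1.UserService"),
	slog.String("method", "GetUser"),
).Info("finished unary call")

// which might serialize as:
// {"msg":"finished unary call","grpc":{"service":"users.v1.UserService","method":"GetUser"}}
```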
Zap exposes this via Namespace. At least at Uber, this was critical for practical use: like many storage backends, ElasticSearch stops indexing if a field changes type. Implicitly, this forces each binary to have a schema for its log output. It's very hard to do this if dozens of loosely-coordinated packages are writing to a shared, flat keyspace. Namespacing cuts the problem into manageable pieces: each package uses its own name as a namespace, within which it can attempt to adhere to some implicit schema. This problem is significant enough that people have attempted to write static analysis to mitigate it. Of course, not all storage engines have this limitation - but many do. Food for thought.

First-class error support

Zap spends a fair bit of effort to special-case errors.

Edit: on a second reading, I see the proposal already addresses this.

Delegating serialization & complex objects

It's convenient for end users to let types implement their own serialization logic.

Generics

My experience with zap is that the mere existence of a faster API, even if it's painfully verbose, puts a lot of pressure on developers to use it. Making the lower-allocation API as ergonomic as possible has a disproportionate effect on developer experience. One of my enduring regrets about zap is its fussy API for producing fields. Nowadays, this seems like the sort of problem we ought to be able to solve nicely with a type like:

type Attr[T any] struct {
	key   string
	value T
}

Minimizing allocations would require a fast path through marshaling that doesn't go through an any-typed value.
-
This allows changing or omitting an Attr, but not splitting it out into multiple.
I'd like to see it implement
I'd like to see a standard handler that forwards to
zerolog provides a global logger as a separate package. The other recent (big?) project that I'm aware of in logging standardization is OpenTelemetry's Log Data Model, which has recently been stabilized. There they chose a different mapping of Levels / Severity to numbers. I'll also echo the desire for namespaced key-values.
-
Thanks for starting this discussion. The sheer number of logging libraries out there seems to be an indicator that there should be a solution for this in the standard library. A couple of thoughts about the proposal:

OpenTelemetry

I think it would make sense to look at the work the OpenTelemetry working group is doing, at least to make sure this proposal is not incompatible with what they are working on. I know they don't focus on libraries themselves at the moment, but on a wire format for logs.

Sugared logger

Personally, I'm quite fond of Zap's default logger enforcing typed attributes. Maybe reflecting that in this proposal would also make sense by creating separate implementations.

Attr

I was wondering if it would make sense to make Attr an interface:

type Attr interface {
	Key() string
	Value() string
}

That way, converting a value to a string could be handed off to a custom Attr implementation. I don't know whether that would affect allocations; maybe it would, but it's good enough for Zap. This would also allow creating separate types instead of using a single Attr. I was also wondering what the purpose of Kind is. Where would it be useful?

Context

Personally, I prefer extracting information from the context in the logger, not the other way around. Let's say a context travels through a set of layers, each of them adding information to the context that you would like to log. Therefore I'd want to pass the context to the logger somehow, not the other way around. For example, I could implement a handler that extracts all the relevant information from the context. The only question is: how do we pass a context to the logger?
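For illustration, a rough sketch of that kind of handler, assuming the draft's three-method Handler interface and a Record that carries the caller's context (which is exactly the open question raised here):

```go
type requestIDKey struct{}

// contextHandler copies selected values from the record's context into
// attributes before delegating to the wrapped handler.
type contextHandler struct{ next slog.Handler }

func (h contextHandler) Enabled(level slog.Level) bool { return h.next.Enabled(level) }

func (h contextHandler) Handle(r slog.Record) error {
	if id, ok := r.Context.Value(requestIDKey{}).(string); ok {
		r.AddAttrs(slog.String("request_id", id)) // affects only this record
	}
	return h.next.Handle(r)
}

func (h contextHandler) With(attrs []slog.Attr) slog.Handler {
	return contextHandler{h.next.With(attrs)}
}
```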
-
I think this is promising. A bunch of thoughts:
-
Using generics

In one of the comments above, there was mention of the possibility of using generics to mitigate some of the API blowup from having many explicitly different constructors. I'll expand a bit on my reply above. I think we can use generics to mitigate allocations and reduce the API, even in the absence of #45380. Here's how the API might look.

Another possibility would be to make the performance implications of creating an Attr explicit in the API.

As I mentioned in the original comment, it's possible to implement this API without incurring allocations, although there is some performance overhead.
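As one possible shape (a sketch; the constructor name and the set of special-cased types are assumptions):

```go
// AttrOf builds an Attr with a single generic constructor instead of many
// typed ones, dispatching to the draft package's typed helpers where they
// exist. Whether this also avoids allocations depends on how the call is
// compiled.
func AttrOf[T any](key string, value T) slog.Attr {
	switch v := any(value).(type) {
	case string:
		return slog.String(key, v)
	case int:
		return slog.Int(key, v)
	case bool:
		return slog.Bool(key, v)
	case time.Duration:
		return slog.Duration(key, v)
	default:
		return slog.Any(key, v)
	}
}
```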
-
The docs should document what should happen when …
-
"attrs" in func (l *Logger) With(attrs ...any) *Logger should be "args" like in func (l *Logger) Warn(msg string, args ...any) to not suggest that attrs must be Attrs. |
-
I hate passing the attributes as alternating keys and values.
-
For what it's worth, here's a stab at an alternative to the unstructured k/v design:

func (*Logger) Any(string, any) *Logger
func (*Logger) Bool(string, bool) *Logger
func (*Logger) Int(string, int) *Logger
func (*Logger) String(string, string) *Logger
// ...
func (*Logger) With(...Field) *Logger
func (*Logger) Log(Level, string)
func (*Logger) LogDepth(int, Level, string)
func (*Logger) Error(string)
func (*Logger) Warn(string)
func (*Logger) Info(string)
func (*Logger) Debug(string)
func (*Logger) Namespace(string) *Logger // Or perhaps "Wrap" instead of "Namespace"
func Any(string, any) Field
func Bool(string, bool) Field
func Int(string, int) Field
func String(string, string) Field
// ...

Example use:

var pkgLogger *slog.Logger = companyLogger.Namespace("mypkg")
// ...
var endpointLogger *slog.Logger = pkgLogger.Namespace("myendpoint")
// ...
if err != nil {
var errLogger = endpointLogger.Any("save error", err).Int("userid", user.ID)
errLogger = errLogger.Bool("prod", env.Prod)
errLogger.Error("cannot save") // Logs with "save error", "userid", and "prod" keys
// ...
if err != nil {
errLogger.Any("cleanup error", err).Error("cannot cleanup") // Logs with "save error", "userid", "prod", and "cleanup error" keys
}
// ...
}
// ...
if err != nil {
endpointLogger.With(
slog.Any("error", err),
slog.Bool("baz", baz),
slog.Int("bar", bar),
slog.String("foo", foo),
).Error("cannot do thing")
}

Highlights:
-
😱 should this be "panics if the value is not a time.Time"?
-- I really like the Record/Handler/Logger arrangement. I think it's really sensible for thinking about and capturing middle-end structure. For consistency, this could be useful (I don't think slog's Error severity should be the only place to expect "err"-keyed Attrs):
Bikeshedding: I could imagine renaming some of these. The verbosity design doesn't feel immediately intuitive - why decimal instead of coarse/fine bits? Why use negative verbosities like "WARN-2"? How does this interact with tracing?
-
It looks like this interface would integrate fairly well with systemd-journald's native protocol.

One question that I didn't see mentioned is how duplicate attributes should be handled. The API obviously doesn't do anything to prevent them, but is there any behaviour a handler is expected to implement? For example, consider calls like:

l.Info("message", "foo", "value1", "foo", "value2")
l2 := l.With("foo", "value1")
l2.Info("message", "foo", "value2") Are these log messages valid? Is the handler expected to log both values for the attribute? Is it allowed to log only one? If it does only log one value, is it free to choose which one? If it logs multiple values, is it expected to preserve the order? I realise that the answer might be "it is implementation defined", but if that's the case it should probably be spelled out what behaviour callers can rely on. |
-
Can you put Debug at 40 or whichever, so someone can insert a level between Info and Debug? For example, I use:

Debug Level = iota
Verbose
Info
Warning
Error
Critical
Fatal

so I'd love to be able to map these levels to the new standard somehow (without shifting everything)
-
I worry that adding this to the standard library would be done with an API that doesn't fully satisfy the intended use-cases. There are already several popular and tested libraries available for people to use. The Go team has often said that multiple community solutions are better than one lackluster "official" package, so why does the Go team need to create/maintain/improve a package when there are multiple better alternatives already in the wild?
-
This discussion has served its purpose: we now have a proposal. Further discussion should happen there. The implementation is now feature-complete (though lacking polish) and ready for you all to hammer on. I want to thank everyone who participated here for their thoughtfulness, enthusiasm, wisdom and professionalism. It was a pleasure for me to work with you all to improve the design. As a result of this discussion, we added several features, including a way to group attributes into composites. The API has changed in a number of small ways as well, partly due to this discussion but also during code review. Rather than edit the top post with the new godoc, I thought it would be more useful to provide a diff of the current godoc and the one at the top. If you want to see the new godoc on its own, scroll to the end of the design doc (review in progress).
-
For anyone else wondering what happened to …
-
I think what this proposal really needs, and what would help a lot of Go programs, is an improvement to Go's type inference: if the function signature declares a map parameter, then the compiler should allow a bare composite literal at the call site and understand it as a map literal.
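Presumably something along these lines (my reading of the idea; the map-based signature is an assumption):

```go
package main

// Info is a hypothetical logging function taking attributes as a map.
func Info(msg string, attrs map[string]any) { /* ... */ }

func main() {
	// Today the literal's type must be spelled out at every call site:
	Info("hello", map[string]any{"name": "Al", "count": 42})

	// The suggestion is to let the compiler infer that type from the
	// parameter, so one could write:
	//   Info("hello", {"name": "Al", "count": 42})
}
```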
-
Hi, nice design, thanks for the great proposal. I especially like the convenience level functions being used consistently across many codebases. Just as an idea to provide some feedback for discussion: do you think it would make sense to include another named level in these convenience functions: "Security"?

From my IT-security related work I experience many situations where security logs are helpful, either in proactively detecting certain security-relevant conditions (mostly done by SIEM tools analyzing logs and monitoring events) and sometimes also during forensic analysis (though then, of course, more debug-like logs are used as well). I know that creating such a thing individually with this proposed API would be easy (thanks also to the gaps in the numbering scheme), but the consistency across many different codebases is where Go really shines. As security has also been a major part of recent releases (thinking about the great addition of fuzzing to the whole toolchain ecosystem), I think that incorporating a convenience logging function "Security" for security events directly in the top-level API would prompt development teams to think more about security-relevant conditions in their software (and how to log them). Mostly I can imagine situations or events like login failures, missing permissions for some actions, invalid data provided (especially when not possible by accident via a client, so a potential attacker bypassed a client-side convenience check and the server-side input validation kicked in), etc.

The downside of this thought is that it tends to "overload" the deliberately short list of convenience functions for log levels (who knows what convenience level will be proposed next?). On the upside, one could argue that security is ubiquitous enough to find a good fit in most projects. One idea for how this "security event log" level could be handled specially (like the error level taking an error argument, for example) would be to ease counting and grouping of such events from logs, by giving them a clearer distinction from other logs as potential security events and possibly by taking a "type" string argument, provided freely by the caller, enabling grouping of certain security event types. Deviations from usual patterns could then be detected more easily by tools specifically watching for those consistently-logged security events in the overall logs. And if the language-idiomatic logging framework has a security-event kind of convenience logging level, integrating this output with intrusion detection or SIEM solutions parsing these logs would be easier.

As said, just an idea to create room for further discussion of the pros and cons. Keep up the good work, and I really look forward to the new logging proposal being brought to life.
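For instance, such a level and a small helper could be layered on the draft API in a few lines (the numeric offset and the helper's shape are assumptions; in this draft, lower numbers are more severe):

```go
// A security level between Error and Warn, using one of the gaps in the
// numbering scheme (lower = more severe in this draft).
const SecurityLevel = slog.WarnLevel - 5

// Security logs a security-relevant event with a caller-supplied event type,
// so SIEM tooling can group and count occurrences consistently.
func Security(eventType, msg string, args ...any) {
	slog.Log(SecurityLevel, msg, append([]any{"security_event", eventType}, args...)...)
}
```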
-
And from another perspective, three more ideas related to logging that came to my mind right now, again just to inspire discussion of the pros and cons:

Correlation-ID for a Request

Especially for distributed backend systems (with many microservices or asynchronous flows) it could be helpful to propagate a unique identifier for the request through the architecture (like a UUID created on an API gateway) and have it appended/prefixed on every log created under that request. Something like the Java-style Log MDC (Message Diagnostic Context), which was essentially a ThreadLocal kind of stack to add some prefix information tied to one particular request and placed in front of any log statement thereunder automatically. Using a middleware to create and push/pop this could easily be done in most HTTP routing frameworks, allowing for easier analysis of single requests under heavy load. Possibly the Context part of this proposal can be used for this by the popular HTTP frameworks, but I wonder how this ID might also propagate into developer-direct usages of this logging proposal automatically? (I have to read the spec proposal more on this, I admit.)

Ring-Buffer style of Debug-preserving Appending on Error-Conditions

This idea is partly taken from Java Util Logging, where a memory-based appender-like construct was used to solve one of the most annoying and sometimes pressing problems: not having enough detailed logs to analyze a certain production error condition (often leading to attempts to re-create these conditions on test systems to gather otherwise unavailable debug logs). The idea of ring-buffer based logging would be to have an in-memory ring buffer (of size n) containing logs (each log entry is an entry in the ring buffer), with two level-thresholds defined upon creation. Let's take for example "Info" and "Error" as the two level-thresholds of the ring buffer.

This design allows you to always have the last n detailed log statements from before a critical event (like an error) occurred, and these n logs include the debug logs from before the error condition occurred, containing exactly the developer-valuable information about what caused that error or led up to it. Otherwise debug logs are not sent to the appender. This level-threshold based ring-buffer implementation (as an optional use, of course) plays very well with the numeric-based ordering of even custom levels. This might be helpful in analyzing production issues, but on the downside might impact performance, as the debug log string is always created (though at least not always sent to appenders like a file etc., which happens only on errors or similar high-threshold conditions). Also, some overlapping of timestamps in the logs needs to be addressed, depending on the appending strategy used: if only the output of the ring buffer (i.e. the elements removed at the end) is further logged (when conforming to the threshold of, say, "Info" or higher), then a hard crash could lose these n not-yet-logged elements. To avoid this, the strategy of appending statements higher than or equal to the defined threshold (say "Info") directly upon entry into the ring buffer can be used. But then, when on an error condition the complete ring buffer (including "Debug") gets logged, there might be duplicates of those.

Secret Filter for Logs (PII, Credentials, Tokens, etc.)

To avoid the (unfortunately often seen) problem of accidentally logging sensitive information like Personally Identifiable Information (PII), credentials of backend systems, tokens and the like, which are sometimes logged as part of error messages by accident, it would be helpful for a logging API to provide some kind of global hooks to plug in secret-masking plugins. These could be created and configured by the individual dev teams, possibly taking regulatory requirements into account. Examples could be to redact email addresses, credit card numbers, session/token identifiers, cloud access keys, api-tokens, etc. based on a list of regexes or, better, simpler filter conditions (as performance otherwise might be an issue). The idea would be to have an easy way of hooking this into the logging API and also some kind of reference implementation to optionally use for the most common use-cases defined above. Again, sorry for the long post, just my 2 cents about some issues I've had with different logging implementations.
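For the third idea, the proposal's ReplaceAttr option already provides one such hook; a sketch (the options struct name and the key list are assumptions):

```go
opts := slog.HandlerOptions{
	ReplaceAttr: func(a slog.Attr) slog.Attr {
		switch a.Key() {
		case "password", "credit_card", "api_token", "session_id":
			return slog.String(a.Key(), "[REDACTED]")
		}
		return a
	},
}
_ = opts // passed to the text or JSON handler constructor
```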
-
I wonder if it would be possible for the runtime/compiler to eliminate logging (basically make logging a zero-cost operation) via dead code elimination in one shape or form? I want to stress that what is important is not how you do it, but just being able to do it.
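One shape that already works today is guarding calls behind a boolean constant, so the compiler can delete the branch and the evaluation of its arguments (a sketch; flipping the constant per build would need separate files, build tags or code generation):

```go
// With debugLogs set to false, the guarded call below - including the call
// to expensiveDump - is removed at compile time.
const debugLogs = true

func expensiveDump() string { return "..." }

func process() {
	if debugLogs {
		slog.Debug("processing", "state", expensiveDump())
	}
}
```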
-
Could this add support for integrating with OpenTelemetry tracing? With zap this already works: a simple custom wrapper will do. Some purposes of doing this:
typical usage: ctxWithSpan, newSpan := otel.Tracer("my-tracer-name").Start(ctx, spanName)
defer newSpan.End()
myCtxLog := slog.WithContext(ctxWithSpan)
myCtxLog.Info("I hope this log print a field named 'trace_id' and has the trace id of my new created span")
myCtxLog.Info("how to do it with slog?")
myCtxLog.Error("this is an error, I hope this also can mark this span as an error", errors.New("example error"))

Currently I have to do this if I need the trace_id:

const TraceIDKey = "trace_id"
type TracingHandler struct {
handler slog.Handler
}
func NewTracingHandler(h slog.Handler) *TracingHandler {
// avoid chains of TracingHandlers.
if lh, ok := h.(*TracingHandler); ok {
h = lh.Handler()
}
return &TracingHandler{h}
}
// Enabled implements Handler.Enabled by reporting whether
// level is at least as large as h's level.
func (h *TracingHandler) Enabled(level slog.Level) bool {
return h.handler.Enabled(level)
}
func traceID(ctx context.Context) string {
if span := trace.SpanContextFromContext(ctx); span.HasTraceID() {
return span.TraceID().String()
}
return ""
}
// Handle implements Handler.Handle.
func (h *TracingHandler) Handle(r slog.Record) error {
if traceID := traceID(r.Context); traceID != "" {
// r.AddAttrs will only affect current record, not the Logger's
// r.AddAttrs(slog.String(TraceIDKey, traceID))
h.handler = h.handler.WithAttrs([]slog.Attr{slog.String(TraceIDKey, traceID)})
// this also can only affect current record
if r.Level >= slog.LevelError {
span := trace.SpanFromContext(r.Context)
if span.IsRecording() {
span.SetStatus(codes.Error, r.Message)
}
}
}
return h.handler.Handle(r)
}
// WithAttrs implements Handler.WithAttrs.
func (h *TracingHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
return NewTracingHandler(h.handler.WithAttrs(attrs))
}
// WithGroup implements Handler.WithGroup.
func (h *TracingHandler) WithGroup(name string) slog.Handler {
return NewTracingHandler(h.handler.WithGroup(name))
}
// Handler returns the Handler wrapped by h.
func (h *TracingHandler) Handler() slog.Handler {
return h.handler
}

The problem with my TracingHandler shows up when using it like this:
ctxWithSpan, newSpan := otel.Tracer("my-tracer-name").Start(ctx, "hello.Slog")
defer newSpan.End()
ctxLog := slog.With("foo", "bar").WithContext(ctxWithSpan)
ctxLog.Info("hello world")
ctxLog.With("foo", "bar").Error("have a nice day", io.ErrClosedPipe)
ctxLog.Error("have a nice day", io.ErrClosedPipe) |
-
Hello, I have read the discussions but did not notice one important topic. For example, if we have the following handler:
We wrote tests on it, and ran them:

func Test_handle(t *testing.T) {
for i := 0; i < 10; i++ {
handle(context.Background())
}
}
func Test_handle_fail(t *testing.T) {
handle(context.Background())
t.Fail()
}

I expect to see error information in the failing test's output.
But I'll be disappointed to see:
The ability to set the default logger

In my opinion it's not necessary to provide the ability to set the default logger, and the ability to get values from the context should be moved out into separate packages, or even done through a creational pattern, for example like this:

type contextKey struct{}
func NewLogManager(logger *slog.Logger) *LogManager {
return &LogManager{
defaultLogger: logger,
}
}
type LogManager struct {
defaultLogger *slog.Logger
}
// NewContext returns a context that contains the given Logger.
// Use FromContext to retrieve the Logger.
func (m *LogManager) NewContext(ctx context.Context) context.Context {
return context.WithValue(ctx, contextKey{}, m.defaultLogger)
}
// FromContext returns the Logger stored in ctx by NewContext, or the default
// Logger if there is none.
func (m *LogManager) FromContext(ctx context.Context) *slog.Logger {
if l, ok := ctx.Value(contextKey{}).(*slog.Logger); ok {
return l
}
return m.defaultLogger
}

The main problems with the default global logger (IMHO):
-
I'd like to output the default Logger's output to stdout and a file. So far I've figured out that I need to create a new logger. It feels like I have to jump through many hoops just to add another io.Writer. Then I have to figure out how to use TextHandler... only to give up and copy the slog source code, and then realize there's a whole lot of source code behind the default logging. Now I'm wondering whether it's worth investing time into getting this muxed output to work.
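For what it's worth, one way this might look, assuming the draft's NewTextHandler accepts an io.Writer and SetDefault installs the default logger (both assumptions):

```go
f, err := os.OpenFile("app.log", os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o644)
if err != nil {
	log.Fatal(err)
}
defer f.Close()

// Fan the handler's single writer out to both destinations.
w := io.MultiWriter(os.Stdout, f)
slog.SetDefault(slog.New(slog.NewTextHandler(w)))
```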
-
@biohazduck, did you try passing an …?
-
As I noted in #54763 (reply in thread), a common and very useful pattern is to be able to log to multiple outputs; even more importantly, it is helpful to give each output its own formatter. Looking at Python, it has a LogHandler for the output and a LogFormatter for generating the format. It would be useful to have this same kind of functionality here; however, it seems like the LogHandler here conflates those two functions. Could we split those apart so we can get more versatility?
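A fan-out handler is one way to get there; here is a sketch against the three-method Handler interface from the draft at the bottom of this post (later revisions split With into WithAttrs and WithGroup, as the TracingHandler example above shows):

```go
// multiHandler forwards each record to several handlers, each of which can
// format and filter independently.
type multiHandler struct{ handlers []slog.Handler }

func (m multiHandler) Enabled(level slog.Level) bool {
	for _, h := range m.handlers {
		if h.Enabled(level) {
			return true
		}
	}
	return false
}

func (m multiHandler) Handle(r slog.Record) error {
	var firstErr error
	for _, h := range m.handlers {
		if err := h.Handle(r); err != nil && firstErr == nil {
			firstErr = err
		}
	}
	return firstErr
}

func (m multiHandler) With(attrs []slog.Attr) slog.Handler {
	hs := make([]slog.Handler, len(m.handlers))
	for i, h := range m.handlers {
		hs[i] = h.With(attrs)
	}
	return multiHandler{hs}
}
```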
-
I've never shared my approach to logging, but I've thought a lot about it, so it may be a good time to share it now. These are the key points I came to:

So in short it looks like this:

PS: Span start and span finish are two different events with the same ID (as is any span.Printw). The span events are not stored in memory. The full code is here: https://github.com/nikandfor/tlog
-
This discussion has led to a proposal and is now finished. Please comment on the proposal.
We would like to add structured logging with levels to the standard library. Structured logging is the ability to output logs with machine-readable structure, typically key-value pairs, in addition to a human-readable message. Structured logs can be parsed, filtered, searched and analyzed faster and more reliably than logs designed only for people to read. For many programs that aren't run directly by a person, like servers, logging is the main way for developers to observe the detailed behavior of the system, and often the first place they go to debug it. Logs therefore tend to be voluminous, and the ability to search and filter them quickly is essential.
In theory, one can produce structured logs with any logging package:
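For instance, structure can be hand-rolled on top of log.Printf (a sketch of the status quo, not part of the proposal):

```go
package main

import "log"

func main() {
	msg, count := "hello", 3
	// The caller is responsible for keys, quoting, escaping and types.
	log.Printf("msg=%q count=%d", msg, count)
}
```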
In practice, this is too tedious and error-prone, so structured logging packages provide an API for expressing key-value pairs. This draft proposal contains such an API.
We also propose generalizing the logging "backend." The log package provides control only over the io.Writer that logs are written to. In the new package (tentative name: log/slog), every logger has a handler that can process a log event however it wishes. Although it is possible to have a structured logger with a fixed backend (for instance, zerolog outputs only JSON), having a flexible backend provides several benefits: programs can display the logs in a variety of formats, convert them to an RPC message for a network logging service, store them for later processing, and add to or modify the data.

Lastly, we include levels in our design, in a way that accommodates both traditional named levels and logr-style verbosities.
Our goals are:
Ease of use. A survey of the existing logging packages shows that programmers want an API that is light on the page and easy to understand. This proposal adopts the most popular way to express key-value pairs, alternating keys and values.
High performance. The API has been designed to minimize allocation and locking. It provides an alternative to alternating keys and values that is more cumbersome but faster (similar to Zap's Fields).

Integration with runtime tracing. The Go team is developing an improved runtime tracing system. Logs from this package will be incorporated seamlessly into those traces, giving developers the ability to correlate their program's actions with the behavior of the runtime.
What Does Success Look Like?
Go has many popular structured logging packages, all good at what they do. We do not expect developers to rewrite their existing third-party structured logging code en masse to use this new package. We expect existing logging packages to coexist with this one for the foreseeable future.
We have tried to provide an API that is pleasant enough to prefer to existing packages in new code, if only to avoid a dependency. (Some developers may find the runtime tracing integration compelling.) We also expect newcomers to Go to become familiar with this package before learning third-party packages, so they will naturally prefer it.
But more important than any traction gained by the "frontend" is the promise of a common "backend." An application with many dependencies may find that it has linked in many logging packages. If all of the packages support the handler interface we propose, then the application can create a single handler and install it once for each logging library to get consistent logging across all its dependencies. Since this happens in the application's main function, the benefits of a unified backend can be obtained with minimal code churn. We hope that this proposal's handlers will be implemented for all popular logging formats and network protocols, and that every common logging framework will provide a shim from their own backend to a handler. Then the Go logging community can work together to build high-quality backends that all can share.
Prior Work
The existing log package has been in the standard library since the release of Go 1 in March 2012. It provides formatted logging, but not structured logging or levels.

Logrus, one of the first structured logging packages, showed how an API could add structure while preserving the formatted printing of the log package. It uses maps to hold key-value pairs, which is relatively inefficient.

Zap grew out of Uber's frustration with the slow log times of their high-performance servers. It showed how a logger that avoided allocations could be very fast.
zerolog reduced allocations even further, but at the cost of reducing the flexibility of the logging backend.
All the above loggers include named levels along with key-value pairs. Logr and Google's own glog use integer verbosities instead of named levels, providing a more fine-grained approach to filtering high-detail logs.
Other popular logging packages are Go-kit's log, HashiCorp's hclog, and klog.
Overview of the Design
Here is a short program that uses some of the new API:
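A sketch of such a program, written against the draft API described in this post (the import path is the tentative name, and SetDefault, NewTextHandler, ErrorLevel and LogAttrs follow the draft's naming, which later changed):

```go
package main

import (
	"net"
	"os"

	"log/slog" // tentative package name from this proposal
)

func main() {
	// Install a handler that writes logfmt-like text to standard error.
	slog.SetDefault(slog.New(slog.NewTextHandler(os.Stderr)))

	// Alternating key-value pairs follow the message.
	slog.Info("hello", "name", "Al")

	// The Error convenience function takes an error plus key-value pairs.
	slog.Error("oops", net.ErrClosed, "status", 500)

	// The same record, built from Attrs to avoid allocations.
	slog.LogAttrs(slog.ErrorLevel, "oops",
		slog.Int("status", 500), slog.Any("err", net.ErrClosed))
}
```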
It begins by setting the default logger to one that writes log records in an easy-to-read format similar to logfmt . (There is also a built-in handler for JSON.)
The program then outputs three log messages augmented with key-value pairs. The first logs at the Info level, passing a single key-value pair along with the message. The second logs at the Error level, passing an error and a key-value pair.

The third produces the same output as the second, but more efficiently. Functions like Any and Int construct slog.Attr values, which are key-value pairs that avoid memory allocation for some values. slog.Attr is modeled on zap.Field.

The Design
Interaction Between Existing and New Behavior
The slog package works to ensure consistent output with the log package. Writing to slog's default logger without setting a handler will write structured text to log's default logger. Once a handler is set, as in the example above, the default log logger will send its text output to the structured handler.

Handlers

A slog.Handler describes the logging backend. It is defined as:
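A sketch of that interface, assembled from the method descriptions that follow (the comments are mine and the exact signatures are assumptions):

```go
type Handler interface {
	// Enabled reports whether the handler is interested in records at the
	// given level. It is called early, before arguments are processed.
	Enabled(level Level) bool

	// Handle processes one Record: the time, message, level, source
	// position and attributes of a single log event.
	Handle(r Record) error

	// With returns a new Handler with the given attributes pre-applied.
	// It is called by Logger.With.
	With(attrs []Attr) Handler
}
```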
The main method is Handle. It accepts a slog.Record with the timestamp, message, level, caller source position, and key-value pairs of the log event. Each call to a Logger output method, like Info, Error or LogAttrs, creates a Record and invokes the Handle method.

The Enabled method is an optimization that can save effort if the log event should be discarded. Enabled is called early, before any arguments are processed.

The With method is called by Logger.With, discussed below.

The slog package provides two handlers, one for simple textual output and one for JSON. They are described in more detail below.
The Record Type

The Record passed to a handler exports Time, Message and Level methods, as well as four methods for accessing the sequence of Attrs:

Attrs() []Attr returns a copy of the Attrs as a slice.
NumAttrs() int returns the number of Attrs.
Attr(int) Attr returns the i'th Attr.
SetAttrs([]Attr) replaces the sequence of Attrs with the given slice.

This API allows an efficient implementation of the Attr sequence that avoids copying and minimizes allocation. SetAttrs supports "middleware" handlers that want to alter the Attrs, say by removing those that contain sensitive data.
The Attr Type

The Attr type efficiently represents a key-value pair. The key is a string. The value can be any type, but Attr improves on any by storing common types without allocating memory. In particular, integer types and strings, which account for the vast majority of values in log messages, do not require allocation. The default version of Attr uses package unsafe to store any value in three machine words. The version without unsafe requires five.

There are convenience functions for constructing Attrs with various value types:

Int(k string, v int) Attr
Int64(k string, v int64) Attr
Uint64(k string, v uint64) Attr
Float64(k string, v float64) Attr
String(k, v string) Attr
Bool(k string, v bool) Attr
Duration(k string, v time.Duration) Attr
Time(k string, v time.Time) Attr
Any(k string, v any) Attr

The last of these dispatches on the type of v, using a more efficient representation if Attr supports it and falling back to an any field in Attr if not.

The Attr.Key method returns the key. Extracting values from an Attr is reminiscent of reflect.Value: there is a Kind method that returns an enum, and a variety of methods like Int64() int64 and Bool() bool that return the value or panic if it is the wrong kind. Attr also has an Equal method, and an AppendValue method that efficiently appends a string representation of the value to a []byte, in the manner of the strconv.AppendX functions.

Loggers
A Logger consists of a handler and a list of Attrs. There is a default logger with no attributes whose handler writes to the default log.Logger, as explained above. Create a Logger with New, and add attributes to it with With. The arguments to With are interpreted as alternating string keys and arbitrary values, which are converted to Attrs. Attrs can also be passed directly. Loggers are immutable, so this actually creates a new Logger with the additional attributes. To allow handlers to preprocess attributes, the new Logger's handler is obtained by calling Handler.With on the old one. You can obtain a logger's handler with Logger.Handler.

The basic logging methods are Log, which logs a message at the given level with a list of attributes that are interpreted just as in Logger.With, and the more efficient LogAttrs. These functions first call Handler.Enabled(level) to see if they should proceed. If so, they create a Record with the current time, the given level and message, and a list of attributes that consists of the receiver's attributes followed by the argument attributes. They then pass the Record to Handler.Handle. Each of these methods has an alternative form that takes a call depth, so other functions can wrap them and adjust the source line information.

There are four convenience methods for common levels: Debug, Info, Warn and Error. They all call Log with the appropriate level; Error first appends Any("err", err) to the attributes. There are no convenience methods for LogAttrs. We expect that most programmers will use the more convenient API; those few who need the extra speed will have to type more, or provide wrapper functions.

All the methods described in this section are also names of top-level functions that call the corresponding method on the default logger.
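Pulling the calls in this section together, a usage sketch under the draft API (the JSON handler constructor and exact signatures are assumptions):

```go
package main

import (
	"errors"
	"os"
	"time"

	"log/slog" // tentative package name from this proposal
)

func main() {
	// Create a Logger from a handler.
	logger := slog.New(slog.NewJSONHandler(os.Stdout))

	// With returns a new, immutable Logger carrying extra attributes;
	// keys and values alternate, and Attrs can be mixed in directly.
	logger = logger.With("request_id", "abc123", slog.Int("attempt", 1))

	start := time.Now()

	// Convenience methods for the common levels.
	logger.Info("starting", "path", "/items")
	logger.Error("fetch failed", errors.New("timeout"), "url", "https://example.com")

	// The faster, more verbose form.
	logger.LogAttrs(slog.InfoLevel, "finished",
		slog.Duration("elapsed", time.Since(start)))
}
```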
Context Support
Passing a logger in a context.Context is a common practice and a good way to include dynamically scoped information in log messages. For instance, you could construct a Logger with information from an http.Request and pass it through the code that handles the request by adding it to r.Context().

The slog package has two functions to support this pattern. One adds a Logger to a context. As an example, an HTTP server might want to create a new Logger for each request; the logger would contain request-wide attributes and be stored in the context for the request. To retrieve a Logger from a context, call FromContext.
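For example (a sketch; FromContext is named in the text, while the adding function's name, here NewContext, and the middleware details are assumptions):

```go
func logMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Build a request-scoped Logger and store it in the request context.
		l := slog.With("method", r.Method, "path", r.URL.Path)
		ctx := slog.NewContext(r.Context(), l)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

func handleItems(w http.ResponseWriter, r *http.Request) {
	// Deeper in the call tree, retrieve that Logger (or the default one).
	l := slog.FromContext(r.Context())
	l.Info("listing items")
}
```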
FromContext returns the default logger if it can't find one in the context.

Levels
A level is a positive integer, where lower numbers designate more severe or important log events. The slog package provides names for common levels, with gaps between the assigned numbers to accommodate other level schemes. (For example, Google Cloud Platform supports a Notice level between Info and Warn.)

Some logging packages like glog and Logr use verbosities instead, where a verbosity of 0 corresponds to the Info level and higher values represent less important messages. To use a verbosity of v with this design, pass slog.InfoLevel + v to Log or LogAttrs.

Provided Handlers
The slog package includes two handlers, which behave similarly except for their output format. TextHandler emits attributes as KEY=VALUE, and JSONHandler writes line-delimited JSON objects. Both can be configured with the same options:
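A sketch of what that options struct might look like, assembled from the three options described next (the struct name and field types are assumptions):

```go
type HandlerOptions struct {
	// AddSource adds the file and line of the logging call to the output.
	AddSource bool

	// LevelRef is a safely mutable reference to the minimum level the
	// handler will output.
	LevelRef LevelRef

	// ReplaceAttr is called for every attribute, including the built-in
	// time, message, level and (if AddSource is set) source attributes,
	// and may rewrite or drop it.
	ReplaceAttr func(a Attr) Attr
}
```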
The boolean AddSource option controls whether the file and line of the log call are included in the output. It is false by default, because there is a small cost to extracting this information.

The LevelRef option, of type LevelRef, provides control over the maximum level that the handler will output. For example, setting a handler's LevelRef to Info will suppress output at Debug and higher levels. A LevelRef is a safely mutable pointer to a level, which makes it easy to dynamically and atomically change the logging level for an entire program.

To provide fine control over output, the ReplaceAttr option is a function that both accepts and returns an Attr. If present, it is called for every attribute in the log record, including the four built-in ones for time, message, level and (if AddSource is true) the source position. ReplaceAttr can be used to change the default keys of the built-in attributes, convert types (for example, to replace a time.Time with the integer seconds since the Unix epoch), sanitize personal information, or remove attributes from the output.

Interoperating with Other Log Packages
As stated earlier, we expect that this package will interoperate with other log packages.
One way that could happen is for another package's frontend to send slog.Records to a slog.Handler. For instance, a logr.LogSink implementation could construct a Record from a message and list of keys and values, and pass it to a Handler. To facilitate that, slog provides a way to construct Records directly and add attributes to them:
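Roughly like this (a sketch of a logr.LogSink adapter; the NewRecord constructor's name and parameters are assumptions, while AddAttrs appears elsewhere in this discussion):

```go
type slogSink struct{ handler slog.Handler }

func (s *slogSink) Info(level int, msg string, keysAndValues ...interface{}) {
	// logr verbosities map onto this design as InfoLevel + v.
	r := slog.NewRecord(time.Now(), slog.InfoLevel+slog.Level(level), msg, 0)
	for i := 0; i+1 < len(keysAndValues); i += 2 {
		if key, ok := keysAndValues[i].(string); ok {
			r.AddAttrs(slog.Any(key, keysAndValues[i+1]))
		}
	}
	_ = s.handler.Handle(r)
}
```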
Another way for two log packages to work together is for the other package to wrap its backend as a slog.Handler, so users could write code with the slog package's API but connect the results to an existing logr.LogSink, for example. This involves writing a slog.Handler that wraps the other logger's backend. Doing so doesn't seem to require any additional support from this package.

Acknowledgements
Ian Cottrell's ideas about high-performance observability, captured in the golang.org/x/exp/event package, informed a great deal of the design and implementation of this proposal.

Seth Vargo's ideas on logging were a source of motivation and inspiration. His comments on an earlier draft helped improve the proposal.
Michael Knyszek explained how logging could work with runtime tracing.
Tim Hockin helped us understand logr's design choices, which led to significant improvements.
Abhinav Gupta helped me understand Zap in depth, which informed the design.
Russ Cox provided valuable feedback and helped shape the final design.
Appendix: API