Logging is the lowest of all debugging utilities - it's the first thing you ever do when writing software: "hello world". And while I admire structured logging, the truth is that printing strings remains (truly) the lowest common denominator across software developers.
I often see confusion about how to interpolate variables - to say nothing of zero standardization on field formatting, and poorly managed logging infrastructure besides.
Adding another barrier to entry (protobufs) isn't going to drive better usage. Any solution I could prescribe would inevitably chop off the long tail of SWEs (raise the common denominator), and I think that's going to be quite an unpopular position to advance in any established company.
To be clear: structure is good, our expectations of how to log structurally are too low, and introducing a build step to compile protobufs to native code, then dealing with a poorly generated (in my opinion) API to log through, sounds like a miserable experience.
Yes, or any other data format and/or transport protocol.
I'm surprised this is up for debate.
> Logging is the lowest of all debugging utilities - it's the first thing you ever do when writing software: "hello world". And while I admire structured logging, the truth is that printing strings remains (truly) the lowest common denominator across software developers.
This sort of comment is terribly myopic. You can have a logging API and then configure your logging to transport the events anywhere, in any way. This is a terribly basic feature and requirement, and one that comes out of the box with some systems. Check how pervasive SLF4J[1] is in Java, and how any SLF4J implementation offers logging to stdout or a local file as one very specific, basic use case.
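To illustrate the decoupling being described - call sites talk to a logging API, while the transport (stdout, file, network) is chosen once at configuration time - here is a minimal sketch using the JDK's own `java.util.logging`, which exhibits the same API/backend split that SLF4J implementations provide (the class and method names here are illustrative, not from any comment in the thread):

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogDemo {
    // Build a logger whose transport (Handler) is injected at setup time.
    // Swapping ConsoleHandler for a FileHandler or a custom network
    // handler changes nothing at any call site.
    static Logger buildLogger(Handler handler) {
        Logger logger = Logger.getLogger("demo");
        logger.setUseParentHandlers(false); // detach the default console handler
        for (Handler h : logger.getHandlers()) {
            logger.removeHandler(h);
        }
        logger.addHandler(handler);
        return logger;
    }

    public static void main(String[] args) {
        Logger logger = buildLogger(new ConsoleHandler());
        // Parameterized message: no string concatenation at the call site.
        logger.log(Level.INFO, "user {0} logged in", "alice");
    }
}
```

The point is that "printing strings" and "shipping structured events to a remote collector" can share the exact same call sites; only the handler wiring differs.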
It turns out that nowadays most developers write software that runs on many computers that aren't stashed over or under their desks, and thus they need efficient and convenient ways to check what's happening, either in a single node or across all deployments.
What I found was that it's typically not binary encoding versus string encoding that makes the difference. The biggest factors are "is there a predefined schema", "is there a precompiler that will generate code for this schema", and "what is the complexity of the output format". With that in mind, if you are dealing with chaotic semi-structured data, JSON is pretty good, and actually faster than some binary encodings:
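The "no predefined schema" property is worth making concrete: a JSON log event can gain or lose fields per call with no precompiler or generated code in the loop. A hand-rolled encoder sketch (hypothetical, no string escaping, just to show the shape):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class JsonLogEvent {
    // Encode a flat map of string/number fields as a JSON object.
    // Semi-structured data needs no precompiled schema here, which is
    // the property the comment above credits JSON with. Strings are
    // not escaped - this is a sketch, not a production encoder.
    static String toJson(Map<String, Object> fields) {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, Object> e : fields.entrySet()) {
            if (!first) sb.append(",");
            first = false;
            sb.append('"').append(e.getKey()).append("\":");
            Object v = e.getValue();
            if (v instanceof Number) sb.append(v);
            else sb.append('"').append(v).append('"');
        }
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        Map<String, Object> event = new LinkedHashMap<>();
        event.put("level", "info");
        event.put("msg", "user logged in");
        event.put("attempt", 3); // a new field requires no schema change
        System.out.println(toJson(event));
        // {"level":"info","msg":"user logged in","attempt":3}
    }
}
```

Contrast this with a protobuf workflow, where adding `attempt` means editing a `.proto` file, rerunning the compiler, and rebuilding every consumer.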
Yes, I have used (and currently use) SLF4J. It's not a panacea. Maybe you also write software that serves millions of MAUs with engineering teams spread across the world - and have seen immensely different results. That would be a breath of fresh air.
However, as a stickler for quality, I find myself on the wrong end of this discussion in practice. I insist: logging is truly the LCD of software development. Introducing unergonomic data structures generated from protobufs strikes me as something the long tail would ignore, instead reaching for something simpler to log `here`, `here1`, etc.