"It might help to go over a non-exhaustive list of things the offical SDK handles that our little learning library doesn’t:
- Buffer and batch outgoing telemetry data in a more efficient format. Don’t send one-span-per-http request in production. Your vendor will want to have words."
- Gracefully handle errors, wrap this library around your core functionality at your own peril"
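The buffering and batching the first bullet alludes to can be sketched in a few lines of plain Python. The names here are illustrative, not the real OpenTelemetry SDK API; a real batch processor also flushes on a timer and on shutdown so spans don't sit in the buffer indefinitely:

```python
# Hypothetical sketch of what a production SDK's batch processor does:
# buffer finished spans and export them in batches instead of making
# one HTTP request per span. (Illustrative names, not the OTel API.)
import threading

class BatchSpanBuffer:
    def __init__(self, export_fn, max_batch_size=512):
        self._export_fn = export_fn        # e.g. one HTTP POST per batch
        self._max_batch_size = max_batch_size
        self._buffer = []
        self._lock = threading.Lock()

    def on_span_end(self, span):
        # Called when a span finishes; flush only when the batch is full.
        with self._lock:
            self._buffer.append(span)
            if len(self._buffer) >= self._max_batch_size:
                self._flush_locked()

    def force_flush(self):
        # Export whatever is buffered, e.g. at shutdown.
        with self._lock:
            self._flush_locked()

    def _flush_locked(self):
        if self._buffer:
            self._export_fn(self._buffer)
            self._buffer = []

batches = []
buf = BatchSpanBuffer(batches.append, max_batch_size=3)
for i in range(7):
    buf.on_span_end({"name": f"span-{i}"})
buf.force_flush()
# 7 spans leave as 3 exports (3 + 3 + 1), not 7 requests
print([len(b) for b in batches])  # → [3, 3, 1]
```

The real SDK's batch processor adds a background worker thread, a scheduled flush interval, and a bounded queue that drops spans under pressure, but the shape of the idea is the same.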
Maybe the confusion here is in comparing different things.
The InfluxData docs you're linking to are similar to Observability vendor docs, which do indeed amount to "here's the endpoint, plug it in here, add this API key, tada".
But OpenTelemetry isn't an observability vendor. You can send to an OpenTelemetry Collector (and the act of sending is simple), but you also need to stand that thing up and run it yourself. There's a lot of good reasons to do that, but if you don't need to run infrastructure right now then it's a lot simpler to just send directly to a backend.
Would it be more helpful if the docs on OTel spelled this out more clearly?
The problem is ecosystem wide - the documentation starts at 8/10 and is written for observability nerds where easy things are hard, and hard things are slightly harder.
I understand the role that all the different parts of OTel play in the ecosystem vs InfluxDB, but if you pay attention to that documentation page, it starts off with the easiest thing (here's how you manually send one metric) and then ramps up the capabilities and functionality from there. OTel docs slam you straight into "here's a complete observability stack for logs, metrics, and traces for your whole k8s deployment".
However, since OTel is not a backend, there's no pluggable endpoint + API key you can just start sending to. Since you were comparing the relative difficulties of sending data to a backend, that's why I responded in kind.
I do agree that it's more complicated, there's no argument there. And the docs have a very long way to go to highlight easier ways to do things and ramp up in complexity. There's also a lot more to document since OTel is for a wider audience of people, many of whom have different priorities. A group not talked about much in this thread is ops folks who are more concerned with getting a base level of instrumentation across a fleet of services, normalizing that data centrally, pulling in from external sources, and making sure all the right keys for common fields are named the right way. OTel has robust tools for (and must document) these use cases as well. And since most of us who work on it do so in spare time, or a part-time capacity at work, it's difficult to cover it all.
The first time it takes 5 minutes to set up locally; from then on you just run the command in a separate terminal tab (or Docker container - they have an image too).
My monitor endgame, as unrealistic as it is, is '5k' 120hz 34" ultrawide. There's some 4k ones out there that might suffice, but 2880px tall is my goal.
Does Microsoft make money from consumer gaming OS sales? Is there an on-ramp from that market to others - do their enterprise sales rely on gamers getting jobs and demanding Microsoft? I think the author vastly overestimates how much of a market this actually is.
I think they make a bunch of money from Windows being the default installed OS on prebuilt PCs and laptops. Gaming PCs and laptops are a pretty large market included in that. There's a chance vendors/builders might start to sell cheaper options that don't include Windows, a savings for both them and their customers.
Will have to see if that actually happens though, even as a power user myself there are still a bunch of pain points with SteamOS/steam deck that are harder to deal with than similar issues in Windows.
You don't always make money because you sell to a specific customer. Sometimes it's about support and network effect. Devices (Scanners, Printers, ...) are compatible with Windows and sometimes only Windows because that's what people use, NVIDIA drivers used to be Windows only because that's where customers were. Does Microsoft make money from these sales? No. Do they make money from having an ecosystem that everyone is supporting? Most definitely.
They make money from lots of random stuff that they advertise in Windows.
The Edge browser for instance, which pushes Bing.
Also, office applications like Word, OneNote, and OneDrive.
They’ve got an app store as well, which is not much used, though probably does bring them some money.
I think they show ads by default in the start menu. Pretty sure it even comes with certain third party applications, like Netflix, which they are no doubt paid for.
At the (current) expense of strapping something big and bulky to my face.
After a long day of wearing glasses, they sometimes fatigue me slightly, just from having to wear something on my face. I can't imagine what strapping on a VR headset would be like.
Exactly. So a productive employee is one who identifies a problem and knows which of the following to do:
- fix it themselves without anyone asking
- bring it up to higher management
- deprioritize it based on severity and leadership initiatives
This is the pattern taught by Jethro to Moses in the Old Testament:
every great matter they shall bring unto thee, but every small matter they shall judge: so shall it be easier for thyself, and they shall bear the burden with thee.
But it can’t function any other way. You are a filter of small problems for everyone in the org higher than you. If you bring everything up the chain, your level may as well not exist.
There are many places out there where individual contributors are agency-less executors.
I don't think it ever worked. (Remember when Japan destroyed the world's car industry just by changing that single thing? And that's industry work, highly repetitive and formalized.) But that never stopped managers from doing something.
What I find odd is that often the biggest proponents of the free market - with its decentralised decision making process (and redundancy of effort) - decide that total centralised, top-down control is the best way to run their own company, as they think top-down decisions and minimising waste through duplication are the way to go.
That's a slightly different angle - if I understand correctly, it's about why firms exist at all. To summarise: lack of trust costs. A firm's boundaries are drawn to balance transactional costs (low within the firm; between firms or individuals they can dominate) against the cost of perhaps not being market-efficient.
My argument above is not about what shapes a firm's boundary but about how it operates internally - too much top-down control risks exacerbating the problems associated with being a company and potentially increases internal transactional costs as well - the worst of both worlds! - all that ceremony around decision making, time spent justifying existences, inability to just act.
Obviously as I said above - it's a balance - just as it is with country/international systems.
> In a “free market” you can choose what firms you work for and with, except the government.
Eh? Nobody ever leaves one country for another?
> The government would be more efficient in an autocratic leadership. But government efficiency is not the societies efficiency and well being
I think the free market proponents would say otherwise - one key problem is the myth of perfect information - you are imagining it's possible to concentrate all the required information to make any correct decision into a very small group of people at the top ( and assuming these people are competent and not corrupt). One of the ideas behind why distributed markets work is the information about what is needed is communicated by the mechanisms of the market itself.
And to bring it back to the original post - does the CEO have all the information required to direct others to meet the customers need across all areas - or is it better to use the collective intelligence of the entire organisation?
And in the real world, not even this is true, due to the cost of satisfying OKRs--opportunity cost is one kind of cost, but optimizing an OKR can of course negatively impact other valuable things that aren't represented by OKRs.
The second diff is actually much clearer at what’s happening.
This is why it irritates me when people forget the trailing comma. It’s not a problem in the moment. But I know it will be a small but avoidable cognitive burden on the next PR (wait, why did "y" change? Oh, right, it didn’t).
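The point is easy to see in a tiny example. Without a trailing comma, appending an item forces a comma onto the previous line, so a naive line diff shows two changed lines instead of one:

```python
# Without a trailing comma, adding "z" changes two lines in the diff:
#   -    "y"
#   +    "y",
#   +    "z",
# With the trailing comma already in place, only one line changes:
#   +    "z",
keys = [
    "x",
    "y",  # trailing comma keeps future diffs one line long
]
keys.append("z")

# The trailing comma is pure syntax: it doesn't change the value.
assert ["x", "y",] == ["x", "y"]
print(keys)  # → ['x', 'y', 'z']
```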
Difftastic would not solve the issue described by madeofpalk because it still highlights the added comma. You need a diff tool that can distinguish between optional and required syntax. So far I am not aware of any tool that supports this, except the one I am working on (SemanticDiff).
Certainly the idea has been suggested many times. I think people end up formatting both before/after and doing a diff on formatted before against formatted after. I've done that.
But diff does not exist in a vacuum. It would need to be integrated into IDEs, editors, merge tools, and PR tools. You now have to have `patch` understand and depend on the details of your language's syntax, in a way that may break between different versions of diff and patch. And that's before getting into variants of a language, different interpretations of how to handle preprocessor/macro systems, and different editions of the same language.
All that, just to not have to add a trailing comma?
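The format-both-sides-then-compare trick mentioned above can be sketched in a few lines, here using Python's `ast` module as a stand-in for running a real formatter over both versions:

```python
# Sketch: detect formatting-only changes by comparing parsed structure
# instead of text. ast is a stand-in for a real formatter here.
import ast

def same_code_modulo_formatting(before: str, after: str) -> bool:
    """Return True if two snippets parse to the same AST, i.e. the
    change is formatting-only (whitespace, trailing commas, ...)."""
    return ast.dump(ast.parse(before)) == ast.dump(ast.parse(after))

# Adding a trailing comma is invisible at the AST level:
print(same_code_modulo_formatting("x = [1, 2]", "x = [1, 2,]"))   # → True
# Adding an element is a real change:
print(same_code_modulo_formatting("x = [1, 2]", "x = [1, 2, 3]")) # → False
```

This only answers "is this hunk noise?", which is much less than a full semantic diff, but it's also the part you can bolt onto existing tooling without teaching `patch` about your language.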
Most things like patch would still use line diff. I think I would want this semantic diff in the merge request review and the logs. It could have definitions distributed with the syntax highlighters in language plugins in an editor/IDE, git CLI, and git forge.
> ²From the global launch date of HMD Key.
So if you buy it in 12 months' time it'll only receive 1 year of security updates.
How long do you reckon this will be on the market for?