
An exception to these general rules (first names appearing in surnames) is Peterman: you'd think it derived from a relative named Peter, but it is actually the name of a profession. A peterman was someone tasked with finding deposits of saltpeter for the production of fertilizer and gunpowder.

There's partial documentation on Wikipedia: https://en.wikipedia.org/wiki/Saltpetre_works

A better narrative of this industry is in Ed Conway's book "Material World" https://edconway.substack.com/p/welcome-to-the-material-worl...


Both 'Peter' the name and the 'peter' in 'saltpeter' likely share the same etymological origin, meaning something like 'rock' or 'stone'.


Yes, 'petrus' is Latin, from the Greek 'petros', meaning rock or stone; Saint Peter was given the name as "the rock" on which the church would be built.


"Then in the distance, I heard the bulls. I began running as fast as I could. Fortunately, I was wearing my Italian Cap Toe Oxfords." - J. Peterman


lol that was me, finally upgraded! love the service!


Thanks so much!


Yeah. My dad told me more than one horror story of early tech startups buying truckloads of hardware to scale way ahead of demand growth. And I remember when getting on Slashdot meant your service would inevitably go down.

Of course, given stable demand and known requirements, bare metal can be a great option. But it’s not strictly better than public cloud hosting.

I think it’s just been long enough that people have forgotten the limitations of bare metal engineering.


> I think it’s just been long enough that people have forgotten the limitations of bare metal engineering.

Not just engineering, but deploying it too. I'm building a whole business around deploying it, based at least partially on the fact that it has been forgotten. I've just been doing it long enough that I remember how, and it isn't getting any easier as the need for compute grows into ever more complex and powerful hardware.


Slightly longer: requiring many versions of the same SDK added to startup time.


It’s a 50% reduction in startup time, and each “run” for a pod is fairly quick.


It's surprising how much asset re-use doesn't happen because of small technical hurdles, e.g. file incompatibility. Interestingly, for an experienced artist, having a whole library of assets to base new assets on saves a ton of time, and is almost as fast as direct re-use.


oooooh I love this, I was writing against mocks a few months ago, and love to see others talking about more sophisticated testing methods: https://www.signadot.com/blog/why-developers-shouldnt-write-...


Something I mention any time I'm introducing OpenTelemetry is that it's an unfinished project, a huge missing piece being the unifying abstractions between its signals (traces, metrics, and logs).

In part this is a very practical decision: most people already have pretty good tools for their logs, and have struggled to get tracing working. So it's better to work on tools for measuring and sending traces, and just let people export their current log stream via the OpenTelemetry Collector.

Notably the OTel docs acknowledge this mismatch between current implementation and design goals: https://opentelemetry.io/docs/specs/otel/logs/#limitations-o...


I think that a number of observability providers are looking at how they can add features and value to the parts of monitoring that OTel effectively commoditizes. I'm thinking of the tail-based sampling implemented at Honeycomb for APM, or the synthetic monitoring by my own team at Checkly.

"In 2015 Armin and I built a spec for Distributed Tracing. Its not a hard problem, it just requires an immense amount of coordination and effort." This to me feels like a nice glass of orange juice after brushing my teeth. The spec on DT is very easy, but the implementation is very very hard. The fact that OTel has nurtured a vast array of libraries to aid in context propagation is a huge acheivement, and saying 'This would all work fine if everyone everywhere adopted Sentry' is... laughable.

Totally outside the O11y space, OTel context propagation is an intensely useful feature because of how widespread it is. See Signadot implementing their smart test routing with OpenTelemetry: https://www.signadot.com/blog/scaling-environments-with-open...
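
The routing trick works because baggage rides along with the trace context on every instrumented hop. A rough sketch of the idea (the "sandbox-routing-key" name is invented for illustration, not Signadot's actual header, and it assumes the default W3C baggage propagator is registered):

    import { context, propagation } from "@opentelemetry/api";

    // Attach a routing key as baggage; instrumented clients will
    // forward it on every outgoing hop automatically.
    const baggage = propagation.createBaggage({
      "sandbox-routing-key": { value: "my-feature-branch" },
    });
    const ctx = propagation.setBaggage(context.active(), baggage);

    context.with(ctx, () => {
      // Calls made here carry the baggage header; a routing proxy
      // downstream can read it and steer the request to a sandboxed
      // version of the target service.
    });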


An argument that OpenTelemetry is somehow 'too big' is an example of motivated reasoning. I can understand that A Guy Who Makes Money If You Use Sentry dislikes that people are using OTel libraries to solve similar problems.

Context propagation and distributed tracing are cool OTel features! But they're not the only things OTel should be doing. OpenTelemetry instrumentation libraries can do a lot on their own; a friend of mine found massive compute-efficiency savings with the Node.js OTel library: https://www.checklyhq.com/blog/coralogix-and-opentelemetry-o...
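
And the setup cost for wins like that is small. A minimal sketch of a Node.js bootstrap, assuming a local OTLP endpoint on the default port (the service name here is made up):

    import { NodeSDK } from "@opentelemetry/sdk-node";
    import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";
    import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

    // Start the SDK before the rest of the app is loaded so the
    // auto-instrumentations can patch http, express, pg, and friends.
    const sdk = new NodeSDK({
      serviceName: "example-service",
      traceExporter: new OTLPTraceExporter(), // defaults to http://localhost:4318
      instrumentations: [getNodeAutoInstrumentations()],
    });
    sdk.start();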


Author here.

OpenTelemetry is not competitive with us (it doesn't do the bulk of what we do), and we specifically want to see the open tracing goals succeed.

I was pretty clear about that in the post though.


I think that it's disingenuous to say OpenTelemetry and Sentry aren't in competition. I think it would be good news for Sentry if DT were split from the project, and instrumentation and performance monitoring weren't commoditized by broad adoption of those parts of the OpenTelemetry project.

I think you, the author, stand to benefit directly from a breakup of OpenTelemetry, and a refusal to acknowledge your own bias is problematic when your piece starts with a request to 'look objectively.'


We just rewrote our most heavily used SDK to run on top of OTel. What do we gain from it failing?

We also make most of our revenue from errors which don’t have an open protocol implementation outside of our own.


Your error stuff is pretty damn cool btw.

