Hacker News | anton_gogolev's comments

That’s not called “deleting old code”. This is now “breaking backwards compatibility”.


So you're not welcoming guests with Android phones? Will you be placing their mobile phones in a Faraday cage?

Or will you not be visiting anyone who has a HomePod/Echo at home?


Placing their phones in a Faraday cage sounds like a good idea. But I will probably just have to watch what I say when company is over.


> The Law of Leaky Abstractions is a lie; abstract airtightly

This one... Our entire computing world is a pile of terrible abstractions held together by duct tape and billions of man-hours spent fixing those leaks.


This one is wrong. The Law of Leaky Abstractions is in fact not a lie. All abstractions actually do leak.

People take that as an excuse to be sloppy, which it isn't. You should do the best you can to make non-leaky abstractions. When you have done so, you should look for the leaks, and do your best to plug them. After that, you should do so again. You should really try to make it as solid as you can.

But it will still leak. It will be less garbage than much of what we have to build on, because of the effort you put into making it better. But it will still leak.


I agree. I much prefer the more general “All models are wrong, but some are useful” to the “Law”.


> ...the better your code style, guidelines, linting, infrastructure, error reporting and tests

Doesn't TypeScript kind of solve (at least partially) those issues?


"We have problems, let’s introduce some code style guidelines, linting, error reporting and most importantly tests!" "Nah, let's just switch to TypeScript, it will solve all those problems. At least partially."


I mean, look at the number of tests needed to replace a few lines of types. You have to check the behavior of your functions for all kinds of invalid inputs. With TypeScript you still need tests, but fewer of them. And you can also get rid of a lot of input validation in your production code.


It’s not either/or. Nice strawman you put up here.


No I didn’t. The statement was that TypeScript solves the listed issues, which is obviously not true. There may of course be discussion about whether TS _helps_ to solve such issues, but you won’t be able to convince me that TS _solves_ e.g. lack of testing.


Keeping track is one thing. Actually _running_ the Lernaean Hydra of an application in production is a whole other story. The amount of "housekeeping" you have to do to keep the thing afloat is astounding: cascading failures, distributed tracing, logging and diagnostics, metrics. Even the operational side of things requires a lot of attention. Presumably, each microservice would require at least a minimal level of admin-level tooling around it.


Logging and monitoring is part of the lifecycle. Use strict automated conventions to aid developer teams. Always opt for convention before configuration is our tooling motto! :)

We already ship logs from thousands of servers (you should be too!); adding a shipper for a few hundred containers on a set of hosts is no big deal.

Fluent(d/bit) -> some kind of Elastic? There are a few reasonable patterns available that work and scale pretty well.

Failures and issues with the actual code - well, I might have been lucky... DDD with somewhat senior devs where no spaghetti action takes place. The tooling we keep usually seems to pinpoint issues fairly well.

We’re on the scale of roughly 40 devs and my team of 3 support them with tooling that handles service lifecycle and operational stuff.

It lets us be pretty fluent with what and how teams build and iterate stuff. I guess it requires a certain scale and experience though.


You can experience the same, if not worse, levels of pain and suffering with a monolithic application designed by the same Enterprise Architects.


This is some Java-level explicitness. Maybe, drop the `_seconds` suffix and replace it with something like `retry_window_after_first_call = '60s'`?


I agree it might look a bit excessive, but it greatly cuts down on complexity and possible unexpected behavior.

If you accept '60s' as 60 seconds, then the next questions are:

- Will it also accept '60000ms'?

- What does it do if I pass an int or float anyway? Will it implicitly take it to be seconds?

- Will it throw an exception if I use an unexpected value? If so, which exceptions can I expect? Will it just chug along and use a default?


Something I picked up from another developer is to use timedelta for this type of parameter. It avoids hoping that you get the right granularity for everyone.

   retry_window_after_first_call=timedelta(minutes=3.5)
   retry_window_after_first_call=timedelta(days=2)
   retry_window_after_first_call=timedelta(minutes=50)


Nice! Yes this is the better option. IMO, should just be

  retry_window=timedelta(...)
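A minimal sketch of what accepting only timedelta buys you; `call_with_retries` is a hypothetical helper (not from the thread), but it shows how the unit-ambiguity questions above simply disappear when a bare number is rejected outright:

```python
import time
from datetime import timedelta

def call_with_retries(func, retry_window=timedelta(seconds=60)):
    # Reject bare numbers outright: a plain 60 is ambiguous, a timedelta is not.
    if not isinstance(retry_window, timedelta):
        raise TypeError(
            f"retry_window must be a timedelta, got {type(retry_window).__name__}")
    deadline = time.monotonic() + retry_window.total_seconds()
    while True:
        try:
            return func()
        except Exception:
            if time.monotonic() >= deadline:
                raise  # window exhausted: surface the last failure
            time.sleep(0.05)  # simple fixed backoff for the sketch
```

Passing `retry_window=60` now fails loudly with a TypeError instead of silently being taken as seconds (or milliseconds).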


While the idea of '60s' is interesting, in lieu of that I always err on the side of explicitly specifying the time units when the argument provided is just a number. Not doing so has bitten me too many times.


60s. Was wondering how we could add physical units to Python? Hijack the number literals and variables like this: 1.s, pi.radians? Or maybe the way Go does it: 1 * time.Second?

Would be nice to have.
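You can get quite close to the Go style in plain Python without hijacking literals, since timedelta already multiplies by numbers; the `second`/`minute`/`hour` names and the `set_timeout` consumer here are made up for illustration:

```python
from datetime import timedelta

# Go-style named durations: callers write `90 * second` or `3 * minute`
# instead of an ambiguous bare number.
second = timedelta(seconds=1)
minute = 60 * second
hour = 60 * minute

def set_timeout(duration: timedelta) -> float:
    # Hypothetical consumer: convert to raw seconds only at the boundary.
    return duration.total_seconds()
```

With this, `set_timeout(3 * minute)` returns 180.0, and the units are visible at every call site.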


I also found this package: https://github.com/sbyrnes321/numericalunits

Which uses the interesting approach of setting all "units" to random floats: "A complete set of independent base units (meters, kilograms, seconds, coulombs, kelvins) are defined as randomly-chosen positive floating-point numbers. All other units and constants are defined in terms of those. In a dimensionally-correct calculation, the units all cancel out, so the final answer is deterministic, not random. In a dimensionally-incorrect calculation, there will be random factors causing a randomly-varying final answer."

Which presumably is done to avoid overhead from carrying around units with each number, but forces you to run calculations multiple times (with different random seeds) to verify the result is correct.
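The trick can be sketched in a few lines of plain Python; this is an illustration of the idea, not the package's actual API:

```python
import random

def reset_units(seed=None):
    # Assign random positive floats to the base units, as numericalunits does.
    global m, s, kg
    rng = random.Random(seed)
    m = rng.uniform(0.1, 10.0)   # meter
    s = rng.uniform(0.1, 10.0)   # second
    kg = rng.uniform(0.1, 10.0)  # kilogram

reset_units(seed=1)
good_1 = (3.0 * m / s) * (s / m)  # units cancel out -> approximately 3.0
bad_1 = 3.0 * m + 2.0 * s         # dimensionally wrong: adds meters to seconds

reset_units(seed=2)
good_2 = (3.0 * m / s) * (s / m)
bad_2 = 3.0 * m + 2.0 * s

# good_1 and good_2 agree (up to float rounding); bad_1 and bad_2 differ,
# exposing the dimensional error.
```

Running the same calculation under two seeds and comparing is exactly the "run it multiple times to verify" workflow described above.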



Sounds like a fun hack, I encourage you to give it a try and then never EVER use it in real code :P


Not to undermine author’s efforts, but I feel we as an industry took a wrong turn somewhere. An entire book on what essentially is a single Git command is just insane.


> I feel we as an industry took a wrong turn somewhere

Git is a powerful yet complex (and sometimes confusing) tool. But imho people who use Git every single day and yet aren't willing to invest some time to learn the fundamentals properly (because it's "just" a VCS) are the real issue here. It's especially difficult for people coming from other VCSs (like SVN or Mercurial). I was there once. I would probably still have the same attitude if I hadn't won a 2-day Git workshop a few years ago, which completely changed my mental model and made "advanced features" appear quite simple.

This fact (people learning commit/push/pull and moving on) creates the market for such books. "You don't want to spend 2 days on (re-)learning Git? Here's one major feature explained that you should know".


You could be saying one of two things here:

1. "The average dev these days is so terribly uninformed, they're willing to buy an entire book to explain what, to me, is a simple concept. I feel like devs these days aren't as good."

Or:

2. "Our tools these days are so complex that the average dev needs an entire book just to understand a single command. I feel like our tools are too complicated."

You might want to work on being clearer about what you mean in the future, or else risk coming off as rude. I thought you meant #1 when I first read your statement, but, after rereading your statement, I'll give you the benefit of the doubt and assume you meant #2.

To respond to point #2: imagine explaining a complex bash command to someone. If they mostly used bash to change directories and list files, you might need to explain pipes, buffers, and the unix philosophy before they'd really understand it. As someone in this thread mentioned, they mostly use git for push/pull/merge and that's not uncommon. Most devs need more background about how git works behind the scenes in order to really get the benefits of something like git rebase. I happen to be one of those devs and I'm glad a book like this exists.


Hey, Pascal here, the author.

I understand this reaction. And just like you've pointed out, it's actually not a _whole_ book about a single command.

In order to get a good understanding of rebasing, I believe it's good to have a solid foundation of how git works, which is pretty much what the first half of the book covers.

I could've left that out and only talked about rebasing without going into all the other topics, but then someone who isn't experienced with the internals of Git would have a hard time following what's going on.


> ...weird design choices (e.g. branching is bad)

What bothers you in particular?


> There's nothing wrong with having zillions of them...

There's nothing wrong until something goes wrong, and now you're royally screwed. With a zillion dependencies you are at the mercy of a zillion maintainers, and none of them has any obligation to you. They can break backwards compatibility in patch releases, introduce subtle behavior changes, steer the project in an unexpected direction or abandon it altogether.


I’m a bit torn on this. I have most of my experience in the .NET ecosystem, where dependencies are a lot more manageable. However, if something breaks, you’re screwed a lot harder, because it’s not so easy to replace a large library, and there are very likely fewer well-maintained alternatives than there would be on NPM.

In total, I find it hard to deny how productive the NPM ecosystem can be, despite my philosophical objections to the way the community is run. Am I crazy here?


You aren't alone in this. The Node/NPM/JS scene is churning out code and innovations like there's no tomorrow, that's something to admire.

What I feel they are missing is a community process to consolidate things. You don't need three generations of ten incompatible solutions for a given problem - after some iterations, things should consolidate into one or two more or less standardized libs that don't break existing code at every damn point release.


> You aren't alone in this. The Node/NPM/JS scene is churning out code and innovations like there's no tomorrow, that's something to admire.

I don't find churning out code admirable, and I also don't think I've seen any true innovation come out of the NPM scene (bar innovation in the browser/JS space itself, which I think isn't a good measure as it's mostly just working around limitations that shouldn't be there in the first place).


That goes in the direction of my thinking. I am concerned about transitive security issues. It is impossible to check node dependencies into version control (size/binaries). They have a lock file to pin versions, but dependencies that are downloaded upon each build are not reproducible from my point of view. With Go, it's easy to vendor and check them in, and it's also straightforward to review them. There have been examples of targeted attacks using npm packages, and that is something I am very concerned about.

People move billions with a node.js application we develop and the company will eventually be liable if the system is compromised through a targeted attack.

On a different note, I think the ecosystem moves too fast; packages and versions are getting deprecated and new ones released constantly. I have the feeling that the whole ecosystem is targeted towards building small MVP apps, not running a long-term business on it. Maybe I am too harsh here, but that is a frustration that has been growing for years now. I am happy to be proven wrong.


Not a huge fan of node or anything but npm lock files do pin to a hash. Also in commercial world you're going to be pulling through nexus or some other cache to reduce bandwidth use and developer downtime.

Are there other reproducibility concerns I should be worrying about? Are you thinking of npm modules with native code, or ones that (this does happen!) actively pull other stuff during build? Most of those do their own pinning, but I agree the whole thing is messy.

