smueller1234's comments

It's actually quite likely something else (unless it's just an excuse to reap short-term savings): in a company large enough, deciding to shift staffing from one area to another is hard. As an executive with thousands of staff, you can tell your management team to each cough up a certain number of (somewhat) suitably qualified people. But again, if the company is large enough, incentives diverge, so you don't necessarily end up with the top talent you thought you needed for your big new thing.

An "easy" solution is to do a layoff, then open roles elsewhere, allowing for selection.

It's common practice across the large companies in the industry.

(Not speaking for my employer)


Multiple types of TPUs.

(I work for Google, but the above is public information.)


Former Perl language contributor here. A sibling comment to this already pointed out that you must use strict mode with Perl to retain your well-being.

The two languages certainly both have their terrible warts. On the implicit conversion gotchas, though, I think JS is actually markedly worse. Perl has polymorphic values but (for its basic types) somewhat typed operators (e.g. "eq" for strings, "==" for numbers). JS has both implicit value type conversions and overloaded operators, which leads to an unholy, nondeterministic mess.
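To make that concrete, here's a minimal sketch (mine, not from the thread; the JS lines are shown as comments for contrast):

    use strict;
    use warnings;

    # Perl: the value is polymorphic, but the operator picks the type.
    my $x = "10";
    print "numeric match\n" if $x == 10.0;    # true: "==" compares numerically
    print "string match\n"  if $x eq "10.0";  # false: "eq" compares as strings

    # JS, by contrast, overloads "==" itself, so the coercion rules
    # depend on both operand types:
    #   "10" == 10    // true  (string coerced to number)
    #   [] == ""      // true  (array coerced to a primitive)
    #   [] == ![]     // true  (infamously)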


Many moons ago, I made a case for making strict mode the default in Perl. We settled on the current backwards compatibility compromise, which is that breaking changes are hidden behind a minimum version toggle:

E.g., putting "use v5.14.0;" or similar at the top of your file (or compilation unit/scope) will indeed turn on strict mode for you, along with a number of added features.

At the time, auto-enabling warnings as well was considered unacceptable because, technically, using the warnings pragma anywhere had some edge-case action at a distance. This was remedied in a later release, after I was no longer involved in language development, and as of a more recent version, warnings are also part of the standard version import.
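A minimal sketch of how the version toggle behaves (version numbers per the Perl release history):

    use v5.14;    # implies "use strict" and enables the 5.14 feature bundle
    say "strict mode and 'say' are both on here";

    # As of Perl 5.36, the version import enables warnings too:
    #   use v5.36;    # strict + warnings + the 5.36 feature bundle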

I imagine you (TheDauthi) already know that, though.


It's not an idle use case by the way. mhx wrote and maintains a library that provides important backwards compatibility for native (typically C based) extensions for Perl across decades of language releases.


No real doubt left here that this guy is 100% serious, so thanks for the explanation.


See my response to a sibling of the comment you're responding to. The library had shocking code quality issues. It's unlikely that they're all peachy now.

The other side of this is that while Pieter's writing was marketing genius, it also woefully understated the complexity of any practical use case. The way I tried to summarize it to folks who were keen to try ZeroMQ back then: start at the back of the book with the most complex example, because that's by far the simplest setup you could hope to end up with once you start thinking about putting something into production. And everything leading up to that, a book no less, was exclusively educational/toy use cases.


ZeroMQ will have changed a lot since then, but some time in the 2010s I prototyped a system using it (which was going to be a major production system at a large tech company) and hit weird, unexpected blocking issues. To debug, I sat down to read a bunch of the ZeroMQ code, only to realize that it was using assert() to handle wire protocol errors (unrelated to the blocking bug).

I've never dropped a piece of software as quickly as that.


More or less my experience as well: asserting on bad user configuration, asserting on OS errors that weren't in a particular list. I followed their recommendation of having a "small, simple, reliable" broker, and it kept crashing on asserts in the library at the worst times.


"Before I built a wall I’d ask to know What I was walling in or walling out, And to whom I was like to give offense."

Only poem I can cite by heart a quarter of a century after spending time with it in school.


I haven't ever used a 3D printer. But your comment made me realize that if PrusaSlicer is based on Slic3r, it's actually also using software that I wrote many, many years ago.

That's another side of open source: if you don't rely on it to make a living (though it did help in getting my first job as a developer!), there's that pure joy in seeing your software get picked up and used by others. This little discovery made my day.


Awesome story! Yes, indeed PrusaSlicer is based on Slic3r :)


Apologies for nitpicking, but n^m doesn't mean m loops (i.e. n^1000 doesn't mean 1000 passes): that would be mn (1000n in your example). I think your intuition argument kind of breaks down there.
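To make the difference concrete, a hypothetical sketch (not from the thread):

    use strict;
    use warnings;

    my $n = 10;

    # 1000 sequential passes over the input: O(1000 * n), still linear in n.
    for my $pass (1 .. 1000) {
        for my $i (1 .. $n) { }    # one full pass
    }

    # 1000 *nested* loops would be O(n ** 1000): each level multiplies the
    # total work by n. Written recursively, since nobody types 1000 "for"s:
    sub nested {
        my ($depth) = @_;
        return if $depth == 0;
        nested($depth - 1) for 1 .. $n;
    }
    nested(3);    # $n ** 3 leaf iterations; nested(1000) would be $n ** 1000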


I meant to say 1000 nested loops. You're right to call me out on it though, as that is a huge difference in meaning.

I still think it would be bizarre to have to nest loops n layers deep, where n-1 layers is not enough but n layers is sufficient, for really large n. What extra information would you get on the 1000th nested loop that you didn't get from the first 999?

Of course, there is nothing formal about this; it just feels like it would be wrong to me (and hence, personally, I would consider it the most interesting result). Of course, gut feelings don't really count for much, and I have nothing more than that.

I suppose my intuition is that increasing the exponent gives diminishing returns on how much more power you really get, so it doesn't make sense for problems to sit in the n^1000 range. My gut feeling is that they should be either easier or harder. I certainly can't think of very many non-exponential algorithms in the > n^50 range.


Not 1000 iterations of 1 loop, but 1000 loops nested inside each other.

