
I would recommend the full video on Youtube. https://www.youtube.com/watch?v=cEg8cOx7UZk

Another fascinating bit is when he describes the flat management structure NVDA has.

He, as CEO, has 58 direct reports and no scheduled one-on-ones. Feedback is given constantly (up and down).


No 1:1s sounds miserable. I don't want to give or receive feedback "constantly"; I have other things to do constantly. I want to have a set periodic time where I know I will not be interrupted by my manager "sharing feedback", and will not be interrupting them with my feedback.

I suspect that in practice, relatively little feedback is being shared either up or down within such a structure, as people are busy working and there's no set time reminding them to think through what feedback they have.


1:1s are okay when you're an IC and have them with your manager. The problem is the flip side: when a manager has many ICs, it doesn't scale. Imagine Jensen having 58 recurring 1:1s!

In fact I see this with my coworkers, especially PMs, who have their schedules full of 1:1s daily. No doubt there's useful work that gets done in them _sometimes_, but I have doubts about their efficiency over the long term with cross-functionals. But even just focusing on 1:1s between managers and reports, I'd prefer nixing them in favor of a flatter structure, and using office hours/one-offs when needed.


Yes, but this is (IMO) just one of the important reasons that one manager should not have 58 direct reports.

Now, maybe this is fine for someone who manages other executives, where the primary touch points can be something like quarterly budget and strategy reports (I honestly have no idea what people like this do day to day, so I'm totally guessing on this...).

But out near the "leaves" of an org, managers exist to support their reports who are doing work day to day, and they can't do that if they have too many reports to spare any time for 1:1s with them.

I think PMs having a bunch of 1:1s, likely consisting mostly of status updates that could be done asynchronously, is a different problem, and one we likely agree about.

But managers should have few enough reports that they're able to support them. If their reports want to cancel their 1:1s because they don't have anything to discuss, that's fine, but being unable to get on the schedule regularly is a problem, IMO.


There are almost certainly unwritten reporting lines at nvidia. No one can be an effective manager with 58 direct reports.


Yep, exactly.


> Feedback is given constantly (up and down).

If you don't make time for things, they rarely happen unless people are particularly fired up about them. I don't even know who my current manager is to reach out to. My coworkers don't unit test until reminded on the PR. I honestly forget to smoke-test until called out on it. So unless your culture is about feedback and everyone truly embodies that and is on board, it's not gonna happen.


Some of these seem a bit far-fetched and out of the norm. Not knowing your manager - do you even know what team you're on?

Engineering culture dictates unit and other testing much more strongly than constant human feedback does. It's also easy to add automated lint and coverage checks to your PRs, and to create a documented process for smoke tests, etc.


I know my team and project manager. I was sent the contact of my onboarding buddy, who is on my team. In a matrix organization, it's common not to have any day-to-day contact with your manager. In my current organization, I have no idea who they are. I'm sure if I signed into the HR help portal and looked up my profile it'd say who my manager is, but I've had no interaction with them. I just got an email containing my year-end evaluation, no conversation. Maybe their name was in the email, maybe it was just the app that sent it.


It hasn't been for years now. The systems get a unique password from the factory, and it's on the tag.


The guy you are replying to was employee 2 or 3 at Amazon, depending on how you count.

There were a ton of scripts written in Perl that kept everything running, for sure. The website code was C/C++, as mentioned elsewhere.


The website was written in C for NSAPI; obidos was the name. If you see current URLs that contain /gp/, that is from when they transitioned to the Java backend. GP stands for Garupa.


Technically, it was C++ limited to a tiny subset of C++ features. Shel wasn't so keen on all that fancy C++ stuff but thought there was a role for "objects".


I believe it's actually called Gurupa, and named after the island on the Amazon just downriver of the delta. The only place I know that refers to the area as Garupa is Selfridge's survey at https://www.history.navy.mil/research/library/online-reading... — and even so inconsistently.


He is wrong and is corrected later on in that thread. At the time, AMZN was mostly DEC Digital Unix. The DNS and mail servers were Linux in '97. AMZN started with SUNW (pre-'97), but switched to Digital Unix because it was 64-bit and could fit the catalog into RAM.


This tweet:

Peter Vosshall https://mobile.twitter.com/PeterVosshall/status/134769756024...

No. The entire fleet was Compaq/Digital Tru64 Alpha servers in 2000 (and '99, and '98). Amazon did use Sun servers in the earliest days but a bad experience with Sun support caused us to switch vendors.

So, the title is wrong.


Well, like every internet company in '99, there were Sun servers. There was a lone Sun workstation that printed some of the shipping docs in LaTeX. I believe that was left by Paul Barton Davis. By early '97, the website (Netscape) and database (Oracle) ran on DEC Alpha hardware. Peter is wrong about switching to Digital Unix because Sun had bad support; the switch happened for 64-bit reasons.

There was almost a 24-hour outage of amazon.com because Digital Unix's AdvFS kept eating the Oracle DB files. Lots of crappy operating systems in those days.


I worked at a company that thought they had bought their way to reliability with Sun, Oracle, and NetApp, but we had a three-day-long outage when some internal NetApp kernel thing overflowed on filers with more than 2^31 inodes. Later the same company had a giant data-loss outage when the hot-spare Veritas head, an expensive Sun machine, decided it was the master while the old master also thought so, and together they trashed the entire Oracle database.

Both hardware and software in those days were, largely, steaming piles of shit. I have a feeling that many people who express nostalgia for those old Unix systems were probably 4 years old at the relevant time. Actually living through it wasn't any fun. Linux saved us from all that.


> Linux saved us from all that.

My fingers still habitually run `sync` when they're idling because of my innumerable experiences with filesystem corruption and data loss on Linux during the 1990s. There were just too many bugs that caused corruption (memory or disk) or crashes under heavy load or niche cases, and your best bet at preserving your data was to minimize the window when the disk could be in a stale or, especially, inconsistent state. ext3, which implemented a journal and (modulo bugs) guaranteed constant consistent disk state, didn't come until 2001. XFS was ported to Linux also in 2001, though it was extremely unreliable (on Linux, at least) for several more years.

Of course, if you were mostly only serving read-only data via HTTP or FTP, or otherwise running typical 90s websites (Perl CGI, PHP, etc., with intrinsically resilient write patterns[1]), then Linux rocked. Mostly that was because of ergonomics and accessibility (cost, complexity); the toolchain and development environment (GNU userland, distribution binary packages, etc.) were the bigger reasons for that. Travails with commercial corporate software weren't very common because it was uncommon for vendors to port products to Linux and uncommon for people to bother running them, especially in scenarios where traditional Unix systems were used.

[1] Using something like GDBM was begging for unrecoverable corruption. Ironically, MySQL was fairly stable given the nature of usage patterns back then and their interaction with Linux' weak spots.


Multiple folks on Twitter hinted at inaccuracies in Dan Rose’s recollection of events at Amazon. In fact, when you mentioned Paul Davis, I realized I was looking through the comments to see him [1] point out these inaccuracies since he is known to hang out here on HN.

1: https://news.ycombinator.com/user?id=pauldavisthe1st


I can't point out inaccuracies for that time period, since I left in January of 1996.


We had 2-3 Sun boxes at Amazon Germany in '99/2000, but I'll be honest, it was a pet project of the local IT director. Even having a different shell on those annoyed me. Compaq/DEC Alpha was used for customer service, fulfillment, etc.


It didn't use LaTeX anymore by the time I left in early 96. I had already templated the PDF that LaTeX used to generate, and written C code to instantiate the template for a given shipment.


Cool. I don't think I understood that aspect back then. I was tasked with looking at converting the Sun box sitting in the SoDo DC to something else. I logged in and found LaTeX but didn't understand how it all fit together.


But SPARC v9 (UltraSPARC) was already 64-bit (44-bit effective) at the time. There must have been more to it than just 64-bit addressing.


There is a field of study around "mindset". The fairly famous book is called Mindset by Carol Dweck.

This infographic summarizes the ideas very well.

http://i.imgur.com/HGBY1tW.png


My attitude had always been "if it is grades people want, they will get it", i.e. get good grades that will help me move up the ladder academically, while keeping in mind that the grades are not going to help me in the real world.

This pushed me to learn continuously because at the end of the day, it is what you know and what you can do that counts rather than your grades.


A (the?) big reason Microsoft got the IBM contract was that Bill Gates's mom (Mary Gates) was on the board of United Way with the CEO of IBM at the time (John Opel).

Bill Gates's parents are impressive in their own right.

http://en.wikipedia.org/wiki/Mary_Maxwell_Gates "In 1980, she discussed with John Opel, a fellow committee member who was the chairman of the International Business Machines Corporation, her son's company. Mr. Opel, by some accounts, mentioned Mrs. Gates to other I.B.M. executives.

A few weeks later, I.B.M. took a chance by hiring Microsoft, then a small software firm, to develop an operating system for its first personal computer."


IBM approached Microsoft to provide various software for their PC. Microsoft said they could provide BASIC, but not an Operating System, and helpfully directed IBM to Digital Research. Digital Research passed on the onerous terms (close to free).

IBM went back to Microsoft and said "now what?". Microsoft, fearing they would lose the BASIC deal as well, then purchased the rights to QDOS or 86-DOS (a CP/M clone) for $50,000 from Seattle Computer Products, and did the deal with IBM, agreeing to the onerous terms that Digital Research wouldn't. Which as we all know didn't actually turn out to be very onerous at all.

This is well documented in Triumph of the Nerds.

Mary Maxwell Gates may have provided an introduction, but it was the chutzpah, genius and desperation of Bill Gates that got the deal done.


Per Walter Isaacson [1], the influence of Mary Gates was more to endorse and confirm the deal than to serve as an introduction. During a business trip, Mary Gates mentioned to John Opel that "My son is doing business with your company", but he answered that he wasn't aware of any deal and had never heard of any "Micro-soft". On her return, she joked with Bill that his deals with IBM must not be that important.

Several weeks later, when the deal was ready to be signed, IBM execs ran the agreement by John Opel, who then said, "Oh, this must be Mary Gates' son. She's great. Yes, go ahead."

[1] http://smile.amazon.com/The-Innovators-Hackers-Geniuses-Revo...


Misleading article. The news is that Kubernetes is going to support CoreOS's rkt. It already supports Docker.


Disclosure: I work at Google and am a co-founder of the Kubernetes project.

Correct. We already support Docker and plan to indefinitely. This is us extending support to rkt/appc also.


Thank you. Pretty shoddy on Wired's part.

We've updated the title. We can update it again if there's a better one.


I think the title should be "Kubernetes is going to support X" where X is either rkt or appc. CoreOS is a company, developing an open container spec (appc). rkt is CoreOS's implementation of that spec, but other implementations are possible. Currently X=CoreOS, which is most likely to imply CoreOS's operating system, and that isn't what the article is about.

(An analogy: appc=JVM, rkt=OpenJDK, CoreOS=Sun)


Ok, sure. Is it accurate now?


I believe so - thanks!


Brings back old memories of TheDraw in the DOS days. Made some ANSI art back then for some pirate boards in the early '90s. Someone archived it and I was able to find one of the ones I did.

http://imgur.com/tscJzzs

If you're interested in looking at amazing ones, look for Jed from the ACID group (or others?). His stuff was amazing. I wonder what he is doing now.



I'm not familiar with HAProxy, but I am with other load balancers. I've been looking into it as a replacement for the hardware load balancers we use.

Why do you have to reload haproxy? When you update the configuration?

I reload nginx all the time (nginx -s reload) and I'm not sure if that is a true zero-downtime reload either.

Interesting hack nonetheless (stopping SYNs).


You should read that, it's very interesting:

http://nginx.org/en/docs/control.html


It is. Nginx launches new workers while the ones with the old config are still running; the master directs new traffic to the new workers and keeps the old ones around until all in-flight requests have been handled. Once an old worker has finished (or times out), the master process shuts it down. It even works for updating the nginx binary.

It is a very useful approach and I use it all the time as well.

I implemented the same thing in Node by using the cluster API [1].

[1]: http://joseoncode.com/2015/01/18/reloading-node-with-no-down...
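
Roughly, the shape of it is this (a minimal sketch, not the code from the post; the worker count, the port, and using SIGHUP as the reload trigger are arbitrary choices for illustration):

    // Minimal sketch of a zero-downtime reload with Node's cluster API.
    // WORKER_COUNT, the port, and the SIGHUP trigger are made up for illustration.
    import cluster from 'cluster';
    import http from 'http';

    const WORKER_COUNT = 2;

    if (cluster.isMaster) {
      for (let i = 0; i < WORKER_COUNT; i++) cluster.fork();

      process.on('SIGHUP', () => {                  // "reload": roll the workers
        for (const id of Object.keys(cluster.workers ?? {})) {
          const oldWorker = cluster.workers![id]!;
          const newWorker = cluster.fork();         // new worker picks up fresh code/config
          newWorker.once('listening', () => {
            oldWorker.disconnect();                 // stop accepting, drain, then exit
          });
        }
      });
    } else {
      // Workers share the listening socket; the master hands out connections.
      http.createServer((_req, res) => res.end('ok\n')).listen(8080);
    }

The master keeps serving the whole time, and each old worker only exits once its in-flight requests are done, which is essentially the same dance nginx's master/worker reload does.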


Right, the configuration is loaded into memory, and you need to reload it to get any changes in.

I believe its API lets you enable/disable existing configured servers but not dynamically add or remove them.
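
Something like this, over the stats socket (just a sketch; it assumes a "stats socket /var/run/haproxy.sock level admin" line in haproxy.cfg, and the socket path and app_backend/web1 names are made up, not from the article):

    // Hedged sketch: toggling an already-configured server via HAProxy's stats socket.
    // Socket path and backend/server names are assumptions for illustration.
    import net from 'net';

    function haproxyCmd(cmd: string): void {
      const sock = net.connect({ path: '/var/run/haproxy.sock' });
      sock.on('connect', () => sock.end(cmd + '\n'));       // one command per connection
      sock.on('data', (buf) => process.stdout.write(buf));  // print HAProxy's reply, if any
    }

    haproxyCmd('disable server app_backend/web1');  // take a known server out of rotation
    haproxyCmd('enable server app_backend/web1');   // and put it back

Adding or removing servers, as you say, still means editing the config and doing a reload.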


Reloading HAProxy is part of the SmartStack architecture, IIRC. SmartStack/synapse polls, say, ZooKeeper or Docker or whatever, generates a new HAProxy configuration, and then reloads it.
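
The loop is roughly this shape (not synapse itself; the backend list, the config fragment, and the paths here are placeholders):

    // Rough shape of the poll -> render -> reload loop described above.
    // Backend list, config path, pid file, and interval are placeholders.
    import { execFile } from 'child_process';
    import { readFileSync, writeFileSync } from 'fs';

    function reconcile(): void {
      // Pretend this list came from ZooKeeper/Docker; synapse watches a registry here.
      const backends = ['10.0.0.1:8080', '10.0.0.2:8080'];

      // Only the backend stanza is shown; a real config also has global/defaults/frontend.
      const cfg = [
        'backend app',
        ...backends.map((b, i) => `    server srv${i} ${b} check`),
      ].join('\n');
      writeFileSync('/etc/haproxy/haproxy.cfg', cfg);

      // -sf asks the new haproxy process to tell the old pid to finish up and exit.
      const oldPid = readFileSync('/var/run/haproxy.pid', 'utf8').trim();
      execFile('haproxy', ['-f', '/etc/haproxy/haproxy.cfg',
                           '-p', '/var/run/haproxy.pid',
                           '-sf', oldPid]);
    }

    setInterval(reconcile, 10_000);  // poll every 10s; the interval is arbitrary

That reload step is exactly where the article's SYN-dropping trick comes in.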


That's an important detail, as it explains why the tests aren't entirely flawed: testing 10 reloads every second really distorts the numbers, assuming a reload every hour, or every few hours, is more realistic. But if reloads are part of what amounts to doing live testing on a large part of live traffic, then the exercise makes more sense.

And to be sure, being able to do ten reloads every second with few ill effects enables different, more nimble systems engineering.

But if we assume 2000 requests per second per box, fighting ~100 reset connections a day (assuming two HAProxy reconfigs) doesn't really seem worth the effort; packet loss and other outages would probably(?) dominate anyway.
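
(Unpacking that arithmetic: ~100 resets over two reloads is ~50 per reload, and at 2000 requests/second that only requires the bad window around each reload to be about 50/2000 s ≈ 25 ms. Back-of-the-envelope from the figures above, not a measurement.)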


> Testing 10 reloads every second really distorts the numbers, assuming a reload every hour, or every few hours is more realistic.

It depends on what you do. I've seen shops (successfully and, IMO, correctly) scaling AWS instances for services with a threshold of every fifteen minutes, and I've seen Mesos clusters dynamically spinning up web instances much more nimbly than that (think every two minutes under spiky load; the instances would come up in five seconds, so it didn't hurt to down them).


Well, once every 120 seconds is still quite a leap from 10 every second...


Sure, but if it doesn't work there, I don't trust it to work if, say, a piece of my scheduler goes nuts and suddenly is upping and downing containers every few seconds. The problem remains, it's just not as acute and still must be fixed.

