Hacker News

Currently work at Google, obviously speaking for myself and not my employer.

In my experience - yes, this moral decay is a necessary part of the current corporate world. Or any corporate world. Or really, any world without the possibility of failure baked into it.

I've seen a lot of idiotic decisions made at Google, many of which have been complained about on Hacker News, many more of which are hidden causes of things that are complained about on Hacker News. In every case, when I looked at the chain of decisions that led to things being the way they are, every single decision was rational, given the information that all participants had at the time. There's no vast conspiracy dedicated to turning Google evil, no influx of incompetent new PMs & designers. Some of the most questionable decisions have come straight from old timers like Marissa, or even from Larry Page.

Instead, it's an information problem. Running any enterprise the size of Google or Goldman Sachs requires trading off many competing factors. To make the tradeoff, someone has to keep all that information in their head at once. There's no other way to balance competing demands; if you keep only part of the information in your head, your decision will be biased towards the part that you've loaded into your brain. If you try to spread decision making across multiple people, the decisions will be biased towards the part that the person who screams the loudest can hold in his head (which is usually a smaller subset than optimal; it takes mental effort to scream loudly).

I often see mystified posters on HN wondering why Google did something or other, and a good amount of the time, I know (but can't say) exactly why we did it. The userbase does not have all the information. Unfortunately, they don't care that they don't have all the information; they want Google to work as expected, and the fact that there may be internal systems that don't quite behave according to their mental model is irrelevant. And so the fact that decision makers make decisions based on information that users can't have becomes a liability in this case, biasing them away from what's "good" for the user.

I remember Paul Buchheit writing here, several years ago, "A system's participants don't have to be rational for the system itself to be rational", referring to market economies. I'd posit that the inverse also holds: a system with completely rational participants can still be irrational, if information flow between participants is not organized in a rational way.




I really think that you're completely missing the point in defending Google and maybe Goldman Sachs by saying that their decisions are ok because they are made rationally.

Rationality is emotionless and mechanical. It's about making a reasonable decision based on whatever information is available to you. However, rational decisions do not involve morals, culture, or feelings. This is exactly what companies like Google and Goldman Sachs are being criticized for.

When game theory is baked into your corporate culture, this is what you get. The company starts an inevitable slide from "Don't Be Evil" into "Make the Best Decision You Can With the Information You Have".

If I look down into my wallet and see no money there, and I'm hungry for lunch, and I decide to steal some money from a little old lady, that may be a perfectly rational decision to make. An outside observer may say I'm being evil, but they don't have a complete information picture about how hungry I am, or how long the line at the ATM is, or that everyone else is eating lunch so I have a duty to my shareholders to do the same.


Rationality doesn't necessarily exclude "morals, culture, or feelings". That would imply that having a rational discussion about culture, e.g. anthropology, is impossible.

Gus Levy, a former senior Goldman partner, coined the firm's then philosophy of being "long-term greedy". Taking image, "headline risk" in finance parlance, impact on recruiting, etc. into account is part of rational decision making.

What makes a seemingly terrible decision rational is usually that the time-frame invoked is too short. If a decision looks rational in the long-term but conflicts with our value system it generally means that our value system needs to be re-evaluated.


>Rationality doesn't necessarily exclude "morals, culture, or feelings". That would imply that having a rational discussion about culture, e.g. anthropology, is impossible.

No, I am not implying that: the process of making rational decisions is not the same as the process of holding rational discussions about things (and the things themselves may or may not be rational).


Rationality can serve whatever values you have. You can rationally optimize the amount of love, happiness, and fuzzy puppies in the world if you want to. You can also rationally strive to keep a company efficient and non-evil. But the bigger it gets, the harder that gets, modulo economies of scale.


Agreed. At certain scales, it no longer makes sense to give primary mover status to humans within institutions. It is important to realize that there are several mechanisms within institutions that alter the values they serve. It makes more sense to treat them as black boxes and reason about outcomes rather than intentions.


> If I look down into my wallet and see no money there, and I'm hungry for lunch, and I decide to steal some money from a little old lady, that may be a perfectly rational decision to make.

And that reasoning is the very definition of the term unethical.

See https://en.wikipedia.org/wiki/Ethics


Rational decisions correctly apply information and resources to optimize values that the decider cares about. (meta level: you can also be rational in deciding how much effort to allocate to MAKING a particular decision - deliberating or gathering more info).

Those values can include morals, culture, and feelings.

http://tvtropes.org/pmwiki/pmwiki.php/Main/StrawVulcan


You can't talk about rational decisions without asking, "across what timeframe?"

http://www.cs.utexas.edu/~EWD/transcriptions/EWD11xx/EWD1175... (search for 'buxton index')


The guy never made a moral criticism that wasn't immediately followed by "plus it will end up costing the bottom line in the long run".


You miss the point, which is that good intentions are not transitive. It is not enough that a series of acts be individually kind. The interfaces between the acts must provide end-to-end kindness, or the results may well be ghastly. In giant projects this is a hard problem to deal with.


Most decisions are made under incomplete information. So the principles that guide you in uncertainty are very important. I think it's quite rational to select principles that guide you well even under uncertainty, even if their immediate conclusions don't seem maximizing. Likewise it's rational to use principles that will be comprehensible to those observing you, so they can predict your behavior and retain trust in your decisions.

To the parent post's question of how to avoid the deterioration of institutional culture, there are two answers. Best is to align corporate equity interests with long-term interests, which are generally customer interests. That generally means not going public, as stock market attention to short-term interests is a constant distraction from long-term interests. Failing that, be led by a mutant like Buffett or Jobs, who understands the long-term interests and has the authority to ignore the short term to get long. But the availability of mutants is unpredictable, and it's really best to get the capital structured properly.


I am not denying that corporate culture takes on a life of its own, and that the system's rationality and the participants are not closely linked.

What I am asking is whether great places to work inevitably decay as the company gets large, and if that is the case, what is the point of a startup? Why not seek to make an idea that scales down rather than scales up? Is it all about personal wealth? But for those of us who want to build great businesses, in every sense of the word 'great,' how do we get around this problem?

I have my own ideas but they are largely untested.....


> What I am asking is whether great places to work inevitably decay as the company gets large

Odds are you will lose that personal connection as you go from working for "the owner" to working for a manager who reports to a VP who reports to the CEO who reports to the Board that reports to the shareholders.

But what I've seen in a lot of these "leaving" posts isn't quite that... It's more of an "the emperor has no clothes" epiphany: they go to work at a place thinking it's a thing-in-itself, then wake up one day and realize how the sausage is being made.

So GOOG isn't a post-grad research lab: It's a company that sells ads. So GS isn't the equivalent of a fee only financial planner.

They never were more than that... only the people involved thought they were something more.


What lessons can be learned from managing an open source community that can be applied to a formal corporation?


I served as a VP & President for a small non-profit club.

Two takeaways:

1) You don't really have any power. You need to lead by example and empower other people rather than command.

2) The personality required to establish a project is often different from the personality required to keep it going.


But on top of that, I wonder how much management could be eliminated. Management is, in most businesses, fundamentally a communication infrastructure. How much of it could be replaced by implementing IT in ways that work for FOSS projects?

My thinking is to have a relatively small group of high level managers, a group of project managers, and an HR department, and eliminate all middle management. I think teams should have rotating leadership but coordination should take place in ways which include both the upper management and folks on the floor directly using things like email lists.

Maybe this is a pipe dream. But maybe it can be made to work.....


Rotating/randomized leadership is called "sortition", and it has been known since classical antiquity as a solution to the corruption and inefficiency problems of hierarchy and representative democracy. It also solves the problem of bizarre distortion when you need to choose a winner among 100 qualified candidates: better to pick one well-rounded winner at random than to choose purely on metrics that promote "teaching to the test".
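The mechanism is simple to sketch. A minimal Python illustration (all names and the qualification bar are hypothetical, just to show the idea): filter to the qualified pool first, then draw uniformly at random instead of ranking the whole pool on a single metric.

```python
import random

def sortition(candidates, qualified, seed=None):
    """Pick a leader at random from the qualified pool (sortition).

    Everyone who clears the qualification bar has an equal chance,
    so there is no incentive to over-optimize a ranking metric.
    """
    pool = [c for c in candidates if qualified(c)]
    if not pool:
        raise ValueError("no qualified candidates")
    return random.Random(seed).choice(pool)

# Hypothetical pool: 100 candidates, half of whom clear the bar.
candidates = [{"name": f"person{i}", "score": i} for i in range(100)]
leader = sortition(candidates, qualified=lambda c: c["score"] >= 50, seed=42)
print(leader["name"])
```

The seed parameter is only there to make the draw reproducible for demonstration; a real lottery would use a public source of randomness.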


At least at Google, middle management doesn't exist to run projects. They exist to provide career guidance, ensure people are happy, and keep them from leaving for Facebook.

This is a task that doesn't scale, because it requires knowing your reports well enough, as people, that you understand their career goals, their likes & dislikes, their strengths & weaknesses, etc., so you can steer them into the right role. It's the tech lead's job to manage the engineering half of the project, and the tech lead frequently doesn't manage any of the people involved. I've found that managers can rarely manage more than 20 people effectively, and their effectiveness usually drops off sharply after 8-10 people.

Open source projects don't face this limitation, because your way of ensuring that everyone's happy is to assume that everyone who's not happy has quit. I suppose some big companies do this too - Yahoo seems to be trying out this strategy right now - but it really doesn't go over well with the public at large, and it wastes a lot of effort spent investing in new employees.


Presumably it has something to do with external stakeholders. The more influence they get, the more they can drive the company towards short-termism and potential destruction. You either need an incisive leader, or a bootstrapped business.


On philosophical theoretic grounds:

I believe that such "decay" processes are inevitable in the long run, following organizational growth over time. Maybe advanced management theory will someday formulate the laws/principles of corporate thermodynamics. (This is not my original thought; I've read it somewhere but can't remember the source.)

On more practical grounds, a few seeming counterexamples:

- Virgin Group is a highly decentralized conglomerate of 300+ businesses all over the world, each of them mostly autonomous (though when you think of Virgin Group, only one personality springs to mind: that of Richard Branson)

- Well, Apple, of course... though some say the Apple of the 2000s, after Steve Jobs's return, is a different company entirely (in most aspects except those concerning mainly legal formalities around the corporate entity, its registration details, and logo design principles)

P.S.

So the point of a startup to a business is what birth and early childhood are to a human.

Businesses (as functional organizations) and humans (as living beings) ultimately meet the same fate, though the time-scales differ.


Virgin has always described itself as a branded VC company, not a large business.


> Unfortunately, they don't care that they don't have all the information; they want Google to work as expected,

OTOH, if they had all the information, they might set new expectations. Essentially you are describing a corporation that is failing to communicate properly.


Completely agree on the effect you are describing. I've encountered it, too. I think it can be easily countered through regular context switching, though. That is, switching your thinking over to that of an end user.



