And all of these, in terms of quality of research output, pale by about an order of magnitude in comparison to Bell Labs for much of its existence (particularly in the era from ~1940 to the mid-1980s). Bell Labs was the R&D benchmark for most improvements to IBM Research from the 1960s onward. Additionally, IBM sought out Bell Labs alumni and populated its ranks with notable management and research minds from the Bell Labs stable.
Not sure if anything can be considered successful when compared with Bell Labs. It is a very high bar to meet (given they invented, among other things, the transistor, information theory, Unix, and C).
The modern world is to a great extent a byproduct of the experiment that was Bell Labs. Some laugh and say that it was an experiment gone awry.
Bell Labs was special in that (a) they were building the largest system known to mankind at the time (an effort rivaled only by the Manhattan and Apollo projects), where they (b) couldn't use standard components but had to invent every piece themselves, from the cables to the amplifiers to satellites and lasers. Furthermore, they were (c) very special in that they helped the government on certain projects like Manhattan, got a decades-long monopoly in return, and had to make their patent portfolio publicly available. In other words, they were a semi-public organization with a very well-defined goal: building and optimizing the telephony system. Only when it became clear that a monopoly was no longer needed, due to advances in technology, did Bell lose its privileges, and with that Bell Labs started to wither.
It does seem to be a pattern that companies that have created monopolies tend to use their excess money to start exhibiting the behaviors of a public service. Namely, the research departments of Xerox, IBM, Microsoft, Google, and now Facebook all come from current or former (quasi-)monopolies.
"A monopoly like Google is different. Since it doesn't have to worry about competing with anyone, it has wider latitude to care about its workers, its products and its impact on the wider world. Google's motto—"Don't be evil"—is in part a branding ploy, but it is also characteristic of a kind of business that is successful enough to take ethics seriously without jeopardizing its own existence. In business, money is either an important thing or it is everything. Monopolists can afford to think about things other than making money; non-monopolists can't"
But Google had its culture before it got rich, didn't it?
There are also a lot of small tech companies, like 37signals and Fog Creek, that try to be ethical and treat their employees well. I don't think Thiel could argue that they are monopolies.
Nor frankly do I buy that Google is a monopoly. But that's beside the point.
> And yet it didn't work out well for the company that funded it.
I get that point. We're all interested in building great and successful companies on here, so of course it's a bummer when there is success from one perspective, but the whole thing kinda doesn't work out overall.
But I also want to ask: as a society, don't we benefit so much from advancements made in the open, through publicly shared research as well as the open source movement, that sometimes we would do well to just bask in the glory of those advancements and never mind which individual entities (financially, structurally) stuck around or not, profited or not, in producing said advancements?
It's also very clear of course that it is beneficial to look at the past in a discerning way, learn from it, make it better now and in the future. Still, there's something about the thought in the paragraph above that I wanted to bring up.
I just meant that, in the context of this post, and given the poor history of corporate research turning into corporate profits, I can understand Apple's reluctance to fund blue sky research.
I do admire a company that contributes in that way, recognizing that it's a contribution to humanity rather than investment for future profits.
>... (inventors of the mouse, GUI, and object oriented programming),
Xerox PARC had a number of notable inventions and they created the Alto computer which had a bitmapped screen with the desktop metaphor, but they did not invent the mouse or object oriented programming.
In terms of the computer mouse:
>...Independently, Douglas Engelbart at the Stanford Research Institute (now SRI International) invented his first mouse prototype in the 1960s with the assistance of his lead engineer Bill English.
Xerox failed to profit from them because their management wasn't able to identify the amazing inventions made in their company. It is similar to what they say about Tesla, "when his generation wanted electric toasters, he invented electric cars and wireless electricity"
Xerox failed to profit directly, because the Alto was designed as a souped-up minicomputer to compete with systems from DEC and IBM - a reasonable choice, because that's what business computing looked like back then.
Xerox didn't lack the commercial understanding to sell Alto+spinoffs, it lacked the understanding to realise you could build a developer ecosystem to support your hardware and make it the de facto standard.
DEC and IBM didn't understand this either. Gates and Jobs totally understood it, which is why Windows became a business standard and the Mac became the only serious business/home alternative.
But Xerox still did okay, because the use of GUI software transformed office culture and made it much more visual - which meant very steady sales of copiers and printers.
Xerox's stock price climbed steadily through the 1990s while paper remained a thing.
After the dot com crash, GUIs and screens had evolved to the point where paper became non-essential, and Xerox never entirely recovered - although you can still find a few people who print out and file all their emails.
tl;dr Xerox did very nicely indeed from Alto etc in an indirect way, for at least a decade or so.
> "And yet it didn't work out well for the company that funded it."
I'd suggest that was due to the anti-trust case against Bell more than anything else. For example, I believe AT&T were forbidden from selling Unix direct to consumers for many years, leading them to licence Unix to other entities (in the business and academic worlds).
I very much doubt it would have been adopted at such scale, especially by the startups then trying to start a workstation market, if Bell had been able to sell it at the same prices other OSes were being sold at.
Possibly, possibly not. Consider the competition Unix had at the time it was released. A cut down version could've also made inroads into the desktop market (for business users).
If it had been priced at the same level as VMS or mainframe OSes, I am not sure.
Plus, maybe the Xerox PARC attempts would have been more successful if there hadn't been a cheaper UNIX workstation as an alternative, in spite of how they managed the whole process.
> "If it would be priced at the same level of VMS or mainframe OSes, I am not sure."
Why would it have needed to be priced that high? We're talking about software here; the cost of reproduction is close to zero. It could've competed in the same market space as CP/M.
As someone who doesn't know a whole lot about Bell Labs, its history, how it worked, etc: Why and how did it produce so much great research output? What was different or special about Bell Labs? Answers as well as pointers to other material appreciated!
They had an incentive to doggedly seek IP so that they could remain as entrenched as possible. They didn't plan on actually profiting from it in most cases, they just wanted to be the first ones there so that nobody else could be.
Especially since Bell Labs sat within a state-sponsored (enforced?) monopoly until the mid-80s, so there was no direct concern about revenue. Which is not to belittle them in the least as they made outstanding organizational decisions, and had the most enviable pure research->development->production pipeline to this day.
In terms of R&D in the USA, the closest comparison is IBM (which also enjoyed a monopoly for some of its existence). MS's monopoly, with the benefit of hindsight, was feeble in comparison even to IBM's -- though they, like IBM, are absolutely one of the greats and have redeemed themselves, particularly in the past few years.
The author confused continuity of the DECISION FUNCTION with continuity of the OUTCOME CURVE.
In other words, an algorithm such as "keep trying to make a decision until you notice that t > C, after which always pick the left one" will in fact NOT have a continuous outcome curve, despite the decision function being continuous.
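To make that concrete, here's a minimal Python sketch (my own illustration, not the author's; the tie point at 0.5 and the cutoff C = 10 are arbitrary choices for the demo). Deliberation time is modeled as blowing up near the tie, so the timeout rule forces "left" there, and the outcome jumps as you sweep the input, even though every quantity the algorithm computes varies continuously in the input:

    # Hypothetical model: time to decide grows without bound near the
    # tie point, so the timeout rule ("once t > C, always pick left")
    # takes over in a whole neighborhood of the tie.
    def outcome(x, C=10.0, tie=0.5):
        gap = abs(x - tie)
        if gap == 0:
            return "left"            # exact tie: timeout fires, default to left
        time_to_decide = 1.0 / gap   # continuous in x, but unbounded near the tie
        if time_to_decide > C:
            return "left"            # out of time: always pick left
        return "left" if x < tie else "right"

    # Sweeping x across the cutoff: the outcome flips straight from "left"
    # (timeout region) to "right" -- a jump in the outcome curve, even though
    # time_to_decide is a continuous function of x.
    for x in (0.55, 0.58, 0.59, 0.60, 0.61, 0.65):
        print(x, outcome(x))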
Even the best scientists are not machines. It is naïve to think that the best papers are flawless. (I used to think so when starting grad school.) They are not.
I should admit that after writing this, I went back and forth with him over email and he convinced me he was right, after all.
That continuity assumption he made is there in Newtonian mechanics and other very widely accepted models. Something just feels off about the whole assumption; maybe I've spent too much time with digital computers. Those models also have a hard time explaining the non-reversibility of time.