Of course! What researcher would want to work for a company that prohibits scientific publications?
I've always been amazed by their attitude.
Look at Microsoft Research, and their enormous scientific output over the years. IBM and Google look bleak by comparison, and Apple is not even on the chart.
If you consider the number of papers published over the lifetime of IBM, and the fact that IBM Research does work in all levels of computing (materials science -> software), Microsoft's research output looks minuscule in comparison. But if you're considering only CS research, then you might be right.
On a similar note, Intel is another company that is very active in research and has published a significant number of peer-reviewed papers. Samsung Research is yet another; they have an amazing presence at circuit design conferences, for instance.
And all of these, in terms of quality of research output, pale by about an order of magnitude in comparison to Bell Labs for much of its existence (particularly in the era from ~1940 to the mid-1980s). Bell Labs was the R&D benchmark that drove most improvements at IBM Research from the 1960s onward, and IBM sought out Bell Labs alumni, populating its ranks with notable management and research minds from that stable.
Not sure if anything can be considered successful when compared with Bell Labs. It is a very high bar to meet (given they more or less invented the transistor, information theory, Unix, and C).
The modern world is to a great extent a byproduct of the experiment that was Bell Labs. Some laugh and say it was an experiment gone awry.
Bell Labs was special in that (a) they were building the largest system known to mankind at the time (an effort rivaled only by the Manhattan and Apollo projects), where they (b) couldn't use standard components but had to invent every piece themselves, from the cables to the amplifiers to satellites and lasers. Furthermore, they were (c) very special in that they helped the government on certain projects like the Manhattan Project, got a decades-long monopoly in return, and had to make their patent portfolio publicly available. In other words, they were a semi-public organization with a very well-defined goal: building and optimizing the telephony system. Only when it became clear that a monopoly was no longer needed, due to advances in technology, did Bell lose its privileges, and with that Bell Labs started to wither.
It does seem to be a pattern that companies that have created monopolies tend to use their excess money to start behaving like a public service. Namely, the research departments of Xerox, IBM, Microsoft, Google, and now Facebook all come from current or former (quasi-)monopolies.
"A monopoly like Google is different. Since it doesn't have to worry about competing with anyone, it has wider latitude to care about its workers, its products and its impact on the wider world. Google's motto—"Don't be evil"—is in part a branding ploy, but it is also characteristic of a kind of business that is successful enough to take ethics seriously without jeopardizing its own existence. In business, money is either an important thing or it is everything. Monopolists can afford to think about things other than making money; non-monopolists can't"
But Google had its culture before it got rich, didn't it?
There are also a lot of small tech companies like 37signals and Fog Creek that try to be ethical and treat their employees well. I don't think Thiel could argue that they are monopolies.
Nor frankly do I buy that Google is a monopoly. But that's beside the point.
> And yet it didn't work out well for the company that funded it.
I get that point. We're all interested in building great and successful companies on here, so of course it's a bummer when something succeeds from one perspective but the whole thing kinda doesn't work out overall.
But I also want to ask: as a society, don't we benefit so much from advancements made in the open, through publicly shared research as well as the open source movement, that sometimes we would do well to just bask in the glory of those advancements, and never mind which individual entities (financially, structurally) stuck around or not, profited or not, in producing them?
It's also very clear of course that it is beneficial to look at the past in a discerning way, learn from it, make it better now and in the future. Still, there's something about the thought in the paragraph above that I wanted to bring up.
I just meant that, in the context of this post, and given the poor history of corporate research turning into corporate profits, I can understand Apple's reluctance to fund blue sky research.
I do admire a company that contributes in that way, recognizing that it's a contribution to humanity rather than investment for future profits.
>... (inventors of the mouse, GUI, and object oriented programming),
Xerox PARC had a number of notable inventions, and they created the Alto computer, which had a bitmapped screen and the desktop metaphor, but they did not invent the mouse or object-oriented programming.
In terms of the computer mouse:
>...Independently, Douglas Engelbart at the Stanford Research Institute (now SRI International) invented his first mouse prototype in the 1960s with the assistance of his lead engineer Bill English.
Xerox failed to profit from them because their management wasn't able to identify the amazing inventions made in their company. It is similar to what they say about Tesla, "when his generation wanted electric toasters, he invented electric cars and wireless electricity"
Xerox failed to profit directly, because the Alto was designed as a hyped-up minicomputer to compete with systems from DEC and IBM - a reasonable choice, because that's what business computing looked like back then.
Xerox didn't lack the commercial understanding to sell Alto+spinoffs, it lacked the understanding to realise you could build a developer ecosystem to support your hardware and make it the de facto standard.
DEC and IBM didn't understand this either. Gates and Jobs totally understood it, which is why Windows became a business standard and the Mac became the only serious business/home alternative.
But Xerox still did okay, because the use of GUI software transformed office culture and made it much more visual - which meant very steady sales of copiers and printers.
Xerox's stock price climbed steadily through the 1990s while paper remained a thing.
After the dot com crash, GUIs and screens had evolved to the point where paper became non-essential, and Xerox never entirely recovered - although you can still find a few people who print out and file all their emails.
tl;dr Xerox did very nicely indeed from Alto etc in an indirect way, for at least a decade or so.
> "And yet it didn't work out well for the company that funded it."
I'd suggest that was due to the anti-trust case against Bell more than anything else. For example, I believe AT&T were forbidden from selling Unix directly to consumers for many years, leading them to license Unix to other entities (in the business and academic worlds).
I very much doubt it would have been adopted at such scale, especially by the startups then trying to build a workstation market, if Bell could have sold it at the same prices other OSes were being sold at.
Possibly, possibly not. Consider the competition Unix had at the time it was released. A cut down version could've also made inroads into the desktop market (for business users).
If it had been priced at the same level as VMS or mainframe OSes, I am not sure.
Plus, maybe the Xerox PARC attempts would have been more successful if there hadn't been a cheaper UNIX workstation as an alternative, in spite of how they managed the whole process.
> "If it would be priced at the same level of VMS or mainframe OSes, I am not sure."
Why would it have needed to be priced that high? We're talking about software here, the cost of reproduction is close to zero. It could've competed in the same market space as CP/M.
As someone who doesn't know a whole lot about Bell Labs, its history, how it worked, etc: Why and how did it produce so much great research output? What was different or special about Bell Labs? Answers as well as pointers to other material appreciated!
They had an incentive to doggedly seek IP so that they could remain as entrenched as possible. They didn't plan on actually profiting from it in most cases, they just wanted to be the first ones there so that nobody else could be.
Especially since Bell Labs sat within a state-sponsored (enforced?) monopoly until the mid-80s, so there was no direct concern about revenue. Which is not to belittle them in the least as they made outstanding organizational decisions, and had the most enviable pure research->development->production pipeline to this day.
In terms of R&D in the USA, the closest comparison is IBM (which also enjoyed a monopoly for some of its existence). MS's monopoly, with the benefit of hindsight, was feeble in comparison even to IBM's -- though they, like IBM, are absolutely one of the greats and have redeemed themselves, particularly in the past few years.
The author confused continuity of the DECISION FUNCTION with continuity of the OUTCOME CURVE.
In other words, an algorithm such as "keep trying to make a decision until you notice that t > C, after which always pick the left one" will in fact NOT have a continuous outcome curve, despite the decision function being continuous.
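To make that concrete, here is a toy numerical sketch (my own made-up drift model, not anything from the article): each trajectory varies continuously with the initial bias x, yet the final outcome jumps as x crosses a threshold.

```python
import numpy as np

# Toy illustration (a made-up model, not the author's): the state evolves
# continuously in time and in the initial bias x, but the rule "if still
# undecided at t > C, always pick left" makes the *outcome* curve discontinuous.

C = 1.0      # deliberation cutoff
RATE = 2.0   # exponential drift rate toward whichever side x leans

def outcome(x, dt=1e-3):
    """Final choice as a function of initial bias x: -1 = left, +1 = right."""
    t, state = 0.0, x
    while abs(state) < 1.0:        # still undecided
        if t > C:
            return -1              # cutoff reached: forced to pick left
        state += RATE * state * dt # continuous drift away from 0
        t += dt
    return 1 if state > 0 else -1

# Sweeping x shows an abrupt flip near x ~ exp(-RATE * C): the trajectories
# are continuous, the outcome curve is not.
for x in np.linspace(0.05, 0.25, 9):
    print(f"x = {x:.3f} -> {outcome(x):+d}")
```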
Even the best scientists are not machines. It is naïve to think that the best papers are flawless. (I used to think so when starting grad school.) They are not.
I should admit that after writing this, I went back and forth with him over email and he convinced me he was right, after all.
That continuity assumption he made is there in Newtonian mechanics and other very widely accepted models. Something just feels off about the whole assumption; maybe I've spent too much time with digital computers. Those models also have a hard time explaining the irreversibility of time.
Unfortunately, my prior reply here was flagged, but I'd like to try again. Here's a quote from Craig Federighi, senior VP of software engineering, less than half a year ago:
“Our practices tend to reinforce a natural selection bias — those who are interested in working as a team to deliver a great product versus those whose primary motivation is publishing,” says Federighi.
In my opinion, this gives you a view of the thinking that goes inside Apple. Publishing papers correlates negatively with being a team player and delivering great products, and here Apple is on record as saying that they do not want that sort of people working there.
> those who...deliver a great product versus those whose primary motivation is publishing
Interesting quote. Federighi seems a tad condescending, implying that producing great products and publishing papers are mutually exclusive.
Whilst Apple might expect their engineers to toil away anonymously under NDA without much recognition outside the organization, the world of academic research does not work like that at all.
I guess the proof of the pudding is in the eating, and in my experience Apple's AI efforts are far behind their competitors'. I for one am glad Apple is going to start publishing research - it should attract talent, foster sharing, and ultimately result in better AI products.
Microsoft is 23 years older than Google and it usually takes time for a corporation to create an R&D lab. Given Google's rich academic roots this happened faster there than usual. Google is doing a good job given that it's a relatively young company. I hope their research output continues to increase in quality and volume.
If you go by publication count alone, I suspect that IBM Research is still near the top. They have a large research organization that made some fundamental contributions in computer science in everything from speech recognition to databases.
And of course, Microsoft has been in business 41 years, whereas Google has been around for 18. I'm guessing publication rates between the two are comparable. I wouldn't be surprised if IBM's were similar. All three are highly committed to research.
Regardless, comparing the number of papers is meaningless. A better metric is comparing how many significant advances resulted from research funded by the company.
Using the second metric, the winner in all cases - without question - would be AT&T Bell Labs.
Perhaps we could develop an algorithm that uses the number of citations from other papers to rank the importance of each paper, a "PageRank" if you will.
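As a rough sketch of what that could look like, here is plain PageRank run over a tiny, entirely made-up citation graph (a real version would need to handle dangling papers, self-citations, venue effects, and so on):

```python
import numpy as np

# Toy "PaperRank": run PageRank over a (made-up) citation graph.
# cites[p] = list of papers that paper p cites.
cites = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": [],
    "D": ["A", "C"],
}

papers = sorted(cites)
idx = {p: i for i, p in enumerate(papers)}
n = len(papers)

# Column-stochastic transition matrix: each citation from p "votes" for the cited paper.
M = np.zeros((n, n))
for p, refs in cites.items():
    if refs:
        for r in refs:
            M[idx[r], idx[p]] = 1.0 / len(refs)
    else:
        M[:, idx[p]] = 1.0 / n   # papers citing nothing spread their weight evenly

damping = 0.85
rank = np.full(n, 1.0 / n)
for _ in range(100):                             # power iteration
    rank = (1 - damping) / n + damping * (M @ rank)

for p in sorted(papers, key=lambda p: -rank[idx[p]]):
    print(f"{p}: {rank[idx[p]]:.3f}")
```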
There is something similar for ranking people. H-Index. Given the list of people in each lab it might be possible to quantify the output of a group using that.
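For what it's worth, the h-index itself is trivial to compute from per-paper citation counts; the hard part would be deciding how to aggregate it across a lab's members. A minimal sketch:

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Made-up example: five papers with these citation counts give an h-index of 3.
print(h_index([10, 8, 5, 3, 0]))  # 3
```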
Funny, I'm working on exactly that for my Bachelor's major project next semester. I don't have it figured out yet, but it's a problem I'd like to solve.
What's even more amazing is how they followed up with the invention of solar cells, CCD for digital cameras, the MOSFET, etc. So not only did they invent the base (transistor), but they continued to make more inventions using it.
More groundbreaking than any other single development - probably; more than all of them combined? Not a chance. Other 20th century developments include flight, rocketry, the United Nations, quantum physics, splitting the atom, the M.A.D. principle, assembly-line production, robotics, and analog computers (including those used for AA targeting systems in WWII, predating the transistor), among many, many others.
It's easy to romanticize the old Ma Bell especially given the fantastic work done by Bell Labs.
I'd also remind you of the cost of a telephone system with monopoly status that for a long time didn't even let you install your own phones in the house. And which led to skits like Lily Tomlin's "We don't care. We don't have to. We're the phone company."
Bell Labs was able to exist in large part because AT&T was a quasi-government regulated monopoly. You couldn't really have one without the other. (And I'm not sure I'd even say AT&T had shady business practices. But they were a de facto monopoly.)
I have no skin in the game in regards to who produces more research. I was just trying to give some data. Also, Microsoft Research has been around since 1991.
# of papers published is a poor metric (Google's published something like 40% as many papers, over a shorter period?)
MSR has done cool and important work, but Google's publications have often been major paradigm shifts. In a lot of ways they're currently way out ahead of everyone else.
MSR was one of the first groups to bring FP into mainstream developer tools, with LINQ and F#.
Investing in dependently typed programming via F*.
Making JavaScript scale via TypeScript (adopted by Google teams instead of Dart).
Sponsoring Haskell and OCaml research.
Researching OS designs that aren't yet another UNIX clone: memory-safe OSes with Singularity, Midori, and Drawbridge, theorem provers for device-driver validation, the P language, micro-kernels.
>Look at Microsoft Research, and their enormous scientific output over the years. IBM and Google look bleak by comparison, and Apple is not even on the chart.
Do you have any metrics to back up this dubious claim? I find it very hard to believe Microsoft is even in the same ballpark as IBM considering all of the research centers IBM has throughout the world and the number of years they've been in business.
My hope is that Apple will continue research on privacy-protecting and privacy-enhancing machine learning, because out of all the big tech companies using machine learning, they may just be the only ones to do that. Some random research for privacy technologies may come out of Google, too, but they are much less likely to actually use them at scale, especially if they conflict with ad revenue.
Which is why I used "paint", not "said". Not that I expect a company to explicitly say in their big day that others were already doing it, of course.
But my impression from the presentation, as someone who had never heard of differential privacy before, was that this was brand new research from some professor, which they contracted to help them apply it to the real world. Definitely not "this is a technology that's used to achieve such and such, this is how it works".
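For anyone else who hadn't seen it: here is the textbook flavor of the idea, the Laplace mechanism for a simple count query. This is just an illustration of differential privacy in general, not a description of whatever Apple actually ships.

```python
import numpy as np

# Minimal sketch of the classic Laplace mechanism for differential privacy.
# Numbers and the emoji-count scenario below are made up for illustration.

def private_count(true_count, epsilon, sensitivity=1.0):
    """Release a count with epsilon-differential privacy by adding Laplace noise.

    sensitivity: how much one person's data can change the count (1 for a count).
    Smaller epsilon means more noise and stronger privacy.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: report how many users typed a given emoji, with epsilon = 0.5.
print(private_count(true_count=1234, epsilon=0.5))
```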
> Researchers say among the reasons Apple has failed to keep pace is its unwillingness to allow its AI engineers to publish scientific papers, stymieing its ability to feed off wider advances in the field.
I don't follow. How would preventing employees from writing papers stop them from reading papers?
Modern AI research depends heavily on building on the work of others, as well as having others build on your work. It's this virtuous cycle that is bringing all the advances we are seeing. Say someone at Apple had invented recurrent networks but never published them. They would miss out on literally hundreds of researchers working on the idea, enhancing the techniques in critical ways, and writing thousands of papers over just a few years that would feed into this cycle. Many of those papers would contribute absolutely essential enhancements needed to productionize things.
Much of current AI research starts with some seed idea, like GANs, that looks cool in small experiments but has tons of blind spots that need to be ironed out. Unlike typical product efforts, where you can probably work your way through with brute-force engineering, advances in most AI-related areas require a massive amount of collaboration, mathematical acrobatics, trial and error, and cross-pollination across fields, consuming many person-years before yielding fruit. In theory Apple can still keep any silver bullets they find secret, but in practice such silver bullets are rare and advances are very incremental, spanning many years and many people.
Yes. Researchers like to publish their research because their publication record is the metric they're evaluated on by the outside world. If you can't publish your research, that significantly reduces your ability to get hired by another top research group, industrial or academic, in the future.
If they could publish their research, other researchers would take their work in directions they hadn't thought of, which they could then bring back in-house.
Basically they can't provide that seed insight that motivates others.
I'm at NIPS right now and Apple does indeed have a decent presence, which is a good sign (even if there are no actual publications yet). Super interesting to see how this will play out compared to their past. There's lots of criticism we can throw at big tech companies, but the fact that many of them are so open about their research, and thus are forcing others to do the same, is pretty cool.
I completely agree. But I still find it surprising that Apple managed to hire Salakhutdinov. He is a huge name in deep learning (with a focus on the math, unsupervised learning, and autoencoders).
So glad that this is happening. It is really inspiring to see the knowledge sharing spirit become the expected default in the community. This is only going to be great for progress in the field!
Not sure how many papers they publish, but I really do use this paper[1] from James Hamilton (literally the Architect of AWS) as the de facto guide for building large distributed services:
Bloomberg wrote the same story an hour ago and it finally got huge traction on HN. My guess is that many people ignore the "new" page and it's all a matter of luck whether 3 or 4 people get a story to the front page where it takes off.
BusinessInsider and Mac blogs tend to be less reputable sources than Bloomberg, whose articles regularly make it to the HN front page. That might have contributed to this particular submission's success.
That's the whole point of publishing, to share what goes beyond currently-available work. But keep in mind that DeepMind is only one institution, not the font of all things AI.