LeCun's 2022 paper on autonomous machine intelligence does not cite prior work (idsia.ch)
164 points by hardmaru on July 7, 2022 | 88 comments



LeCun's post is not even a paper, as stated in the foreword of the document itself. But thanks for bringing it up, I will read it.

> This document is not a technical nor scholarly paper in the traditional sense, but a position paper expressing my vision for a path towards intelligent machines that learn more like animals and humans, that can reason and plan, and whose behavior is driven by intrinsic objectives, rather than by hard-wired programs, external supervision, or external rewards. Many ideas described in this paper (almost all of them) have been formulated by many authors in various contexts in various form. The present piece does not claim priority for any of them but presents a proposal for how to assemble them into a consistent whole. In particular, the piece pinpoints the challenges ahead. It also lists a number of avenues that are likely or unlikely to succeed.


"intrinsic objectives" and "hard-wired programs" sound alike. Given that both must be designed.


Meaning with defined goals (intrinsic objectives) instead of defined operations (hard-wired programs).


A defined goal is just a defined operation with more steps.

It's inescapable.


Once you've committed to performing an operation it becomes a goal. As the old saying goes, it's turtles all the way down...


Which makes us living creatures look rather different from machines. Not to get religious about it.


declarative vs procedural?

i.e. saying "what to do" instead of saying "how to do it"

???


Well, you can't just state "this isn't a paper" at the beginning of a paper to get an exemption from the rules and traditions of scientific discourse. It's like the people believing it's not a crime to pay with counterfeit money if the signature reads "Donald Duck".


It's not the same at all. You can make a blog post or an HN comment, or really anything, and put it out there without following the rules of scientific journals. This is essentially a blog post in PDF form.


It's a position paper; academics publish them all the time, and they're very much a part of scientific discourse. Just different from an experimental results paper.


It's actually on an academic review site called OpenReview where the dispute is ongoing: https://openreview.net/forum?id=BZ5a1r-kVsf

LeCun claims four "main original contributions" and Schmidhuber basically debunks them one by one, for example:

> (IV) your predictive differentiable models "for hierarchical planning under uncertainty" - you write: "One question that is left unanswered is how the configurator can learn to decompose a complex task into a sequence of subgoals that can individually be accomplished by the agent. I shall leave this question open for future investigation."

> Far from a future investigation, I published exactly this over 3 decades ago: a controller NN gets extra command inputs of the form (start, goal). An evaluator NN learns to predict the expected costs of going from start to goal. A differentiable (R)NN-based subgoal generator also sees (start, goal), and uses (copies of) the evaluator NN to learn by gradient descent a sequence of cost-minimizing intermediate subgoals [HRL1].
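
To make the quoted [HRL1]-style scheme concrete, here is a rough, hypothetical sketch (my own illustration, not Schmidhuber's code; the names, sizes, and training loop are all made up): an evaluator network predicts the cost of a (start, goal) pair, and a differentiable subgoal generator is trained by gradient descent, through a frozen copy of the evaluator, to emit an intermediate subgoal that minimizes the predicted cost of start -> subgoal -> goal.

    # Hypothetical toy sketch of the scheme quoted above (not the original
    # [HRL1] code): an evaluator NN predicts the cost of going from start to
    # goal, and a differentiable subgoal generator learns, by gradient descent
    # through a frozen copy of the evaluator, to propose a cost-minimizing
    # intermediate subgoal.
    import torch
    import torch.nn as nn

    dim = 8  # size of a state/goal vector (arbitrary for this sketch)

    def mlp(n_in, n_out):
        return nn.Sequential(nn.Linear(n_in, 64), nn.Tanh(), nn.Linear(64, n_out))

    evaluator = mlp(2 * dim, 1)      # cost(start, goal), assumed already trained
    subgoaler = mlp(2 * dim, dim)    # proposes an intermediate subgoal

    for p in evaluator.parameters():  # freeze the evaluator copy
        p.requires_grad_(False)

    opt = torch.optim.SGD(subgoaler.parameters(), lr=1e-2)
    start, goal = torch.randn(1, dim), torch.randn(1, dim)

    for step in range(200):
        mid = subgoaler(torch.cat([start, goal], dim=-1))
        # predicted total cost of the two-leg plan start -> mid -> goal;
        # gradients flow through the frozen evaluator into the subgoal generator
        cost = evaluator(torch.cat([start, mid], dim=-1)) + \
               evaluator(torch.cat([mid, goal], dim=-1))
        opt.zero_grad()
        cost.backward()
        opt.step()

A real system would of course generate a whole sequence of subgoals recursively; this only shows the differentiable-planning trick the quote describes.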

It will be interesting to follow this.


Background from Genius Makers, the great history of ML:

The press needed heroes for its AI narrative. It chose Hinton, LeCun, Bengio, and sometimes Ng, thanks largely to the promotional efforts of Google and Facebook. The narrative did not extend to Jürgen Schmidhuber, the German researcher based on Lake Lugano who carried the torch for neural networks in Europe during the 1990s and 2000s. Some took issue with Schmidhuber's exclusion, including Schmidhuber. In 2005, he and Alex Graves, the researcher who later joined DeepMind, had published a paper describing a speech recognition system based on LSTMs (long short-term memory neural networks). "This is that crazy Schmidhuber stuff," Hinton had told himself, "but it actually works." Now the technology was powering speech services at companies like Google and Microsoft, and Schmidhuber wanted his due. After Hinton, LeCun, and Bengio published a paper on the rise of deep learning in Nature, he wrote a critique arguing that "the Canadians" weren't as influential as they claimed. Later, when Ian Goodfellow presented his work on GANs (work whose influence would soon reverberate across the industry), Schmidhuber stood up in the audience and chastised him for not citing similar work in Switzerland from the 1990s. He did this kind of thing so often, it became its own verb, as in: "You have been Schmidhubered."


Back in the early 2000s (when I first started getting into this stuff), Schmidhuber's ideas were absolutely off-the-charts revolutionary compared to anything any other lab was doing. The level of creativity coming out of IDSIA was insane. I truly believe he has not been given anywhere near the level of recognition he deserves. Many of the original founders of DeepMind are his direct students, e.g. Alex Graves, who is basically the pioneer of "attention" in neural networks (and we all know what architecture that led to...)


Reading some of Schmidhuber's writings on this issue is a great way to learn the history and development of AI and ANNs that is often missed. At the same time, Schmidhuber's tone and anal-retentive attitude explain why he is considered so annoying.

The field is full of huge egos and overly simplified narratives from a singular point of view.

For example, it's true that Hinton was a torchbearer of neural networks for several decades and was off the mainstream. That gives him well-earned legendary status. But it's also true that other methods worked better until the early or mid 2000s and had more rigorous theory behind them (the theory behind older methods is still better; NNs are still mostly experimenting and wandering). Hinton also had the position and funding to do pioneering research and collect students from all over the world. I remember reading his product-of-experts paper as a student in 2002 when I worked with boosting and bagging, so clearly he was not completely out of sight.


> At the same time, Schmidhuber's tone and anal-retentive attitude explain why he is considered so annoying.

The people calling him annoying are generally hangers-on trying to signal that they're part of the group: people who took a class once 20 years ago and now think they need to come and explain.

He's a lovely man, and he's 100% correct to be angry that people are getting rich and famous while not citing him. It's an ethical disaster.

I really wish HN people wouldn't pile onto him this way.


> The field is full of huge egos and overly simplified narratives from a singular point of view.

Stop it. Priority in publishing is a core topic in science. If these people were at universities, and not Google or Facebook, they'd have been fired for what they're doing.

Schmidhuber doesn't actually have a huge ego. He's just saying "guys, stop re-publishing my papers and claiming you came up with them."

You should really be deeply disgusted with Hinton right now, not explaining away Schmidhuber's ego. Hinton is being a plagiarist *and he knows it*.

This isn't an accident and it isn't the first time.

Hinton is committing fraud, according to academic doctrine. This is an automatic termination at most universities on Earth, even through tenure.

It's time to stop making excuses for these rich men cherry-picking other people's work for their own and their employers' gain.

Schmidhuber is, if anything, being remarkably calm and polite, for having been robbed blind by megacorporations of credit and wealth for decades.


> Hinton is being a plagiarist and he knows it.

What are some of the papers he has plagiarized?



These are some examples of non-citations in talks (not papers), not plagiarism. I will discuss a representative example here.

> 9. LBH claim ReLUs enabled deep learning to outperform previous methods for object recognition, referring to their GPU-based ImageNet 2012 winner called AlexNet,[GPUCNN4] without mentioning that our earlier groundbreaking deep GPU-based DanNet[GPUCNN1-3,5-8][DAN] did not need ReLUs at all to win 4 earlier object recognition competitions and to achieve superhuman results already in 2011[GPUCNN1-8][R5-6] (see Sec. XIV).

If we click and look into the details, AlexNet won ImageNet, a general-purpose image recognition benchmark, whereas DanNet worked on specific domains: Chinese handwriting recognition, mitosis detection, etc. So DanNet is not comparable in impact to AlexNet at all. ReLUs are in all complex DNNs now; that wouldn't have happened if ReLUs were redundant, as Schmidhuber implies.

https://people.idsia.ch/~juergen/computer-vision-contests-wo...


You conveniently left out the plagiarism part regarding ReLUs:

> 8. LBH devote an extra section to rectified linear units (ReLUs), citing papers of the 2000s by Hinton and his former students, without citing Fukushima who introduced ReLUs in 1969[RELU1-2] (see Sec. XIV).

This is only one of many concrete examples given.

DanNet obviously worked on all kinds of image data, otherwise it would not have won all those competitions before the similar AlexNet. However, the CNN pioneer was Fukushima who introduced CNNs and ReLUs.


> You conveniently left out the plagiarism part regarding ReLUs:

You are arguing 100% in bad faith. I specifically cited #9 because it is absurdly unreasonable to counter the entire gish gallop. Your counterargument is to cite #8.

You are well aware that not citing an earlier paper with different implementation and results is not plagiarism. There is absolutely no evidence of plagiarism anywhere.

If you drop the word "plagiarism" and replace it with "priority in invention" the allegations still don't stick, as I explained for #8.

It is one thing to say that Schmidhuber did not get due credit, but quite another to call Hinton a plagiarist.

Following up on your logic is absurd, because I can conveniently state that backprop is just the chain rule in differentiation by Newton and everyone else has plagiarized from him. And ReLU was plagiarized by Fukushima from neuroscience researchers.

DNNs are an empirical engineering technique. Priority in proposing a technique is not remarkable. Most techniques like ReLU and backprop are straightforward to develop and understand. What matters is the absolute performance gain over SOTA techniques.

I cannot claim priority over Fazlur Rahman if I had the idea to build skyscrapers using a tubular design. This is not theoretical physics. Building a skyscraper is an engineering problem. You have to actually build the skyscraper to claim victory.


The key result is not the introduction of ReLU; this is a misdirection. The key result is the outstanding performance on a general image dataset by AlexNet. If the predecessors did all of the work, why was Hinton's lab the first to produce these results?

ReLU is an absurdly simple gate. The question revolved around its effectiveness, which was proven by Hinton's lab.
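
For anyone who hasn't looked at it, "absurdly simple" is not an exaggeration. Here is the whole gate and its (sub)gradient in plain NumPy, just as an illustration (nothing from the papers under discussion); the hard part was ever showing it works at scale:

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)        # rectified linear unit: max(0, x)

    def relu_grad(x):
        return (x > 0).astype(x.dtype)   # subgradient: 1 where x > 0, else 0

    x = np.array([-2.0, -0.5, 0.0, 1.5])
    print(relu(x))       # [0.  0.  0.  1.5]
    print(relu_grad(x))  # [0. 0. 0. 1.]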

The key result is the outstanding performance on ImageNet. If Schmidhuber was the actual pioneer, why wasn't he able to produce the same results before Hinton?

NNs were known to work for handwriting recognition since the '90s (including papers by Hinton). DanNet being able to do it for Chinese characters in the 2010s is unremarkable.


> The key result is not the introduction of ReLU; this is a misdirection. The key result is the outstanding performance on a general image dataset by AlexNet. If the predecessors did all of the work, why was Hinton's lab the first to produce these results?

When Fukushima published ReLUs in 1969 and CNNs in 1979, there were neither decent computers nor competitions. No excuse for not citing him.

> ReLU is an absurdly simple gate. The question revolved around its effectiveness, which was proven by Hinton's lab.

Many good things are simple. They should have cited the creator, no matter how much they profited later from faster computers or novel datasets or the like.

> The key result is the outstanding performance on ImageNet. If Schmidhuber was the actual pioneer, why wasn't he able to produce the same results before Hinton?

Did his team ever participate in imagenet? Apparently not. He writes about DanNet: "For a while, it enjoyed a monopoly. From 2011 to 2012 it won every contest it entered, winning four of them in a row (15 May 2011, 6 Aug 2011, 1 Mar 2012, 10 Sep 2012)"

> NNs were known to work for handwriting recognition since the '90s (including papers by Hinton). DanNet being able to do it for Chinese characters in the 2010s is unremarkable.

The remarkable thing is that "DanNet was the first pure deep CNN to win computer vision contests." Before DanNet, other methods won the competitions. DanNet changed that.

However, the CNN pioneer was Fukushima who introduced the CNN architecture and ReLUs. Hinton did not cite him.

> You are well aware that not citing an earlier paper with different implementation and results is not plagiarism. There is absolutely no evidence of plagiarism anywhere.

So what exactly constitutes plagiarism? It's not about good or bad faith, it's about checking who did it first. If you are using building blocks from previous papers, you must cite them. Schmidhuber cites the difference between unintentional [PLAG1] and intentional plagiarism [FAKE2]:

[PLAG1] Oxford's guidance to types of plagiarism (2021). Quote: "Plagiarism may be intentional or reckless, or unintentional."

[FAKE2] L. Stenflo. Intelligent plagiarists are the most dangerous. Nature, vol. 427, p. 777 (Feb 2004). Quote: "What is worse, in my opinion, ..., are cases where scientists rewrite previous findings in different words, purposely hiding the sources of their ideas, and then during subsequent years forcefully claim that they have discovered new phenomena."

More quotes: "If one "re-invents" something that was already known, and only becomes aware of it later, one must at least clarify it later, and correctly give credit in follow-up papers and presentations." ... "And the authors did not cite the prior art - not even in later surveys."

This is crucial. Even later they did not cite the original sources.

> Following up on your logic is absurd, because I can conveniently state that backprop is just the chain rule in differentiation by Newton and everyone else has plagiarized from him.

The paper apparently both anticipated and corrected your claim (it wasn't Newton): "Some claim that "backpropagation is just the chain rule of Leibniz (1676) & L'Hopital (1696)." No, it is the efficient way of applying the chain rule to big networks with differentiable nodes (there are also many inefficient ways of doing this). It was not published until 1970.[BP1]"

[BP1] S. Linnainmaa. The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master's Thesis (in Finnish), Univ. Helsinki, 1970. See chapters 6-7 and FORTRAN code on pages 58-60. PDF. See also BIT 16, 146-160, 1976. Link. The first publication on "modern" backpropagation, also known as the reverse mode of automatic differentiation.

> And ReLU was plagiarized by Fukushima from neuroscience researchers.

Really? Can you prove this? Do you have a reference?


I believe it's reasonable for him to request a citation if significant results of his works are quoted.

LeCun would probably also be upset if he read about his work in the newspaper, but mis-attributed to someone else.

Also, millions in funding money depend on stuff like this. If LeCun can pretend to be the inventor, he gets the grant money. If Schmidhuber gets the citation he seeks, he can use that to apply for new funding. So it's not just prestige, these citation issues have a large monetary impact, too.


[LEC22a] is a position paper. LeCun is expressing his view of how to go forward. It's not a technical paper describing something that works. The ideas he describes have been floating around for decades and are considered good ideas, except nobody has made them work; hence the position paper about the future.

Schmidhuber points out everything I said above. JEPA is an old idea, etc. He also points out that the paper is a "tabloid" article. Schmidhuber does not deserve to be cited for general ideas just because he wrote a paper on a few of them.


Anyone who had a passing interest in ANNs in the '90s would not miss Hinton or Sejnowski or the older work of Rosenblatt or Kohonen, etc. They are in a different league. TBH I did not encounter Schmidhuber or Bengio or LeCun (the latter only in passing).


In my opinion, the difference that Hinton's students made was using GPUs to run deep learning models. First in 2010, when publishing Theano at NIPS, making deep learning on a single compute node with a large GPU very approachable (using a Python framework, a precursor to TensorFlow). Then in 2012, with Alex Krizhevsky masterfully using two GPUs to build the landmark model AlexNet, which outperformed traditional computer vision models and was easy to use in practice.


Funny ...

According to [1], good old backprop on GPUs (with the training pattern distortions of [42,43]) was capable of solving complex tasks without any unsupervised pretraining.

From [1]:

> Our team then showed that good old backpropagation [A1] on GPUs (with training pattern distortions [42,43] but without any unsupervised pre-training)

I can't verify since I have a barely stable connection and little battery, but somebody else should be able to.

[1] https://people.idsia.ch/~juergen/firstdeeplearner.html

[42] H. Baird. Document image defect models. IAPR Workshop, Syntactic & Structural Pattern Recognition, p 38-46, 1990

[43] P. Y. Simard, D. Steinkraus, J.C. Platt. Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis. ICDAR 2003, p 958-962, 2003.


Sorry, what's your contention? Or do you have one?

Backprop is backprop on CPU or GPU; it just runs (way) faster on GPU. So while you can do some stuff with NNs on CPU, you can do more stuff with NNs on GPU.

Further, the sources you reference are from before advances made in software and hardware for massively parallel GPGPU architectures (CUDA), so it's kind of a moot point anyways. Those researchers may have had access to such a system, but you definitely couldn't just `pip install pytorch` on your gaming GPU in the 90's...


> Sorry what's your contention?

Time of publication. I merely pointed out that Schmidhuber et al's results were published long before Hinton et al's work (I included the year of publication for a reason). Ergo, the comment in support of Hinton is refuted, and my comment gives further validity to Schmidhuber's claims of misattribution and plagiarism, while pointing at yet another paper tiger in the claims that the three pioneered the field.

If anything, Schmidhuber may have deserved a Turing award on his own, and what we see is the result of politics.

This is pure speculation, but to me the whole ordeal hints at manipulation by the companies where 2 of the 3 worked, which would essentially increase the pool of candidates the respective companies have, merely by association with their employee.


Ah, that makes sense. Thanks for clarifying.


Dan Ciresan's DanNet ran on GPUs before Hinton's students did.



Isn't backprop just reverse-mode AD, which has been around since the '60s? I get the impression that a lot of physicists and scientific computing researchers have been rolling their eyes at various ML "breakthroughs" over the years.


My point was not backprop, but time of publication.

However, "reverse-mode" AD is not as simple as just caching results.

> BP’s continuous form was derived in the early 1960s (Kelley, 1960; Bryson, 1961; Bryson and Ho, 1969). Dreyfus (1962) published the elegant derivation of BP based on the chain rule only.

> explicit, efficient error backpropagation (BP) in arbitrary, discrete, possibly sparsely connected, NN-like networks

But that wasn't close enough to current BP:

> BP’s modern efficient version for discrete sparse networks (including FORTRAN code) was published by Linnainmaa (1970). Here the complexity of computing the derivatives of the output error with respect to each weight is proportional to the number of weights. That’s the method still used today.

See [1] for more details.

I do not know the exact implementation details of Linnainmaa's BP, but there are all sorts of improvements to plain reverse-mode autograd, through things like vector-Jacobian products, that save an enormous amount of memory.
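
For anyone curious, here is a toy sketch of the reverse-mode idea (my own illustration, heavily simplified; Linnainmaa's version and modern frameworks are far more careful about efficiency): one forward pass records the local derivatives, one backward pass propagates adjoints through the recorded graph.

    # Toy reverse-mode AD over scalars (illustration only; not how Linnainmaa
    # or any real framework implements it, but the same idea: record local
    # derivatives during the forward pass, then propagate adjoints backwards).
    class Var:
        def __init__(self, value, parents=()):
            self.value = value
            self.parents = parents   # pairs of (input Var, local derivative)
            self.grad = 0.0

        def __add__(self, other):
            return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

        def __mul__(self, other):
            return Var(self.value * other.value,
                       [(self, other.value), (other, self.value)])

        def backward(self, adjoint=1.0):
            self.grad += adjoint
            for parent, local in self.parents:
                parent.backward(adjoint * local)

    x, w1, w2 = Var(3.0), Var(2.0), Var(-1.0)
    y = w2 * (w1 * x) + x             # tiny "network": y = w2*w1*x + x
    y.backward()
    print(w1.grad, w2.grad, x.grad)   # -3.0 6.0 -1.0

The point in the quote above is that doing this for a whole network costs roughly one extra pass, i.e. time proportional to the number of weights.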

Nocedal's "Numerical Optimization" covers a great deal on how to implement autograd, and I can easily recommend it to anyone interested.

[1] https://www.reddit.com/r/MachineLearning/comments/e5vzun/d_j...


Small correction, Theano was made in Bengio's lab.


I recall watching a video of Schmidhuber presenting. He comes onstage, introduces himself, and then makes a joke ("You again!"). It's a funny joke. And it's self-deprecating. I remember thinking to myself, "This is a formidable German." That and the Gödel machine: I'm a fan.

There's a simple and obvious way to solve this: cite the papers. It's traditional, it's noble, it's not hard, and it makes you look good.

I mentioned a couple of weeks ago something I learned reading an old interview by Bill Moyers of Isaac Asimov, that in ~1900 three different people, Hugo de Vries, Carl Correns, and Erich Tschermak von Seysenegg, discovered the laws of genetics, checked the literature, and found the work of Mendel from half a century earlier. Each of them credited Mendel.


You do not have to cite every paper under a specific topic nor do you have to cite the first paper that introduces a paper. Why? Maybe those papers are not well written. Maybe a recent paper covers the idea better, etc.

So no, he does not have to be cited just because.


> You do not have to cite every paper under a specific topic

Strawman.

> nor do you have to cite the first paper that introduces a paper.

Did you mean "first paper that introduces a concept."? In any event, also a strawman.

> Maybe those papers are not well written.

Hypothetical. (Have you read the papers? I've read some and I would say they are well-written.)

> Maybe a recent paper covers the idea better, etc.

Wonderful! Should still cite prior work.

> So no, he does not have to be cited just because.

Strawman.

If you're going to argue at least do it well.

The options are: cite him, slight him, or prove him wrong.

I think he's made it clear that he won't accept slights without a fight, so one can either show him where he's wrong, cite him as he plainly makes clear one should, or admit that he's being shut out for unscientific reasons.


I'm not sure you know what Strawman means. Are you a researcher? Have you published research papers?

>> The options are: cite him, slight him, or prove him wrong.

Again, are you a researcher? These are not the options. Also what even is "slight him" or "prove him wrong"? My comments above are about citation practices not this one researcher in particular.

>> I think he's made it clear that he won't accepts slights without a fight, so one can either show him where he's wrong, cite him as he plainly makes clear one should, or admit that he's being shut out for unscientific reasons.

There is nothing to fight about nor anything to "prove". There are many reasons to cite an author, and being the first to publish a paper many years ago about a topic does not automatically warrant a citation. That's not how science works. It's how (unfortunately) the academic churn of paper writing works at times.

FYI: it's not "unscientific" to not cite a paper. It's almost impossible to cite all relevant works, so the author of a paper selects the most relevant. Obviously this is biased and can become political (e.g., Schmidhuber), but flaming online about not being cited doesn't mean you must cite an author.


> A straw man (sometimes written as strawman) is a form of argument and an informal fallacy of having the impression of refuting an argument, whereas the real subject of the argument was not addressed or refuted, but instead replaced with a false one.

https://en.wikipedia.org/wiki/Straw_man

Good day sir.


That's my point. I didn't replace the argument with any other. My original comment concerned citation practices whereas you're arguing about something else.

So it's not a straw man, but rather that we're having different discussions.


It is important to note that, as of right now, the paper is full of citations and references, including to Schmidhuber [0].

[0] https://openreview.net/forum?id=BZ5a1r-kVsf


The title of the article is "LeCun's 2022 paper on autonomous machine intelligence rehashes but does not cite essential work of 1990-2015" but had to be shortened to fit the character limit of Hacker News titles.


There are 7 pages of citations. Not every somewhat related thought anyone has ever had or published will be cited. But incidentally, it is nerdily interesting to think of the bibliography section as a place to display contempt by omission that is subtle to most but glaring to a few.


lecuna (noun): An empty space in a list of citations where the works of Jürgen Schmidhuber should appear.


This is somewhat funny and very smart and can only be downvoted by people who don’t speak fluent Latin.


For those who never had a Latin class, could you explain the joke? Thanks!



Lacuna is proper English too, though.


Very clever.


That's because it really isn't an academic paper. He clearly stated that he was publishing it as a manifesto of what he would be researching going forward.

Would it be different if he just published it on a Medium or Substack blog?

I'm not an academic, so I don't understand these weird feuds, but it seems pretty ridiculous to call someone out for lack of citations when they clearly disclosed the nature of the writing as somewhat unofficial and opinion-based. Maybe there's something I don't understand about those academic circles.


There is a giant leaderboard called “citation count”. You get a real time “score” that can define your career prospects.

So, like any system with a "score", people become warped; they focus on the score above all else. So this person thinks they aren't getting their magical citation "points" and is mad.

Nobody owns ideas and very few ideas are original, or much more important, timely.

If anybody is getting screwed it’s the science fiction authors!


It's him again.

That must be enough to give context on who I am talking about.

I do not know what to tell you about him. Sure, he has explored a lot of topics and never got the limelight like others. But I think it's time for him to stop gaslighting other researchers just because they didn't cite his papers. Sure, they may simply never have read them.

He can make his work stand out by publishing better papers and better designs.

He could have written this article before, too. But why did he write it only after LeCun wrote his?

I am sorry I haven't read the article because it's him again.

Edit: After skimming through it, it seems he is doing the same thing that he did with GANs.


We are scientists and can use names. 'him' is Jürgen Schmidhuber. He's a luminary of ML, but he has entered into several attribution controversies. He famously accused Ian Goodfellow of appropriating his work to create generative adversarial networks (GANs) in a public venue (a NeurIPS workshop).

I agree with the parent comment. Ultimately, Science is about advancing the human condition more than celebrating great individuals. While I'm sympathetic to Schmidhuber's feelings he isn't given sufficient credit, he's still an internationally recognized, fully tenured professor.


>Ultimately, Science is about advancing the human condition more than celebrating great individuals.

Giving proper credit to prior work is a fundamental component of scientific progress. Just look at the Chinese academic system to see what happens to science when people are given free rein to plagiarise.


Yes, like Watson & Crick and lots of others have shown over the years by lighting the way with proper accreditation. /s

Science is a means to an end; the 'giving proper credit' bit has been under fire since the day someone figured out that citations can bestow fame (hardly ever fortune), and given the low stakes it is to be expected that that is the battleground. But the number of PhD students who have been screwed out of proper credit to further aggrandize their academic masters can probably no longer be enumerated, so let's not pretend that proper credit is the coin that makes academia function.


>Yes, like Watson & Crick and lots of others have shown over the years by lighting the way with proper accreditation. /s

This just in: A wrong thing becomes right when famous people (allegedly) do it. More at 9.

And that's aside from the fact that you're wrong on the particular case of W&C.


> This just in: A wrong thing becomes right when famous people (allegedly) do it. More at 9.

I'm pretty sure the claim was that it's not so "fundamental", not about right or wrong.


Watson and Crick gave all necessary and proper attributions.


The parent is referring to the fact that Watson and Crick relied on work of Rosalind Franklin without crediting her.


Watson and Crick's 1953 Nature paper says "We have also been stimulated by a knowledge of the general nature of the unpublished experimental results and ideas of Dr. M. H. F. Wilkins, Dr. R. E. Franklin and their co-workers at King's College, London."


Also, Franklin's paper follows W&C in Nature: https://www.mskcc.org/teaser/1953-nature-papers-watson-crick...


I know what they meant. The W&C paper literally thanks her for her work. Stop spreading falsehoods.


> Just look at the Chinese academic system to see what happens to science when people are given free rein to plagiarise.

Can you explain what this means? It's a pretty broad statement.


Please elaborate on what has happened in the Chinese academic system?


>He famously accused Ian Goodfellow of appropriating his work to create generative adversarial networks (GANs) in a public venue (a NeurIPS workshop).

Here is his take on what happened: https://people.idsia.ch/~juergen/scientific-integrity-turing...


I'm sorry if this sounds rude, but "Science is about advancing the human condition" is a bit of a grandiose and pretentious way to put it; people usually do this when they want to get away with something.

Your view of what Science is about is aspirational: it's what we (and not all of us at that) hope is true, not what's actually true. Science is motivated by a mixture of (a) personal prestige and social status-seeking, (b) personal pleasure (some people simply delight in studying and understanding intricate patterns and phenomena), and (c) national prestige of nation states and profit motives of wealthy corporations.

I, for one, don't like it when people pull out the "I'm helping others" bit when motivating things. Everything humans do is pleasure- and reward-seeking; even charity and helping others is pleasure. I would like it if people were always honest about this: people love helping stray cats not because they are a helpless life (the fly stuck in a spider's web is a helpless life), but because cats are amusing and interacting with them gives (some of) us lots of pleasure. Let the "helping others" motivation always be something other people ascribe to you; your own self-stated motivation should always be "because I wanted to", if you want to be honest.

>Science is about advancing the human condition more than celebrating great individuals.

This gives the impression that those 2 goals are somehow in tension, or that they compete for resources. But giving credit, even if you don't know prior work and are actually re-discovering it from scratch, is trivial in this day and age. Paper databases, keyword search, text-similarity search. It's not hard, not compared to actually writing the paper.

I'm not actually interested in the specific controversy, and other people here are pointing out that LeCun's paper is a position paper, and those are generally subject to more lax citation standards, but your statement just irked a pet peeve of mine.


I know Schmidhuber is an important figure in deep learning and I barely know anything about deep learning. The guy is on top of the world and could find better things to do than whine.


Let me plagiarise your work then and tell me what you think.


Well said.

Attribution is important, but it's impossible to know all the literature that is out there, and even when you've tried your best to seek out prior art there is always a possibility that you've missed something or not understood the connection with something previously read. Heck, even when something is similar you may not cite it, since it's so old the relevance is no longer there for a working research paper.

I've always thought that Schmidhuber was toxically uncharitable - what is the point of accusing and blaming? If someone has a similar idea to mine I'd celebrate them, seek to collaborate and encourage. Wouldn't that mean your research program is getting advanced for free? Heck, you should thank them for doing your work for you.


>Attribution is important, but it's impossible to know all the literature that is out there, and even when you've tried your best to seek out prior art there is always a possibility that you've missed something or not understood the connection with something previously read.

It's not like Schmidhuber's papers are obscure.

>I've always thought that Schmidhuber was toxically uncharitable - what is the point of accusing and blaming

You're engaging in toxic victim-shaming. What, you expect him to just bend over and accept people stealing his ideas without giving him credit? Fundamentally the researchers who stole from him were engaging in unethical behaviour and shouldn't be allowed to get away with that without consequence.


> people stealing his ideas

So, ideas can now be stolen after all? I thought the HN consensus was that ideas can't be owned, but implementations can be. Or is it somehow different when we're looking at science?


Yes, it's very different, for two reasons: 1) many ideas in science are highly nontrivial compared to all sorts of crap that gets patented, and 2) as scientists, all we usually care about is getting some credit for our ideas. We want them to be known and used by everyone, though.

Quite a huge contrast to companies that want their ideas for their own so only they can profit from them.


This was not the attitude of top guys in mathematics when I was at university.

Instead, one guy, who won a bunch of major scientific prizes in mathematics, was of the view that when people get scooped by work published in 'obscure' languages or non-English journals, they are still scooped and should stop claiming to have proved those things first. Another guy of similar caliber was happy to read and figure out the ideas of papers in Russian and French maths journals when it was relevant to his work, even though he couldn't read Russian.

Meanwhile, I've heard people speak of Schmidhuber's stuff as obscure because some stuff was in German, which is of course much easier than Russian for English speakers.


>Sure, he has explored a lot of topics and never got the limelight like others. But I think it's time for him to stop gaslighting other researchers just because they didn't cite his papers.

I can't believe you're defending academic plagiarism. His papers were not obscure, they were foundational. So what if he has a bad attitude; you would too if people kept stealing your ideas without giving you any credit.


If anyone is looking for more background: it's Jürgen Schmidhuber. https://news.ycombinator.com/item?id=26804496


How can there not be some kind of consensus around this?

There were tons of citations; why does a simple idea written down need to cite work from 20 years ago?

The threshold of tolerance here is basically ridiculous; authors will end up more predisposed to literature research and citations than to actual thinking.

I suggest if someone feels left out, they can make a claim and the paper can be 'amended or not' and that's that.

How on earth is someone supposed to have an 'idea' without a team of grad students ...


Uh oh.

Jürgen Schmidhuber's Home Page - https://news.ycombinator.com/item?id=30250395 - Feb 2022 (8 comments)

Turing Oversold? - https://news.ycombinator.com/item?id=28522761 - Sept 2021 (303 comments)

Schmidhuber: The most cited neural networks all build on work done in my labs - https://news.ycombinator.com/item?id=28465943 - Sept 2021 (1 comment)

1931: Kurt Gödel shows limits of math, logic, computing, AI - https://news.ycombinator.com/item?id=27536974 - June 2021 (296 comments)

Who Invented Backpropagation? (2014) - https://news.ycombinator.com/item?id=27127611 - May 2021 (1 comment)

Critique of 2018 Turing Award for Drs. Bengio and Hinton and LeCun - https://news.ycombinator.com/item?id=26804496 - April 2021 (20 comments)

Verifying Schmidhuber's Critique of Turing Award for Bengio and Hinton and LeCun - https://news.ycombinator.com/item?id=23649542 - June 2020 (7 comments)

Critique of 2018 Turing Award for Drs. Bengio and Hinton and LeCun - https://news.ycombinator.com/item?id=23642335 - June 2020 (1 comment)

Schmidhuber: Critique of Honda Prize for Dr. Hinton - https://news.ycombinator.com/item?id=22932838 - April 2020 (1 comment)

Deep Learning: Our Miraculous Year 1990-1991 - https://news.ycombinator.com/item?id=21166852 - Oct 2019 (34 comments - tlb has a nice take in the top comment there)

Lecun.ml Redirects to Schmidhuber's Website - https://news.ycombinator.com/item?id=19884691 - May 2019 (1 comment)

Yann LeCun, Geoffrey Hinton and Yoshua Bengio win Turing Award - https://news.ycombinator.com/item?id=19499515 - March 2019 (146 comments)

Jürgen Schmidhuber on Consciousness (AMA on Reddit, 2015) - https://news.ycombinator.com/item?id=19241278 - Feb 2019 (3 comments)

Jürgen Schmidhuber says he’ll make machines smarter than us - https://news.ycombinator.com/item?id=17100132 - May 2018 (26 comments)

This Man Is the Godfather the AI Community Wants to Forget - https://news.ycombinator.com/item?id=17089607 - May 2018 (1 comment)

This Man Is the Godfather the AI Community Wants to Forget - https://news.ycombinator.com/item?id=17073202 - May 2018 (1 comment)

Deep Learning Conspiracy - https://news.ycombinator.com/item?id=16962938 - April 2018 (1 comment)

A Computer Scientist’s View of Life, the Universe, and Everything (1999) - https://news.ycombinator.com/item?id=14094373 - April 2017 (73 comments)

AI Pioneer Wants to Build the Renaissance Machine of the Future - https://news.ycombinator.com/item?id=13412050 - Jan 2017 (28 comments)

Jürgen Schmidhuber: The Problems of AI Consciousness Is Already Solved - https://news.ycombinator.com/item?id=13316530 - Jan 2017 (1 comment)

When A.I. Matures, It May Call Jürgen Schmidhuber ‘Dad’ - https://news.ycombinator.com/item?id=13066646 - Nov 2016 (50 comments)

When A.I. Matures, It May Call Jürgen Schmidhuber ‘Dad’ - https://news.ycombinator.com/item?id=13056794 - Nov 2016 (2 comments)

Ye LeCun replies back to Juergen Schmidhuber blog post #DeepLearningDrama - https://news.ycombinator.com/item?id=9824660 - July 2015 (3 comments)

Critique of Paper by “Deep Learning Conspiracy” - https://news.ycombinator.com/item?id=9807326 - June 2015 (33 comments)

Deep Learning Talk by Jürgen Schmidhuber, Director of the Swiss AI Lab IDSIA - https://news.ycombinator.com/item?id=7694270 - May 2014 (1 comment)


So this is a universal thing in science, nothing new under the sun etc.

What’s changed is that things move faster these days, so the originator isn’t dead yet when you rediscover whatever they came up with.



LeCunT


We've banned this account. Please see https://news.ycombinator.com/item?id=32016347.


[flagged]


Could you please stop posting unsubstantive comments to Hacker News? You've done it a fair bit and we ban that sort of account. I don't want to ban you, because you've also posted some good things, but we need you to stick to the rules when posting here.

We want curious, thoughtful conversation. If you don't have anything of that sort to say, your options are to (1) find a different topic you do have something curious and thoughtful to say about, or (2) not to post anything.

https://news.ycombinator.com/newsguidelines.html


Yikes. What's the gossip?


LeCun being LeCun.


Personal attacks aren't allowed here, and please don't post unsubstantive comments generally. We're trying for a different sort of discussion.

https://news.ycombinator.com/newsguidelines.html


Who cares my shit doesn’t get cited all the time but I don’t blog about it like a bitch.


Posting like this will get you banned here, regardless of how wrong someone is or you feel they are.

If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.



