Hacker News
[flagged] ChatGPT has 190 citations on Google Scholar (scholar.google.com)
55 points by georgehill on Nov 16, 2023 | hide | past | favorite | 26 comments



The actual article generating all 190 citations:

https://link.springer.com/article/10.1007/s12195-022-00754-8

> "Author's note: This article was written by the ChatGPT chatbot, in response to prompts from MK. That human-chatbot conversation is presented here, without editing."


Embarrassing that they published that. Should not have been more than a blog post.


I actually think the opposite.

This underlines two things at once: the power of LLMs applied to biology on the one hand, and the future of scientific publishing integrity on the other.


>the power of LLMs applied to biology

Could you substantiate this? There is no connection to biology/bioscience/etc. in the text. That is why it's embarrassing that they published it - it has nothing to do with the journal. There are so many other journals where it might make sense to publish such a thing (AI/education/science studies/etc.). But here it just seems like they couldn't manage to get a proper editorial for the issue.


I wouldn't exactly call it "powerful" with respect to LLMs applied to biology. It reads more like an opinion piece with some background context (which is not bad) and fake references.

Even though it was published in Cellular and Molecular Bioengineering, it really doesn't have anything to do with Cellular or Molecular Bioengineering.


[Author’s note: These are not real references, unfortunately.]


That's actually a very interesting paper.


This is one paper with 190 citations and one paper with no citations.

The paper is only 2 pages in a prompt-response format, unedited. The prompts are specific and ask for paragraphs about chatbots, AI and plagiarism. The author's note at the end, after ChatGPT was asked for references and gave several, is:

> Author’s note: These are not real references, unfortunately.

Edit: One more point -- Even though it was published in Cellular and Molecular Bioengineering, it really doesn't have anything to do with either. It's a quick read.


Is it right to recognize ChatGPT as a scholar? Shouldn't we just treat it like any other software tool?


We absolutely should treat it just like any other software tool.


- software tools are cited (you often will cite a paper describing the tool). If a tool generates a value and a bug is later discovered, it's important that the tool was cited.

- one purpose of a citation is to make a clear distinction between which work/words are your own and which are not. Are you making a statement because it's based on past work, your own inference, or the hallucination of an LLM? With ChatGPT it's easy to get confused

- there is also the more general issue of academic integrity. You shouldn't submit other people's/machine's words as your own

Maybe more correctly ChatGPT should be on the authors' list ;)


They’re all from this paper, which is written by ChatGPT with some prompts from the author: https://oa.mg/work/10.1007/s12195-022-00754-8


190 citations sounds impressive, but considering they mostly come from possibly the two fields that gather citations the quickest - biomedical sciences and artificial intelligence - this shouldn't be too surprising?

Shalosh B. Ekhad, OTOH, is the real deal -

https://sites.math.rutgers.edu/~zeilberg/ekhad.html

https://en.wikipedia.org/wiki/Doron_Zeilberger


Is this the beginning of the LLM ouroboros? Does ChatGPT do anything different with training data it produced?


How would it know? If anything this is one of the only places where it’s not an ouroboros because of the citations.

But yeah, I have to imagine LLM-produced material is already on its way back into the next models. I wonder how this problem is perceived among the people building these systems. Given that they're not yet near ASI or AGI, it's gotta be close to an existential issue, no?

Maybe not “your company will collapse,” but “you will not be able to get further improvements.”

Would love to hear thoughts from someone who is actually addressing this (or knows why we don’t need to).


There is a demand for low-background steel, steel produced before the first nuclear bombs were detonated, for use in Geiger counters and the like. It's usually salvaged from the wrecks of sunken WW1 ships, as that is the only way to guarantee no contamination.

The same will happen to data sets from pre-2022, everything beyond that will be rapidly polluted with AI nonsense.


In this instance, it would know because of the citations.


As I noted above :)


A paper produced by ChatGPT is plagiarism taken to its logical extremity.


I wrote some thoughts on this topic down a few months ago:

https://www.monsterwriter.app/chatgpt-in-academic-writing.ht...


First listing: 'ChatGPT: Bullshit spewer or the end of traditional assessments in higher education?'


I love “World Renowned Scholar”.


ChatGPT - World Renowned Scholar


ChatWRS+


Great, we have a stochastic parrot writing scholarly articles.

This will not make everything on the Internet even more difficult to separate from bullshit. /intense sarcasm


Ah great, so they are now spamming everything. Not long ago that was illegal.



