Back in the late 2000s I had my second encounter with the FBI (my first was described in another comment I wrote earlier today). I was hanging out with the world's leading web spammers, and in those circles the 'replace words with synonyms' method was known but considered too low quality for web spam. Works for scientific literature though, where standards are lower.
Some journals are serious, some journals are pay-to-win, some journals are clickbait lists, and sometimes it's difficult to classify them. Anyone in the area knows the journals, knows the research groups, and knows whom to trust. If you are outside the area, it's a big problem. I'm completely clueless when someone posts papers from other areas here. (Not even the name of the university is a safe indicator.)
This reminds me that 20 years ago, if you wanted to be "famous", you could buy 30 weekly minutes on cable TV, probably at 3 am on a Sunday on channel 247. (Or pay more to get a better spot.) Then you could claim you were a TV star! Nobody took them seriously, and there were somewhat reliable metrics to count the number of viewers.
The problem is to define what a serious journal is. It's a hard problem. Every year we have to evaluate T.A. applications in a "fair" way, and how to count the papers almost always starts a civil war in the department.
My understanding is that this is the case with MDPI. For the most part the actual "journals" that publish more than one issue are about as good as other journals. The special issues are mostly trash.
> There is a requirement to assist various types of solar radiation and wind speed measurement and predictions, such as the iterative approach, counterfeit consciousness method, and so on.
Springer has had issues for a long time. During my PhD, within the research field of Genetic Programming, there was an author who only published with Springer. This author always had the best results in the field. Reliably, whenever some new results would come out, this person would publish to Springer with better results. Vague methods, no reproducibility. So when I went and built a non-GP algorithm and improved the state of the art by orders of magnitude, lo and behold, this person was able to improve upon my results. It seems as though there is no actual editorial activity over there at all.
When Docker first came out, one of my advisors and I pondered how we might use it to aid in the reproducibility crisis in academic research. I don't think anything ever came of it, and the situation certainly seems to have deteriorated since. There are a lot of good researchers out there, but I fear many of them do the same kind of dealing with a dumb system that we see in large bureaucracies in the private sector.
In some ways it is worse. Reviewers are expected to put in free labor for the journals and then we have to either put our own work behind a paywall or "pay for the privilege" to have it be open access.
Perhaps we should take a stab at building open-review features (reviewers, journals, reproducibility, and transparency) around arXiv. They have a survey banner on the site right now, so let's suggest some ideas! (Edit: sadly, the survey is quite limited and irrelevant to improving the practice of research.)
> When Docker first came out, one of my advisors and I pondered how we might use it to aid in the reproducibility crisis in academic research.
I’m working on my PhD in computer science, programming languages specifically. We’re lucky that we have the easiest time reproducing our benchmarks and experiments. The big venues in my field all encourage authors to submit artifacts as Docker containers that reproduce all the tables and figures from a paper. Papers whose results reproduce get special badges on the first page. I’ve reviewed a few artifacts, and it can be really nice with Docker.
So, hopefully other fields take note of what we’re doing to improve reproducibility. CS is fortunate that verifying some claim might be as simple as running a shell command.
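For anyone wondering what that looks like concretely, here is a minimal sketch of the kind of artifact container I mean; the file names and the reproduce.py script are placeholders I made up, not anything from a specific venue's guidelines:

    # Hypothetical artifact image: pin the environment, then re-generate the paper's results.
    FROM python:3.11-slim
    WORKDIR /artifact
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    # Re-runs every benchmark and writes the tables and figures to /artifact/out.
    CMD ["python", "reproduce.py", "--out", "out"]

A reviewer then checks the claims with something like

    docker build -t paper-artifact . && docker run --rm -v "$PWD/out:/artifact/out" paper-artifact

and diffs the regenerated figures against the ones in the paper.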
Haha. No, we're just lucky that we get to define our systems so well and our experiments are typically small. We're often closer to pure math than most of CS.
Broadly speaking, the evolutionary computation literature is in a dire state: terrible methods sections, non-sequitur results, endlessly tortured biological analogies. The real successes in the field are old at this point, and a lay person would have a hard time finding even a competent review paper to point them in the right direction.
I talked about this in my paper. Incredibly, a paper by the top GP researchers had come out the year before (iirc) talking about these very problems. They offered a set of standard benchmarks they hoped the field would adopt, so naturally I did, but I haven't paid attention since I graduated so I don't know if it actually took hold.
Benchmarks. That’s one of the big problems in the field; since it’s so hard to get people to agree on what we’re even trying to accomplish with an evolutionary search heuristic, it’s difficult to measure performance objectively.
Yup, methods and reproducibility, as you also mentioned. My work focused on Symbolic Regression, the problem, which I tried to separate from Genetic Programming, the algorithm. The literature refers to SR as the algorithm, but I think that is incorrect. I wrote a deterministic algorithm, Prioritized Grammar Enumeration, and put my code & data on GitHub: https://github.com/verdverm/pypge (pdf there too, don't tell ACM)
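To give a flavor of what "deterministic" means here, a toy illustration (this is not the actual PGE code in that repo, just the general idea of priority-ordered enumeration over an expression grammar, with no random seed anywhere):

    import heapq
    import itertools

    # Toy priority-ordered enumeration over a tiny expression grammar.
    # Every run produces exactly the same sequence: no population, no mutation, no RNG.

    VARS = ["x"]
    BINOPS = ["+", "*"]

    def children(expr):
        # Grow an expression by combining it with each operator and variable.
        for op, v in itertools.product(BINOPS, VARS):
            yield f"({expr} {op} {v})"

    def enumerate_expressions(limit=10):
        tie = itertools.count()  # tie-breaker for equal complexity scores
        heap = [(len(v), next(tie), v) for v in VARS]
        heapq.heapify(heap)
        out = []
        while heap and len(out) < limit:
            score, _, expr = heapq.heappop(heap)  # always expand the simplest candidate next
            out.append(expr)
            for child in children(expr):
                heapq.heappush(heap, (len(child), next(tie), child))
        return out

    if __name__ == "__main__":
        for e in enumerate_expressions():
            print(e)

In real SR you would score candidates by fit against data rather than by string length, but the point stands: the search order is fixed, so results are reproducible by construction.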
Determinism is interesting; I’ve often wondered how much benefit we actually obtain from pseudo-randomness, versus it being a way of coping with the fact that we don’t really have a convincing account of why GAs work.
The author is from China. There are many MANY great things about China and the work they do. But, there is also significant fraud and IP theft, not to mention the policy of accelerationism the CCP has taken against the US. These things suggest a myriad of possible bad faith motivations.
Can anyone offer some insight as to why that junk paper was submitted in the first place? Are the authors trying to farm reputation somehow? To what end, and does it actually work? Are the authors even real people? What's their goal?
Well, the authors' byline says they're from Northeastern University, Shenyang 110169, China and Faculty of Management and Economics, Dalian University of Technology, Dalian 116024, China. I've heard that Chinese universities sometimes have explicit publication quotas or offer cash bonuses for publications. Having never worked at a university in China, I cannot personally verify that though.
MDPI also spams anyone it thinks might be willing to submit a paper or serve as an editor, and does not seem to care much about the quality of the submissions it receives.
Same as every other academic: publish or perish. Collect more citations and note them in your next grant proposal.
The modern academy is fucked, the whole system is being gamed and very little is being done about the mess created. The author here is fighting a tough battle and I wish them continued success.
Publication metrics. This has been a problem for at least the last few decades, especially when it comes to papers from some parts of the world. Before the current breed of low quality journals, we saw this happening with conferences that were often branded with professional societies like IEEE, but if you looked carefully it would be “IEEE Section of (random place in Asia)” and not a conference sanctioned by IEEE itself. Those used to be full of garbage plagiarized papers and junk. I think the idea is that nobody usually reads the papers when they make a decision based on publication metrics, so pumping a CV with garbage often goes unnoticed. It’s not a wise practice: in many places getting caught doing this would be a career ender.
Academia is in trouble. Many fields (ha!) of study are no longer professional, i.e. self-governing and self-regulating in an open, coherent fashion. Cosmology is dead by the hand of Webb. Unearned tenure is rampant. And behind the scenes, the corporatisation of academic publishing is squirming in its seat, whining about putting ads into scientific papers. Then there is the weaponisation of intellectual property threatening the very idea of scientific publishing, sure to be rolled out for federally funded university research, to protect those poor eggheads who just don't understand how their noble work could be stolen and turned against them and the rest of our hard-working people; and of course the new model will give them a portion of the ad revenue, giving them greater academic freedom. "Sci-fi", anyone?
Ha ha ha, no chance of that. Which is worth the speculation that LLMs are built on everything, including off-the-cuff ramblings like mine, and worse, much worse... and of course LLMs are feeding back on themselves.
To defend myself: while my spelling and punctuation are, are, are, spontaneous, ha!, and often creative one-offs, I do attempt to be strict about my own fact checking and try to stick to things that are verifiable using first principles and generally accepted practices. And this practice is what's missing from academia (academonia, academonics), along with the willingness to fail, playing to lose if you will.
If the outdated 3b model running on my phone can identify the tomato herding excerpt as nonsense, without fancy prompting, maybe they should at least run papers through an automated review process? I can loan them my phone, as long as it gets credit for the review.
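Only half joking: a crude screening pass is cheap to wire up. Here's a sketch using llama-cpp-python with a made-up model path and my own prompt, not anything a publisher actually runs:

    # Rough sketch of an automated sanity check on submissions.
    # Assumes llama-cpp-python is installed and a small quantized instruct model
    # sits at the (placeholder) path below.
    from llama_cpp import Llama

    llm = Llama(model_path="models/small-3b-instruct.Q4_K_M.gguf", n_ctx=4096)

    abstract = open("submission.txt").read()
    prompt = (
        "You screen journal submissions. Does the following text read as coherent "
        "scientific writing, or as machine-spun, synonym-replaced nonsense? "
        "Answer COHERENT or NONSENSE, then one sentence of justification.\n\n"
        + abstract
    )
    result = llm(prompt, max_tokens=120, temperature=0.0)
    print(result["choices"][0]["text"].strip())

It won't catch subtle fraud, but this level of check is all it would take to flag the tomato herding excerpt.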
There is no excuse for this kind of abdication of duty on the part of publishers. Their entire value proposition is based on their lack of credulity. This pretty much means they serve no purpose at all. They should all just put their shit in boxes, go home, and turn off the lights. Oh wait. Maybe that’s what happened.
Each instance of this sort of thing provides more ammunition for reducing funding for academia. Expect big cutbacks in US government support for research.
Yeah, all these academics that are only in it for the big paycheck need to be rooted out. The real problem with academia is that there's too much money in it. Researchers are simply rolling in cash; if they had to worry about where their next meal came from, they might reconsider. PhD students famously have no understanding of adversity, and if we managed to fix how secure and lucrative academic careers were, we might make some real progress here. We need more adjuncts dying.