When I read this post, I imagined the author talking about hard science like physics, chemistry, and other fields that require pretty significant investment in proper tools and a strong understanding of deeply complex topics that have been battle-hardened over decades. I don't think the problem with those fields is that we need more amateur cooks in the kitchen.
However, this post was written by a postdoc in social psychology who is employed in the management division of a business school. I'm ready to believe that a rank amateur could walk into his field and do science just as well as anyone else, and in that light this post makes a lot more sense.
I had a similar thought about the hard sciences (physics graduate dropout) and their inaccessibility to amateurs. Then I moved to the desert, got interested in geology, and much to my surprise learned there's all sorts of work in geology that is accessible. It's the result of the need to physically go look at rocks (at least for structural geology). I'm currently working on a new description of the strata at the end of my street, because the highest-resolution stratigraphic map fails to correctly delineate at least two units of bedrock - i.e., the map does not show them, but they appear to be present. To figure it out, I've had to take several road trips to compare my neighborhood rock to the exposed units in those locations. I find it quite fun and entertaining.
I'm not sure I'd put geology on the same level as say physics. Similarly I wouldn't put astronomy and astrophysics at the same level. This is not to say that there aren't many problems that can be solved by amateurs, but that the utility is quite disproportionate. I also wouldn't say that soft sciences aren't "real", but rather that they have higher rates of error due to the lack of precision and mathematical theory.
But I also wouldn't say that you need a degree to contribute to any of the hard sciences. The utility of a formal education mainly lies in guidance, structure, connections, and, probably most importantly, the ability to dedicate your full time to your education. Being able to ask someone for help is a significant accelerator. All the math you need to get through a bachelor's, and generally even a master's, is available in 20+ year old textbooks. The PhD is a lot of self-study too, and a lot of isolation. But at that point you should have a reasonable idea of direction, although getting lost is common and not necessarily a bad thing. It's probably becoming much easier to do this too, given that not only are lectures and piratable textbooks online, but so is the whole educational structure of a degree, which keeps the amateur from getting lost.
Honestly, I think one of the biggest leaps our species will make will happen when we reach post-scarcity. Either we'll be glued to our screens or we'll all have the freedom to pursue our passions. Having the ability to dedicate all your time to your education, art, or whatever else, without worrying about your basic needs, is quite liberating (even more than school). Maybe we'll even see a large social split between the two groups. But only time will tell.
I actually disagree with this. Science requires a substantial amount of creativity. In fact, creativity is necessary: the nature of breakthroughs requires thinking outside the box and breaking from convention.
What I would say are real limits to creativity are the arbitrary and absurd evaluation metrics we use. The publish-or-perish paradigm encourages quick work rather than good work, which correlates with a decrease in novelty and substance. It makes the process more incremental. It also encourages railroading, as exploration requires more time.
The review process also exacerbates these problems, as the system is not actually aligned with its goals. Reviews are arbitrary, noisy filters rather than quality filters, and we have substantial evidence that, putting it nicely, reviewers are good at identifying bad work but not good at identifying good work (i.e., they default to reject). This causes one to continually rework a project rather than moving on or even making it better, as the feedback generally isn't about how to improve the work, and each new set of reviewers wants to add more and more.
The pursuit of novelty is not only fundamentally anti-scientific but the basis of an existential crisis. To get funding or to pass review, we must be creative in convincing reviewers that we are clever, while ideas generally look far more obvious after they are revealed to you (regardless of the lack of prior work). The need to innovate also prevents replication, which is the basis of science! It is far more difficult, often nearly impossible, to get funding or even approval to replicate work. Science is the process of ruling things out, not confirming things. Each replication has its own set of noise and biases, and this stochastic nature helps reveal hidden confounders. Replication is also quite essential to the learning process, and failing to perform it results in information being passed along like a game of telephone.
But by nature, science is anti-authority and highly creative. You _must_ challenge pre-existing notions (trust but verify) and you must think outside the box. Though I do not blame anyone for missing this given the structures and incentives (shackles) the bureaucratic system has imposed upon us.
> Science requires a substantial amount of creativity.
I would argue that creativity is where all science begins. What more is a hypothesis really, than a science fiction story that has yet to meet reality?
I think a decent theory of information and institutions is the most mission critical thing not only for academia, but for society in general.
With all the resources going to academia, how much of them is focused on understanding the behavior of organizations? It's a reasonably tractable problem that's way underdeveloped, and solving it would allow us to design new institutions that behave the way we want, rather than having to tolerate the ubiquity of organizations whose outcomes are antithetical to the goals of their members.
The first problem to focus on could be assessing the effect of the decade-long induction process required to be considered qualified to make the most minuscule contribution to academia.
I feel like I've noticed academics getting more and more cloistered, many of them rabidly defensive, completely shunning anyone with the slightest criticism of the process, credentialed or not. I'm guessing it has to do with (a) the thinning out of public resources, hence more competition, more consequences to being wrong, and more emphasis on sounding right rather than being right; and (b) the fact that in most fields, the point of diminishing returns has been reached on the foundational axioms that define the field. And yet defining a new field with new foundational axioms is extremely difficult, with scarce resources to explore it.
You could readily define new kinds of mathematics with different axioms. The hard part is building off those axioms to make useful theorems. Some axioms might lend themselves more readily to proving one theorem versus another, or might make it easier to model one thing versus another. We can see this with ZFC versus category theory. Similarly, analyzing fluids in terms of laminar versus turbulent flow yields distinct benefits; the two rely on models with different core assumptions. At the foundation of every field (or subfield) is a model and a set of core assumptions that allow work to be built on top of them. Market economics versus game theory is a good example. The two models hold fundamentally different assumptions. Game theory is studied in economics departments, though, so its full potential isn't tapped: any result from game theory can't too directly contradict what's come from market economics if it wants to be taken seriously in an economics department. If game theory had developed as a branch of sociology instead, we might see very different ideas coming from it. No matter what, it can only be developed in directions it can get funding for.
If we really want to get at the truth, then for every assumption we make, we should also define and investigate a theory with the opposite assumption. For instance: particles are indistinguishable. Great, that has yielded immeasurable results. Now let's fully flesh out at least one model that assumes they are distinguishable and hammer results out of it until they meet experimental tests, too. We will likely find that there are some situations it can describe much more richly than our current models. And that way, people won't go around forgetting it was an assumption to begin with.
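To make concrete how much that one assumption carries (this is a standard statistical-mechanics counting exercise, not anything specific to the proposal above): the number of ways to place $N$ particles into $g$ single-particle states differs between the two assumptions,

$$W_{\text{distinguishable}} = g^N, \qquad W_{\text{indistinguishable}} = \binom{N+g-1}{N}.$$

For $N = 2$, $g = 2$ these give 4 and 3 respectively, because swapping indistinguishable particles doesn't produce a new arrangement. Those counts feed directly into the predicted statistics (Maxwell-Boltzmann versus Bose-Einstein), so the two assumptions really do make different experimental predictions.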
"Science requires a substantial amount of creativity. In fact, creativity is necessary: the nature of breakthroughs requires thinking outside the box and breaking from convention."
And if you do not have adequate depth in philosophy, you may find yourself parsing this seemingly simple phrase in a highly misinformative manner - as just one example: are you measuring things on a relative scale or an absolute scale? Also: are you actually measuring things, or are you....doing something much more interesting, that may not be obvious (and thus not taken into consideration)?
> What I would say are real limits to creativity are the arbitrary and absurd evaluation metrics we use.
Indeed!
> The need to innovate also prevents replication, which is the basis of science!
And also a substantial, rarely discussed drag on progress, across several dimensions!
> It is far more difficult, often nearly impossible, to get funding or even approval to replicate works.
That which is not studied, is unlikely to be understood, and this lack of understanding may register as NULL (as in: it may not be on one's radar, thus it "is" "nonexistent", as fans of science will assure you passionately, "because The Science says"). That science actually says the opposite is of no help - be it religion or science, once the Normie mind is captured, it's a goner.
Gregory (Scotland Yard detective): Is there any other point to which you would wish to draw my attention?
Holmes: To the curious incident of the dog in the night-time.
Gregory: The dog did nothing in the night-time.
Holmes: That was the curious incident.
-------------------
> The pursuit of novelty is also not only fundamentally anti-scientific but the basis of an existential crisis.
An ideological simulation, but also very true for science as it is imho.
> Science is the process of ruling things out, not confirming things.
There's also a substantial streetlight effect going on in science. Of course, such things are unavoidable to some degree, but not the degree to which they currently are in science, and there is also the issue of whether this is realized and disclosed, regularly and without aversion (as opposed to only when one gets caught telling some tall tales about science's abilities and on the ground performance, much of which is unknown and unknowable....maybe this is why science types tend to mock &/or hate philosophy and the skilful practice of it so much).
> But by nature, science is anti-authority and highly creative.
Including to its own beliefs (including invisible metaphysical axioms) and authority?
Science, comprehensively, is what it is - and, what that is, precisely, is unknown. <--- I propose that this is not knowable to most scientific thinkers, including many(!) highly competent scientists.
Typically, the mind won't even allow the proposition to be contemplated, often only heuristic processing (guess according to one's sub-perceptual biases + post-hoc-rationalize, or downvote/delete-via-moderation/throttle) is available....all of which has very real consequences (though, we cannot measure them, thus they do not exist, so we are told and trained to think). Gee, what could go wrong with this approach!! (Possibly related: wow, there sure seems to be a lot of suboptimality in the world....could these two things maybe be related!!???? lol)
> You _must_ challenge pre-existing notions (trust but verify) and you must think outside the box.
Actually, you do not have to - only pretending* to do this is also very effective, and common.
I can only compare physics and computer science, because those are the only domains I have a formal education in. But I'll tell you that in physics, philosophy is well discussed and at the core. The study was called "natural philosophy," after all. There's a reason there isn't a shortage of philosopher-mathematicians and philosopher-physicists. You can't do those subjects without philosophy, but you can do philosophy without those subjects; you definitely lose some rigor, and, to be blunt, I feel that the academics I meet there appear to have just toked a large bowl. There is a reason the mathematicians broke away from the philosophers fairly early on: there was a goal of finding truth rather than endless debate, which runs on charisma and linguistic flexibility. The goalposts are too easy to move as people act smug, weaseling their way into treating provable evidence that contradicts their claims as proof of those very claims (because they're not arguing for truth but rather for intellectual superiority). Because of this, I'd be careful with your diction if you want to actually communicate. I'm sure you know of Hacking, but that's a great place to start for anyone unaware. But the point of science is what Asimov explained in "The Relativity of Wrong": we must not ignore the fact that we get more right over time, and not pretend that all wrongs are equally wrong. That just prevents progress (and is why many philosophers come off as annoying: they try to poke holes without understanding the fabric they are prodding, often pointing at well-known holes as if they were invisible). Not to say that physicists aren't pretentious; they are, and I'm not sure you could get many to even pursue the subject if you didn't promise them a fictitious pedestal of intellectual superiority ;) (It's why you look for those whose eyes light up when they talk.)
On the other hand, I find computer scientists to be lacking in philosophy, and especially in any discussion of ethics. They have a high focus on empirical evidence, but despite discussing the tyranny of metrics they do not seem to internalize it; rather, they place metrics even higher on a pedestal. Maybe it is that they are often jacks of all trades, masters of none, but think they are masters because they did a project that took a week or two in a domain. But if I don't use spacing, who's going to read my text anyways?
Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.
> Could you please stop posting unsubstantive comments
Unsubstantive: not substantial; having no foundation in fact; fanciful
Imagine if you (and others) had to compete on a platform where you can't BRUTALLY misuse words that have specific, broadly accepted meanings, a platform where instead of powerful people (that's you) using language tricks like the one you are using here against the powerless with no repercussions, the tables were turned (as in: this, and many similar behaviors, have been directly targeted in the design as fundamental Human/Reality problems to be solved, once and for all).
Impossible? Query Paul Graham on whether he thinks this is impossible. Paul is a clever man, and I know it.
I've heard it before, Dang. Those rules are written (not necessarily with intent, you must also grapple with emergence) such that it is ALWAYS possible for a way to be found that the rules "have been violated", according to a subjective(!) judgment.
When we write software, or do anything, as Hackers we have to HANDLE subjectivity, and complexity. This is a HACKER website, hard problems are what people here (including your boss) PROCLAIM that we SHOULD handle.
I appreciate that moderating this website is not easy, in no small part because of people like me, but I CHALLENGE you, or Paul (I encourage you to cc him on this, I will take on all challengers) - anyone who "wins" based on an AMBIGUOUS (did you notice you didn't note WHICH (objectively subjective) rule I allegedly violated? I did!) appeal to "the rules" is imho violating the very spirit of NUMEROUS of paulg's excellent blog posts.
What's the deal Dang? Can you force your mind out of ~standard moderation mode? Can you consider the possibility that you are not an Omniscient Oracle, that perhaps someone that holds some different beliefs than you might maybe have a valid point, that maybe you have a MASSIVE problem on your plate that maybe cannot be executed with perfection? When you and I disagree, is it possible that perhaps your subjective disagreement on the subjective matter (which this is - do you disagree?) is not necessarily the most correct answer?
What's you and paulg's goal here? Money? Winning petty arguments? Something else?
I enjoy playing the Climate Change card. Is it not true that it is asserted to be a ~"big deal" here on HN? Well, how well are you Big Brains doing on that front? I've read LOTS of conversations on that topic here on HN, and I see little novelty, and LOTS of rehashed mainstream (non)thinking. Is this the legacy you and paulg want to leave? "Think outside the box"...but don't you dare violate the (local, man-made) Overton Window.
A challenge: set Paul Graham on me: Me vs Him. Off the record. I predict if the "The Rules" advantage is taken away, I will win every argument (because I have at least one trick up my sleeve), in Fact if not in appearance (I have more than one way to handle short term "losses" due to ~loaded dice).
Alternatively, you can always fall back to the winning by fiat approach: I make the rules, thus I win. But what if that Cultural Norm (it is a norm, do you even know this?) is what is causing the problem?
Essentially, I am laying a challenge not just at your feet, but also at Paul Graham's feet: come out from behind your rules, and argue like you are actually serious, not just (silicon valley, Western, 2023, etc) culturally "serious".
Also: feel free to ban me, but if that's your choice I recommend you also delete this thread, as there is a "fairly excellent" chance that I am going to use it as a ~"case in point" once I get "some things" off the ground. I propose that I am not your average forum poster - I have a non-trivial architecture in mind, in the works for years, to address the games played on social media in this era of humanity's evolution, and being a human, I hold grudges. I dare you and Paul, or anyone, to underestimate me.
Or: we could violate 2023 Human norms and cooperate (or, even consider it). But then, that's not easy, and violates the 2023, Capitalist Alpha Male Overton Window of behavior, so "not exactly practical", contrary to paulg's MANY musings.
It's a bit of a pickle, eh? But only if you think...and even then, only maybe.
It's my job to stay in "standard moderation mode", i.e. to try my best to apply the rules even-handedly. Of course it's not perfect, but I can tell you for sure that I didn't reply out of disagreeing with your beliefs, because I have no awareness of what your beliefs are. I don't scan the comments for that—only for whether people are breaking the site guidelines or not, such as by posting flamewar comments.
Not sure why you're bringing up pg - he hasn't had anything to do with running HN in almost 10 years.
I'm not sure I've understood the rest of what you're saying here. It sounds assertive and even menacing ("I hold grudges"), which is a little scary, but it's not clear to me what you're asking for. Are you wanting me to stop moderating based on HN's rules and start arguing with you about whatever beliefs we don't happen to share? That's neither my job nor my preference. People often want to get into arguments with moderators about the underlying issues, but moderation isn't about that, and I don't experience myself as disagreeing with you about underlying issues in any way—for all I know, we agree! I'm just not tracking the threads for that. It's too much to keep track of, and it's better if we don't.
There are plenty of published papers which look absolutely foolish (crazy assumptions, bad design, bad analysis, all sorts of things...). Given that every grad student is supposed to produce a few publications, there is no shortage of them.
What does limit creativity is having science as a full-time job, as this comes with some expectation of results. This is especially true for people pursuing an academic career: a grad student might spend two years on failing projects and then shrug and leave for industry, but someone shooting for a tenured position had better show successes.
Unfortunately I don't know how to fix this. If one does not have to work for some reason, or at least has a high-paying job that leaves plenty of free time, one can do some research... but that does not scale. And plenty of hard-science research requires equipment that is expensive, dangerous, and large.
> What does limit creativity is having science as a full-time job, AS THIS COMES WITH SOME EXPECTATION OF RESULTS.
You're getting to my point. The thing about any research is that there is a lot of failure before success, which makes progress hard or impossible to measure until success arrives.
But the expectation of results is my problem. The tyranny of metrics. The fuzzier a goal is, the fuzzier the metric's guidance is (all metrics are guides, btw). An academic's quality is extremely difficult to measure, and citation counts, h-index, i10-index, and the rest do a shit job. The only way is to be subjective, because honestly only a researcher's peers know whether they are doing good work, and even that is unreliable. I mean, the job is literally to explore unknown areas. I'm not sure why anyone expects constant and big results. If you're constantly finding things, either you hit a gold mine or you are staying pretty local and not doing much exploring.
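To see how coarse these metrics are, here's a minimal sketch of the standard h-index computation (the definition is public; the example citation lists are made up):

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that the author has h papers with >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # still have `rank` papers with at least `rank` citations
        else:
            break
    return h
```

Note that a career of [4, 4, 4, 4] and a career of [100, 100, 100, 4] both collapse to an h-index of 4, which is exactly the kind of information a single-number guide throws away.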
> I'm not sure I'd put geology on the same level as say physics
Physics can explain the geology, just like geology can explain biology.
> I think one of the biggest leaps our species will make will happen when we reach post scarcity.
I can see why some people consider Google a Manhattan Project. When you consider the oversight of us humans it has online (there is no privacy with browser fingerprinting), it can flag people to watch and then make the same discoveries they do.
However, if people built and ran their own search engine, with the same oversight Google and Microsoft (via their OS) have carved out for themselves, then you might just have an edge or be at the front of the pack in terms of discoveries. How do you achieve that? Well, countries with oversight of their telecom networks could do it by legal force, or find some other form of collaboration, forced or otherwise, like universities or company R&D. And that's the trick: there are lots of ways to learn.
>I hereby invite every curious human to do science and post it on the internet.
I wonder if amateur hydrography could be accessible to people who spend a lot of time on the water. I'm a sailor and I've flirted with the idea of sending off-the-shelf camera drones down to shipwrecks that aren't regularly dived, for example. I wonder if sonar imaging is something that's DIYable? I bet even just recording regular depth measurements as you sail along might be useful if enough people were doing it.
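The depth-logging part is surprisingly tractable, since most sounders already broadcast NMEA 0183 DPT sentences (depth below transducer plus transducer offset, with an XOR checksum). A hypothetical sketch of the parsing side - the exact sentences and talker IDs your instruments emit will vary:

```python
def nmea_checksum_ok(sentence: str) -> bool:
    """Verify the XOR checksum of everything between '$' and '*'."""
    body, _, given = sentence.strip().lstrip("$").partition("*")
    calc = 0
    for ch in body:
        calc ^= ord(ch)
    return given.upper() == f"{calc:02X}"

def parse_dpt(sentence: str):
    """Return total depth in metres (depth + offset), or None if invalid."""
    if not nmea_checksum_ok(sentence):
        return None
    fields = sentence.split("*")[0].split(",")
    if not fields[0].endswith("DPT"):
        return None
    depth = float(fields[1])
    offset = float(fields[2]) if len(fields) > 2 and fields[2] else 0.0
    return depth + offset
```

Pair each reading with a GPS fix and a timestamp and you have exactly the kind of crowd-sourced sounding data the comment imagines.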
Sorry I totally missed this over the weekend. I'd definitely be up for that, I'm hoping to be living aboard next year so will have plenty of time to explore some ideas. What general area are you based in? I'm in the UK, will probably be on the south coast by the time I'd be ready to start trying things out.
> There's still some real science out there if you want it. :)
I don't think this is a correct way of representing things.
Science is a process, and it does not depend on finding something no one ever saw before or bothered to look at.
Scientific discovery is also not necessarily tied to the science. Someone can literally stumble upon a fossil and proceed to make it known to the public, but that won't make that person a scientist.
What you actually described is that scientific discoveries can still be made by amateur scientists, and that science is not an exclusive domain of academia and professional researchers.
That doesn't say much of interest though. On humanity's time scale, science has always been the domain of hobbyists. One of humanity's most recent breakthroughs was basically born from amateur social clubs where people got together to show the things they hacked together.
Of course professional scientists with lab budgets have more freedom.
Measuring things precisely is way cheaper than it's ever been. You can weigh micrograms without too much effort.[1] You can buy time/frequency standards for $200 or so that time to the national standards.[2] Float glass is a cheap and relatively flat surface plate that you can buy almost anywhere. Lasers, detectors, computing, everything is dirt cheap compared to the past.
If you have something you want to make, or measure, this is likely the easiest time ever to do so.
Don't worry about being at the leading edge, just do something, have fun, and you might be the first to notice some weird little quirk that turns out to be important.
You can buy an "economy" set of NIST-traceable gage blocks for about $120[1] that are accurate to 50 millionths of an inch. Metric sets are also available in the same price range.
Fluke has an entire line of NIST traceable tools[2]
For a stiff fee, you can ship things to NIST to have them calibrated[3], or buy standard samples.
In the US, NIST traceable calibration can be had for almost anything. I suspect the same is true (calibration back to a national standard) in most of the world.
You can get GPS-disciplined clocks for about $200; their long-term stability is tied to UTC, which is as good as it gets.
If you can afford to spend more, there are devices that can input a laser, and tell you the frequency of that light, down to 1 Hz resolution. They use optical combs, and a lot of physics, and have an internal atomic clock reference.
---
My point is that we have an amazingly affordable web of precision that we can fall back on. It allows people spread across the globe to build interchangeable parts that fit properly on a regular basis. It is freaking amazing.
Soft sciences are easy to look down on because science lacks the systemic tools required to understand systems as a whole. The scientific method is a systematic, step by step process that examines components of reality: reality which is made of interconnected systems.
Both systemic and systematic approaches are required to understand reality [1].
The lack of a systemic method results in a "soft" science that is complex, ambiguous, and inconsistent. The fields within it are in the wild west, looking for a direction.
Humans have a habit of looking at something unfamiliar and thinking to themselves, "I don't understand it; therefore, it is dumb."
Ambiguity can be difficult to handle. Many people take comfort in achieving an objective truth.
[1]"Systems Engineering: A systemic and systematic methodology for solving complex problems." Joseph Kasser, 2019. p. 17
I respect the soft sciences. They ask questions to which we need answers. They may sometimes fall short or lack rigor but it's not like we have an alternative way to address those issues.
At the same time I am disturbed by a soft scientist recommending basement chemistry as a remedy for publication bias. YouTube has already inspired a generation of basement chemists making bromine tubes or dipping things in hydrofluoric acid.
It is difficult and expensive to safely set up a residential space for chemical experimentation.
It is also almost impossible to know everything required without already being an academic who has been involved in the construction of a university lab, at which point you no longer need to set up in your basement.
Where does your drain go? Where will you store chemicals you can't dump in the drain? What's the drain pipe made of? Will your air conditioning overwhelm your ventilation hood? Could your return duct spill fumes into the main living space? Is all your electrical and ventilation equipment explosion proof? Deionized water, PPE, emergency shower, cleaning protocols... the list just goes on and the requirements are written in blood.
Even in a fully equipped space with a trained staff chemistry is inherently risky. It really shouldn't be performed in a building where people have beds.
Botany, entomology, astronomy, geology, sure. All these fields just need more eyes on them. Perhaps biochemistry if you are rigorous about never using volatile reagents, sure. But we don't have much real use for basement chemists screwing around with basement chemistry.
Yeah, but in social psychology you need human subjects ethics approval. What’s the plan for that? No oversight, a public IRB, etc?
Admittedly, it is a huge impediment to scientific research—you have to jump through so many hoops before you are allowed to talk to people. It's kind of crazy.
Turns out it's harder to break into a field when failing means breaking the half million dollar piece of equipment until the vendor can ship out a replacement part!
Even though I disagree with the point of your parent comment, their argument can also be made for mathematics vs social sciences, and those don't differ on how much budget they require.
At least in theory, I don't really think this distinction makes much sense. As the most obvious example, Einstein published a number of his most fundamental papers (including special relativity and mass-energy equivalence) as an "amateur," while working as a low-level patent inspector. And this was at a time when knowledge was quite difficult to access, meaning there really was a meaningful distinction between "amateur" and "professional."
Nowadays that distinction is largely gone. One can access effectively infinite information online, including educational resources from the most well-regarded institutions that exist. One benefit of "amateur" science is that one can endlessly chase exceptionally improbable ideas. Publish or perish, bias against negative results, grants as a performance metric, and other factors mean that the science the overwhelming majority of academic researchers (let alone corporate ones) can pursue is quite limited.
Doing something professionally just means doing it for money and with the authorization of others, not necessarily doing it well. We all know about professional software developers who are forced by their profession, that is, the fact that they do it under contract X for company Y managed by person Z, to do things that they know are complete crap. I mean, SAP is professional. Microsoft or Google lock-in? Highly professional! Shiny even. Supporting RSS? Not professional. A gimmick you're allowed, like what you do outside of work. Or even more brutal: Ignaz Semmelweis. That he was a professional didn't help him, the super mega cereal professionals undid his efforts without blinking, and without anyone even realizing it at the time.
At the same time, people just make stuff up without any rigor, just take climate change denial and whatnot. So it's not that not being a professional is any positive indicator, either. I just think that if you do something others don't already do, they cannot authorize you, by definition. They don't know what they're even talking about, while thinking they do. Not their fault, not your fault, but if you don't do you, "you" will not be done, and the world will never know.
Of course, in the end the results have to hold up, my point is not to give license to "questioning everything" as code for just spouting non-sequiturs. But I wouldn't dissuade "randoms" from pursuing scientific fancies because the professionals got it, because frankly, they don't. No single person or group or class does.
The main point of the essay is that it takes all sorts, and that you shouldn't put all your eggs in one basket. IMO "professionals" are one single basket, given the way everybody and their dog is streamlined and made to conform these days.
Last but not least:
> However these 3 years of work in isolation, when I was thrown onto my own resources, following guidelines which I myself had spontaneously invented, instilled in me a strong degree of confidence, unassuming yet enduring, in my ability to do mathematics, which owes nothing to any consensus or to the fashions which pass as law.
[..]
> By this I mean to say: to reach out in my own way to the things I wished to learn, rather than relying on the notions of the consensus, overt or tacit, coming from a more or less extended clan of which I found myself a member, or which for any other reason laid claim to be taken as an authority. This silent consensus had informed me, both at the lycée and at the university, that one shouldn't bother worrying about what was really meant when using a term like "volume", which was "obviously self-evident", "generally known", "unproblematic", etc. I'd gone over their heads, almost as a matter of course, even as Lebesgue himself had, several decades before, gone over their heads. It is in this gesture of "going beyond", to be something in oneself rather than the pawn of a consensus, the refusal to stay within a rigid circle that others have drawn around one - it is in this solitary act that one finds true creativity. All other things follow as a matter of course.
-- Alexandre Grothendieck, "The Life of a Mathematician - Reflections and Bearing Witness" (1986)
In many ways, I like the spirit of this post. I've tried a bit of it myself, since I'm interested in questions that aren't easily packaged for a grant (in the current era/error).
BUT:
"And second, for the first time in human history, the tools of science are cheap, and knowledge is nearly free. Your laptop can store and analyze more data than Galileo could have even imagined."
A big chunk of the problem in biology is with the data being collected. Garbage in, garbage out. If you're just pushing bytes around rather than running real experiments, you are unlikely to find something truly novel. (I say this as a computational biologist who pushes lots of bytes around.) Good luck affording modern tools and data collection on your own dime. The bar for new discoveries is simply higher than in the past.
ARPA-H (DARPA for healthcare) was recently established, which will be interesting to watch. However, I'm pessimistic that anything will significantly change until the current system crashes and burns under its own weight. Nevertheless, incremental progress is still happening in biology even if it lacks elegance and involves a lot of missteps. Some of it even makes it to the real world (e.g., current trends in cancer survival rates).
"The bar for new discoveries is simply higher than the past."
Likely, but there is this point of reconfirming findings. There are so many bogus papers - if at least some of them can be exposed by "amateurs", that would be a real win.
I'm curious what your thoughts are on iNaturalist, as a computational biologist. I'm a frequent user of that platform and I often wonder about the biases involved with user-generated content like that (people probably have a higher propensity to upload observations of flowering plants than they do of ferns, or bacteria, etc.).
So I guess I wonder: is it a common resource for a computational biologist (or similar), or is it just considered a source of mostly garbage data?
iNaturalist sightings are fed into gbif.org and obis.org, the two major databases of biological sightings that collate everything.
From my experience in my niche, iNaturalist sightings generally agree with the 'professional' sightings in the same database (various scientific expeditions' sightings data). It's definitely valuable data!
Of course, it has some interesting biases, for starters, small species and fast species are generally not submitted because it's hard to take good photos. But scientific expeditions also have their own biases; if they use underwater nets, for example, they'll also select for larger animals.
See the sister comment by a_bonobo for a better insight than I have, since I'm primarily focused on biomedicine rather than ecology. My guess is that ecology (which I was peripherally involved in during grad school) is one of the areas where amateurs could have the biggest impact, largely because the field is so woefully underfunded and fieldwork requires more dedication and time than money, other than living expenses! Collaborating on an existing academic project might be a great way to get started, both for training and for learning about the open questions in the field.
Lots of interesting comments reminding me not to side 100% with something just because it was said by someone whose work I love.
I have a degree in Applied Psychology so I have some idea about how stuff works, although perhaps not as much as someone who's done a PhD.
Mastroianni didn't specify exactly what you should be investigating. Anything that involves hurting others is out of the question, be it medical research or psychological.
But I guess there is literally no harm in observing the growth of the plants in your garden, the behaviour of the pigeons that visit your balcony every day, in conducting online surveys to gauge attitudes and opinions (as long as the surveys are well-designed, a skill that can be learned online), or in putting a dead butterfly under a microscope and then sharing what you learned with the world.
We can share such "findings" and observations without claiming that our research is on par with that done by professionals. As long as the ones sharing these notes and the ones reading them understand that amateur exploration of an idea isn't a replacement for rigorous research, I see no harm in more people taking notice of the world around them and bringing other people's attention to it. Perhaps one of these amateur observations might be read by a professional, inspiring them to investigate it properly.
As for me, I'm thinking of publishing the findings from my dissertation on my blog. I'm not interested in conducting research, but I do enjoy reading research papers and explaining their findings. I guess that counts too.
In my master's degree I took a course where the entire grade came from replicating a machine learning paper. It wasn't a well-known one, but it still had many citations.
Well, it turns out that the paper leaves out many key details needed to actually replicate it, and the GitHub link was long since deleted (and not archived either). After many good-faith efforts to reproduce the results, we realized that the published result was probably due to the authors' choice of a lucky random seed. Or, more likely, rerunning the experiment many times with different seeds. In reinforcement learning that's not uncommon.
Even if I am wrong, I still believe that a part of peer review should be in a third party being able to replicate your results, at least in computer science. It’s expensive but imo would lend a lot more credibility to results.
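The lucky-seed effect is easy to demonstrate. Here's a toy Python sketch (purely illustrative numbers, nothing to do with the actual paper in question): if a training run's final score varies a lot with the random seed, reporting only the best seed makes a method look far stronger than its average behaviour.

```python
import random

def noisy_training_run(seed):
    """Toy stand-in for an RL training run whose final score
    depends heavily on the random seed (hypothetical numbers)."""
    rng = random.Random(seed)
    # Pretend the method's true mean performance is 50,
    # with large seed-driven variance around it.
    return 50 + rng.gauss(0, 15)

scores = [noisy_training_run(seed) for seed in range(20)]
mean_score = sum(scores) / len(scores)
best_score = max(scores)

# Reporting only the best run overstates the method's typical performance.
print(f"mean over 20 seeds: {mean_score:.1f}")
print(f"best single seed:   {best_score:.1f}")
```

This is part of why some RL papers now report results averaged over several seeds with error bars rather than a single run.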
I would rather normalize publishing negative results (including refutations of previously published positive results).
Reproducing every single submission is extremely expensive, but per the Pareto principle, no one is ever going to care about the vast majority of publications. It's the ones that lots of people care about that we want to be accurate. Given the right incentives and opportunities, people will seek to reproduce those on their own.
Regarding how slowly academia is adopting open science and other practices that would help fix the disastrous state of scientific publishing, I am starting to think that forming societies that compete with academia has strong potential to change things. You can't play by new rules inside a system governed by old rules. I have some skepticism about the approach described in this article, but I applaud the initiative; we need to form strongly cohesive groups that do science on their own terms and see what comes of it in the end.
This is an incredibly important idea. The scientific process has been corrupted by far too much pseudoscience and not near enough actual science. If the data say you are wrong, you are wrong. That doesn't mean finagle the data until you get the result you want, perform endless subgroup analyses, etc. If a theory is wrong, throw it out and start again. Quit salvaging the corpse of broken theory.
Did you ever do academic research in one of the softer disciplines? Because it sounds like you didn't.
I'm pretty critical of psychology (worked in neuro/cogsci for about 18 years). I reckon 95% of its output is wrong. But that's not because of pseudo-science. All theories in psychology are wrong. We've got no idea how the brain/mind works. All we can do is try to make sense of the experimental data that's available, come up with an idea that appears slightly better or has a different angle, contrast it with competing ideas in the same area, and keep defending it until it's utterly destroyed. If you would follow your rather rigid process, theorizing, and consequently experimenting, would stop overnight. It's not the way out of the swamp.
That also means that we should not use psychological theory as evidence for anything, and certainly not for policy making. Some of the applied research can be used though, if it's been established properly and repeatedly (another weakness in much of the social sciences).
I've read some abstracts in the last couple of years whose conclusions so blatantly try to fit the data into some preconceived scheme from the hypothesis that it's laughable.
There's nobody to talk to. People used to write long letters to each other, consider them deeply, and reply thoroughly.
If you could find a small band of people to talk to, and force all communication to be through hand written notes, I think you'd have something.
As it is, nobody wants to think deeply about anyone else's problem - they want a quick answer that aligns with the correct one that's in the book.
Finding the exact group of people who are amateurs at a similar level who can grow in some subject together sounds great, but I think we'll only find this once we get an AI that you can get good answers out of, one you can send clippings from a book or newspaper and that will understand them correctly, the same way a good teacher would.
>Finding the exact group of people who are amateurs at a similar level who can grow in some subject together sounds great, but I think we'll find this when we get an AI
This could take another 50 years.
I hope that they are successful and that they will grow into a social network for science. It's almost surprising that there is not already something like BookWyrm for subjects and skill levels.
The long letters have become blog posts, YouTube videos, and public social media conversations. Influencer content is usually just infotainment, but scrolling past it can yield some real gems.
I founded a startup that has run 10 missions to space - with an art background. It took the willingness to be the most ignorant person in the room, and to be okay with that. We've been able to make it work financially because we engage learners of all ages (K to grey!) around the world who all chip in to make the numbers work.
ExoLab-11 is scheduled for 2024. What might we discover together? What should be investigated next? https://magnitude.io
I believe amateurs can definitely contribute to science because advancements in knowledge often come from remixing existing ideas or incrementally improving something. I contend that the guy who built obstacle courses for squirrels in his backyard and shared it could be advancing science by inspiring a biologist to explore how animals solve problems.
There IS a certain amount of creativity required in science because you have to be able to form a hypothesis. A hypothesis is nothing but an idea - and the best ideas tend to come from people who are exposed to a lot of things that they can draw from.
It can be a little disheartening to discover some really obscure thing and then discover that Gauss beat you to it 170 years ago, over and over again.
The next problem you get is that, increasingly, discovery isn't the limiting factor; being able to usefully communicate about a discovery is. You might find all manner of useful stuff, but good luck getting it in front of someone who needs to know it. At least you won't have the weird academic pressures to obfuscate your work to make it sound more complicated when you publish about it, IF you can find a useful way to publish about it.
I'm a big fan of this approach. I even funded (a small donation) one study that started as a single person studying himself; then he replicated it on relatives, then on an online community, and then decided to get a defensible study done and raised funds from the community. It's truly amazing we can do this. We don't need to wait for some bureaucrat to approve a study or for a donor to give to a feel-good cause. It's like a direct democracy of science funding :)
Oh man, another one of these "science/peer review is broken" articles. Without peer review, you have the equivalent of whatever is popular on Twitter or other social media. Or total crap instead of possible crap... Peer review is far from perfect, but the alternative is even worse, imho. I think this is especially so in the hard sciences, but I will concede the author's point that peer review is not as useful a firewall against bad/wrong results in the social sciences.
Considering the replication crisis and the revelation that multiple schools have allowed for bogus data to slip through in numerous studies: I don’t see how you can pretend this would be any different than what we have now.
I think scientists can be very helpful in this endeavour by suggesting things that interested amateurs can do. Of course, this doesn't mean they shouldn't do anything else!
One example: as a hobby astronomer, I can help advance science by taking measurements. I've found information about this topic somewhat difficult to find. There is a telescope that makes it very easy (the Unistellar), but it seems overpriced to me (4500€ for a telescope with a small 114mm mirror!).
One of the biggest issues we have to solve is the indexing and verification of knowledge. This is where I hope AI and ML will go, but being able to say whether the data is unique or derived from another source and being able to catalog it is something that is entirely lacking.
Right now there is just too much shit out there and it’s difficult to know what the context is for it.
I slightly disagree - it's the verification and integration of knowledge. The integration part - that is, reading papers and assessing them compared to all the others you've read, is quite hard, but it performs the useful function of creating a weighted average in your mind of what is real, and where the boundary is. This is part of graduate training, but it's basically what review papers do, so there is a time lag built in. AI for this would be huge, and it seems like it's not far off. And I want it to cite sources and point me to the real references, please.
While I totally get where you are coming from, I don’t think integration is necessary in 99% of usage. Sure, it helps people build intuition in the field they are specializing in, and for those people it can be invaluable, but having a better way to recall things would solve most use cases.
IMHO the biggest disservice to students in much of the education system in the US (including some colleges and universities) is the focus on memorization vs. focusing on problem solving. The older I get, the more I reflect on how much opportunity was wasted on poor-quality teachers who just wanted us to memorize facts and be able to reproduce those facts on tests.
I don't need to memorize Maxwell's equations to know that they exist, what they describe, or how they can be applied to certain problem sets.
>Most of the professional science we produce right now is bad. Seriously, pick a paper at random and see how well it holds up.
This is utter BS. Occasionally, a high-profile fraudster is found out, but the overwhelming majority of papers in most disciplines will hold up very well under scrutiny (not that there couldn't be more scrutiny!). Scientists are humans though, and a certain percentage are scumbags. Given how scum rises to the top in private enterprise, I'm actually surprised we don't see more fraud from top administrators at Ivy League colleges.
Academia has always had to strike a balance between embracing new ideas and resisting quacks. It's not news that revolutionary ideas sometimes have a tough time gaining acceptance. Science provides a framework that makes accepting ideas that work almost inevitable, but sometimes it takes time.
Can millions of backyard scientists really move things forward? Only if they're actually scientists in the sense that they keep careful notes and can reproduce their findings. If you find something weird and can't reproduce it, why should anyone take your word for it? Science has to be replicable. You can't have some secret process that only works in your backyard. You need to be able to write down how you produced your result in a way that someone else can read it and then replicate it in their own backyard. Only then can it become useful.
Unfortunately, for some areas of science, there are certain requirements that are impossible to meet in your backyard. You can't experiment with creating anti-hydrogen in your backyard because you need a massive particle accelerator for that kind of work.
What is most direly needed is better lines of communication between academia and the public. Currently, most university laboratories' communication with the public is carefully managed by PR departments. You don't just talk to anyone you like about your latest work. The PR people must be involved, lest you say something embarrassing or fail to sufficiently puff up the university's image. This needs to end. Scientists are sometimes awkward communicators. Certain words sometimes mean very different things to them. We need to get the public used to letting the occasional gaffe slide and we need to get scientists out of the PR hustle sideshow.
We need scientists talking directly to the public, explaining what they're doing, and listening to screwball ideas from the peanut gallery. It lets the public know what's being done with their tax dollars and gives amateur scientists a chance to get up to speed on cutting edge research. It also bombards scientists with ideas. Even bad ones can be valuable prods in novel directions.
We don't need people saying that academia has failed and it's time for unpaid enthusiasts to pick up the mantle of progress, because that's not true and not how we move forward. We need to bridge the communications gap and unite the creativity of amateurs with the resources of professionals.
> the overwhelming majority of papers in most disciplines will hold up very well under scrutiny
I guess you didn't get the memo about the replication crisis?
But even leaving that aside, no, your statement here is not correct.
Even in "hard" sciences, many papers don't end up holding up under scrutiny. That's by design. The purpose of publishing a paper is not to document a result that is already known for sure to be right. It is to push the boundaries of current knowledge. Many such pushes don't pan out. We non-scientists don't see that because we never hear about those papers. We only hear about the ones that do pan out--or the ones that somebody hyped and marketed even though they hadn't panned out yet, and then got caught.
To take an example from a century ago: if you just look at Einstein's published papers on General Relativity, it looks pretty straightforward. He made a few tries all tending in the same direction and finally ended up with the right field equation, the one that we now know, a century later, to have extensive experimental confirmation. What you don't see is all the papers published by all the other physicists that were working on a relativistic theory of gravity--all of which were wrong and were dropped. But they were still published papers, because the standard of publication wasn't "this has to be right", but "this is worth considering to see if maybe it's right".
> We need scientists talking directly to the public, explaining what they're doing
We already have that. Scientists write books, do videos, have blogs. Unfortunately, scientists don't do a good job at communicating with the public, because they don't take care to distinguish what is actually established--by experimental confirmation and replication--from their own pet hypotheses that haven't yet been confirmed. (String theory in physics is a prime example: string theory makes no testable predictions at all, yet string theorists routinely talk about it as if it were proven fact.)
> and listening to screwball ideas from the peanut gallery.
No, we don't need that, because nobody from the "peanut gallery" (meaning people who are interested in science but don't have a good understanding of our best current theories) has ever, in the history of science, come up with an idea worth pursuing. Tales to the contrary are myths. For example, Einstein is often given as an example of a "peanut gallery" person who came up with a great theory; in fact he was nothing of the kind. He got a doctorate in physics right around the time he published his classic 1905 papers. He had a thorough understanding of the best current theories of the time in physics, and that was recognized by the top physicists of the time. He had close connections with top experimental physicists who kept him up to speed on the latest discoveries, and he worked with them to develop theoretical models for them (for example his work on Brownian motion and on specific heats). He was no "peanut gallery" person; he was a working scientist.
> No, we don't need that, because nobody from the "peanut gallery" (meaning people who are interested in science but don't have a good understanding of our best current theories) has ever, in the history of science, come up with an idea worth pursuing.
That's clearly not true (as you'd expect from such a broad absolute) and it's apparent in the countless scientists who were inspired to pursue research because of ideas taken from art or science fiction.
Even really fringe ideas about things like ghosts, telepathy, or remote viewing, have been considered ideas worth pursuing by scientists and all kinds of paranormal concepts have been widely studied as a result.
You might also count as peanut gallery types scientists whose interests were mainly in one area, but who came up with some amazing ideas in others. For example, Alfred Wegener was a meteorologist who had studied physics and astronomy but first got the idea for plate tectonics just by noticing how well continents seemed to fit like jigsaw puzzle pieces. When he eventually put the idea forward geologists rejected it.
> the countless scientists who were inspired to pursue research because of ideas taken from art or science fiction
Examples, please?
> Even really fringe ideas about things like ghosts, telepathy, or remote viewing, have been considered ideas worth pursuing by scientists and all kinds of paranormal concepts have been widely studied as a result.
And none of them have resulted in any actual science.
> When he eventually put the idea forward geologists rejected it.
Yes, because there was neither evidence that continents could move nor a known mechanism at the time for how continents could move. That's normal science. Things changed once enough evidence accumulated and once a feasible mechanism was proposed.
> And none of them have resulted in any actual science.
Not true. Many researchers around the globe have been involved in paranormal research. Remote viewing for example was studied at Stanford Research Institute as part of a government project (see https://en.wikipedia.org/wiki/Stargate_Project) and while the conclusions reached as a result of the research performed for that project and other similar efforts didn't confirm the existence or usefulness of psychic abilities, that doesn't mean that "actual science" wasn't done. Science is about getting us closer to the truth, and even a null result is a valuable contribution to that effort.
> Yes, because there was neither evidence that continents could move nor a known mechanism at the time for how continents could move. That's normal science.
It's also an example of someone from the peanut gallery having an idea worth looking into.
I don't think the expectation is that "Listening to screwball ideas from the public" means some rando will come up with extensive evidence or proofs that confirm a novel idea. I think it means that anyone with the curiosity and passion to think about a subject in depth can look at something and come up with a good idea even if they're not an expert in that area.
We can reject and ignore the ideas of non-experts and shame curious minds into silence, or we can embrace them and maybe even explore them or be inspired by them because if we do, we could discover something sooner than we might have otherwise.
Only one of these is any kind of potential scientific discovery: Szilard's hypothesis about the possibility of a nuclear chain reaction. However, the article vastly overstates the case: Szilard did not "solve" the problem of a nuclear chain reaction. He just guessed that a chain reaction involving neutrons might be possible if a suitable element could be found. It's also not clear how much of an inspiration Wells' novel actually was; in the novel, the "atomic power" is based on radioactivity, which was discovered a decade and a half before Wells published, and which does not produce any chain reactions. Szilard's guess was certainly inspired, no question about that, but he might well have had it without ever knowing about Wells' novel; learning about the discovery of the neutron would have been enough.
The other items are just new technologies based on already known scientific principles; none of them involved any new scientific discoveries.
> even a null result is a valuable contribution to that effort.
If it's a new null result, sure. But anyone who understood what was already known about fundamental forces at the time would have known that there would be a null result without having to do the research.
> It's also an example of someone from the peanut gallery having an idea worth looking into.
Huh? Wegener was a credentialed scientist; he wasn't "from the peanut gallery".
> Only one of these is any kind of potential scientific discovery
You've moved the goalposts here quite a bit from "nobody from the [peanut gallery] has ever, in the history of science, come up with an idea worth pursuing" to "their ideas have never resulted in a new scientific discovery"
Stories of technologies that didn't exist and weren't at the time scientifically possible have inspired people to try to make fantasy reality and science has certainly progressed as a result.
> If it's a new null result, sure. But anyone who understood what was already known about fundamental forces at the time would have known that there would be a null result without having to do the research.
That isn't how science works. Why assume that psychic abilities, which seem to defy everything we know, would be dependent on fundamental forces? I'm certain that physicists like Robert G. Jahn and other researchers were well aware of fundamental forces but still thought it was an idea worth pursuing, and of course many of the null results they got were new, since the researchers were often doing pioneering research into topics that hadn't seen serious scientific study.
If the US government already had a mountain of prior scientific evidence showing that psychic abilities were non-existent or ineffective they wouldn't have poured such massive amounts of money into that research. What they did have was evidence that other countries (and the USSR in particular) were already doing this kind of research, but much of that was kept secret.
> Huh? Wegener was a credentialed scientist; he wasn't "from the peanut gallery".
I acknowledged that it might be stretching your definition as a scientist exploring a topic outside of his area of expertise.
> You've moved the goalposts here quite a bit from "nobody from the [peanut gallery] has ever, in the history of science, come up with an idea worth pursuing" to "their ideas have never resulted in a new scientific discovery"
I haven't moved the goalposts at all. "Worth pursuing" means "results in a new scientific discovery". That's what you are claiming: that listening to the peanut gallery will give us some new scientific discoveries. I'm simply pointing out that it never has up to now.
> Why assume that psychic abilities which seem to defy everything we know would be dependent on fundamental forces?
Because everything is dependent on fundamental forces. That's why they're called fundamental. Again, read the Carroll article referenced elsewhere in this discussion.
> If the US government already had a mountain of prior scientific evidence showing that psychic abilities were non-existent or ineffective they wouldn't have poured such massive amounts of money into that research.
Bad example. The US government, like all governments, does lots of things that are stupid and guaranteed to fail.
> a scientist exploring a topic outside of his area of expertise.
It wasn't outside his area of expertise. Many scientists in the course of their work gain expertise outside the narrow area in which they are credentialed.
It is also not impossible for a person with no scientific credentials to become an expert in a scientific field. But if they do that, they are no longer a member of the "peanut gallery". They are a working scientist. A historical example is Michael Faraday, who never had any scientific credentials at all, but made himself an expert in electricity and magnetism by intense study and experimentation.
> We can reject and ignore the ideas of non-experts and shame curious minds into silence, or we can embrace them and maybe even explore them or be inspired by them because if we do, we could discover something sooner than we might have otherwise.
I don't share your optimism here, and I don't think we're going to reach agreement on this point.
Yeah, even some of the uninteresting results we got have been classified. I wouldn't be surprised if some research is still ongoing in secret. I remember reading about this: https://time.com/4721715/phenomena-annie-jacobsen but that seems like it's more about intuition and gut feelings than magic.
I love this. One of the ideas I've been noodling around in my head is that we can brute force our way through really difficult problems. There are a lot of fricking people on earth. The biggest companies doing research have what, a couple hundred thousand people?
Imagine a 10 million people attempting to solve a single problem. Or a 100 million. Or half a billion people! The one that pops into my mind is climate change. Imagine if we had millions of different experiments going on in parallel and millions of people tinkering on stuff to fix the world.
You inhabit a global economy that was imagined by students in dorm rooms and couch surfing launch failures.
Working people today have the wealth and leisure of every lady and gentleman scientist who ever made a significant contribution - and we have the sum of human knowledge at our fingertips. You have running water, flushing toilets, and food delivery. The only thing standing in the way is a commitment to curiosity and competence. The beauty of science is that it rests on the evidence of reproducible experiments. All models are wrong, most theories are bullshit, and the people charged with progressing science for our species learned it in a couple of lectures a week over three years or less. You can do valuable and edifying work in less time than it took to fully apprehend the disappointment of the Game of Thrones finale, if you'd only watched 3blue1brown videos instead.
Prestige is what they offer you when they don't want to pay you what you are worth, and their esteem isn't worth the envy and sabotage it costs. If you are curious about something, find out. The one thing I can guarantee is true about this world is that it forgives you when you prevail, and when you fail, nobody cares. I say this to myself as much as anyone.
“Fall in love with some activity, and do it! Nobody ever figures out what life is all about, and it doesn't matter. Explore the world. Nearly everything is really interesting if you go into it deeply enough. Work as hard and as much as you want to on the things you like to do the best. Don't think about what you want to be, but what you want to do. Keep up some kind of a minimum with other things so that society doesn't stop you from doing anything at all.”
― Richard P. Feynman
Regardless of the source, these (and yours in the same vein) are some wonderful ideas.
Working people may have the wealth, but not the leisure. Working people are, by definition, working -- often during the most productive hours of the day. Scientific or artistic pursuits are usually relegated to the off-hours: the early mornings, evenings, and weekends, squeezed between commuting, meal preparation, child care, and housework. Lady and gentlemen scientists of the past were either wealthy or supported by patrons, and had at least eight more productive hours a day (on average) than today's hobbyist scientist.
True to some extent, but middle class people in their 20s, before having kids and after their kids are in high school, have a ton of leisure time. It's just usually spent spending their entertainment dollars.
> the people charged with progressing science for our species learned it in a couple lectures a week over three years or less.
I beg to differ. Even at the undergrad level this feels naïve. There's much more learning going on outside the lecture, and I'd wager that's where most of it happens. For grad school this is definitely true. You learn the most when you stop taking classes, and that period is generally associated with extremely unhealthy work-life balances. I'm in my 5th year of grad school and every week I read probably a few papers in depth and a dozen or more at lighter levels. This is not counting all the time I spend writing code, reading other people's implementations, or reading math books and blogs on the subjects. The researchers I know who have graduated also do a lot of similar things, just not at as high a rate, and they have a healthier (but I'm not sure I'd call it healthy) work-life balance.
This inane kind of thinking only promotes the absurd armchair expertise we see where people think a few YouTube videos by communicators give them sufficient knowledge to arrogantly argue with domain experts. I'll take pretentiousness over arrogance any day. The Internet is dominated by these armchair experts, who group together in their bubbles and push out anyone who disagrees with the "obvious" answers. They commonly criticize experts for not seeing something "obvious" which the experts actually accounted for, but seeing that would require reading their actual publication rather than the telephone game that is science communicators.
No one is stopping you from getting domain expertise. No degree required. PhDs demonstrate self learning every day. But if you can't walk the walk then don't talk the talk. Shit's a lot more complicated than you think. You're holding us back by trivializing things. It just encourages substanceless bickering between parties who are both severely misguided, instead of problem solving and working towards actual resolution.
Interesting divide. I prefer someone who is arrogant over someone who is pretentious because at least you know that while their arrogance may be ignorant and fatuous, at least it is sincere. You can fix arrogance with some sudden humility, but pretension defends itself to the death because it knows it is an impostor and it is fighting for its survival.
I have some sympathy for the beleaguered experts who respond to cranks, but if you are in the professing business, you don't always get to choose your audience. Who cares if someone criticizes an expert? Criticism isn't science. Science means reproducible results or GTFO. The only question anyone needs to ask of their amateur interrogators is whether they are pursuing truth, or just acknowledgment and prestige. If it's the latter, you can just say, "go to school."
As a counterpoint, at least as far as 'fundamental discoveries' go, see the article by Sean Carroll, "The Laws Underlying The Physics of Everyday Life Are Completely Understood":
The fundamental stuff is all known, except for what you don't encounter in everyday situations, which will as a consequence be harder for hobbyists to discover too.
The laws are well understood, but not universally, not even among credentialed experts. And implications of them remain to be discovered. And on top of that, their predictive power is limited.
Is there any point in inventing something new, investing your own time and money, if cheap knockoffs can be produced overseas and imported with no consequence? Patents just aren't enforceable in the globalized world of today. Not to mention monopolies stifling potential competitors from arising.
I have relatives who have inventions, but they live in Russia and basically you get nothing in return for patents. How difficult would it be for a foreign citizen to apply for a patent in, let's say, the USA?
> The simplest generator consists of just a coil of wire and a bar magnet. When you push the magnet through the middle of the coil, an electric current is produced in the wire.
Here is the crazy thought:
What if we took other materials and tried using them the same way as in the generator? Does that produce another form of energy (not electric current) but something totally different which we can't even detect? What's to say that only electric current can be generated, and not some other forms of energy too?
What happens if we push a crystal through the coil? A rock which isn’t magnetic? A piece of wood? What effect does that have on atomic scale that we can’t even detect or measure?
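For what it's worth, the coil-and-magnet setup in the quote above is well described by Faraday's law: the induced EMF equals minus the number of turns times the rate of change of magnetic flux through the coil. Here is a minimal sketch of that relationship; the Gaussian flux model and all the numbers (turn count, peak flux, pass time) are made up purely for illustration, not measurements of any real generator:

```python
import math

N_TURNS = 100          # turns of wire in the coil (assumed)
PEAK_FLUX = 1e-3       # peak flux through one turn, in webers (assumed)
PASS_TIME = 0.1        # characteristic time of the magnet's pass, in seconds (assumed)

def flux(t):
    """Flux through one turn as the magnet passes the coil's center at t = 0
    (a Gaussian pulse is a crude stand-in for the real flux profile)."""
    return PEAK_FLUX * math.exp(-(t / PASS_TIME) ** 2)

def emf(t, dt=1e-6):
    """Induced EMF from Faraday's law, emf = -N * dPhi/dt,
    with dPhi/dt estimated by a central finite difference."""
    dphi_dt = (flux(t + dt) - flux(t - dt)) / (2 * dt)
    return -N_TURNS * dphi_dt

# A magnet sitting still at the center changes no flux, so no EMF;
# approaching and receding give opposite-sign EMFs.
print(emf(0.0))
print(emf(-0.05), emf(0.05))
```

This is also why a non-magnetic rock or a piece of wood pushed through the coil induces essentially nothing: with no magnetic moment, the flux term stays near zero, so dPhi/dt and hence the EMF do too.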
You can't just say "other forms of energy". That's a massive hand-wave. The electromagnetic field can be detected and manipulated experimentally. It also has reasonable theories of explanation and mathematics to describe its function. The theories made predictions that we later verified with experiments (meaning the theories were not just post-hoc descriptions).
A magnet is not special; it is just a material where lots of the magnetic domains are aligned in the same direction such that the overall magnetic field doesn't cancel to zero at a distance. Not all molecules have much of a magnetic moment themselves either so even if they were to align that doesn't mean they'd exhibit much of an overall field. Water falls into this bucket. It takes super strong magnets and super sensitive detectors to make MRIs work precisely because water's ability to be magnetized is so insanely small.
Also crystal is an arrangement of solid matter, not a thing unto itself. It just means the electromagnetic charges and shape of the molecule make the atoms want to orient in 3d space into a regular pattern. Any molecule that self-organizes into a repeating geometric pattern as it transitions from liquid creates a crystal.
Fwiw, that's the mechanical basis of many rituals and sacraments that one day will be understood by science. For example, the eucharist ceremony is a carefully designed sequence to produce a certain uplifting current, vaguely perceptible in some cases, but not measurable today.
For example, we know that there are these weird neutrinos, but if we were to find a method of constraining them within a pipe, the produced current might have unusual properties. Pretty much any foo flowing within another type of foo will create a new type of current.
Or, your 'certain uplifting current' is just activation of a neural pleasure center, one that humans are predisposed to as the result of tens of generations of 'selection' in which individuals who didn't enjoy the ritual in some way or another were not as successful at passing on their genes, whether through lack of bonding with others in the ritual group or through downright exclusion from it.
This sort of materialistic prejudice is common today. Science is about sticking to proven facts and using strict logic when deriving new facts, but refusing to even consider new ideas isn't science, it's lowly prejudice and adherence to groupthink.