Ramón y Cajal was a contrarian when this was written, but he had great timing. In the late 19th century, it was fairly popular to believe that all the laws of physics had already been established—remaining progress would come from improvements in experimental methods. There's a famous "physics is over" quote misattributed to Lord Kelvin (actually said by Michelson, the guy who measured the speed of light).
A few years after this was written, Planck proposed energy quanta. And in 1905, Einstein published his four Annus Mirabilis papers, covering the photoelectric effect (applying the quantum hypothesis), Brownian motion, special relativity, and mass-energy equivalence.
As typical with contrarians, Ramón y Cajal said some things that held up well and others that didn't. In the same book "Advice for a Young Investigator" that this excerpt is from, he also gave his view of theorists: "Basically a theorist is a lazy person masquerading as a diligent one because it is easier to fashion a theory than to discover a phenomenon"!
Well, it's true that we can't really tell how serious he was being. And it is worth remembering he was a neuroscientist who studied neurons. He was probably thinking of people who made complex theories about "how the brain works" without ever designing experiments to test them, not physics theorists.
Didn't we recently confirm gravitational waves by observing a pair of merging black holes, originally theorized by Einstein 100 years ago? I think even Einstein would agree that it was much harder to discover than to theorize.
The experiment would never have been done without the theory, and a long line of experiments suggesting that Nature and the theory had a lot to do with one another, first. We knew that gravitational waves, or something very much like them, happened due to the binary pulsar work by Hulse and Taylor in 1974.
Einstein's advance was quite spectacular, and early.
Without the theory, how well do you think the funding submission for the experiment would have gone? Two laser interferometers a couple of kilometres long in specially built tunnels, with super sensitive custom equipment costing millions of dollars running for years just on the off chance? Sure, here’s the cheque!
So their method for finding the signal was to calculate the signal, then subtract it from the data and see if the residual noise looks like noise...how is that good science? It seems like it would be too easy to make your data fit the theory.
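For what it's worth, the procedure is more defensible than it sounds: the waveform is predicted ahead of time, and "does the residual look like noise?" is a testable statistical question. Here's a toy sketch of the idea (the chirp template, noise model, and the lag-1 autocorrelation "whiteness" check are all my own stand-ins, not the actual LIGO pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
t = np.linspace(0.0, 1.0, n)

# Toy "chirp" standing in for the predicted waveform (my invention,
# not a real general-relativity template).
template = 0.5 * np.sin(2 * np.pi * (20 * t + 40 * t**2))

noise = rng.normal(0.0, 1.0, n)  # detector noise, idealized as white Gaussian
data = noise + template          # observed strain = noise + signal

residual = data - template       # subtract the calculated signal

def lag1_autocorr(x):
    # For white noise this should be near zero; a coherent signal
    # drags it away from zero.
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

print(lag1_autocorr(residual))   # consistent with noise
print(lag1_autocorr(data))       # signal present: clearly not white
```

The key point is that the template is fixed before looking at the residual, so a bad theory would leave a residual that visibly fails the noise test rather than quietly fitting.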
There is a cool documentary about how Einstein was trying to prove relativity theory, and it involved a lot of work, a special telescope, traveling, war... I think it succeeded on the third attempt.
Imagine if he had died before proving it...
My point is, he didn't stop at the theory; he also designed an experiment and worked hard to get it proved.
Cajal's statement clearly doesn't apply to him.
I'll try to find the video, or if someone remembers please link it.
I would say that successful theorists are exactly those who discover phenomena. Saunders Mac Lane is perhaps the epitome in mathematics of someone who was guided by phenomena.
This is why category theory was not discovered, it was reverse engineered! The reverse engineering steps were:
3. Natural transformations
2. Functors
1. Categories
Edit: Of course, when he said theorist I think he meant people who don't experiment physically.
Yes, but that would be a more generic description. All of mathematics abstracts previously partially known knowledge.
When we went from 1 coconut -> the set {1}, then we were being really abstract for the times.
But I think your point is that category theory synthesises group theory, linear algebra, topology, etc. into one concept, which was very much the spirit of the origins of category theory. However, Mac Lane and Eilenberg thought that their diagrams were just an aid to mathematics (much like a Venn diagram, Cayley diagram or a Feynman diagram). But when they realised that natural transformations are so ubiquitous and fundamental, they realised that their diagrams were not just a useful shorthand, but would in fact lead to a whole new type of mathematics. When people thought (not Mac Lane though) category theory was "abstract nonsense" they were making this mistake of thinking that the diagrams are illustrations rather than concrete mathematics.
In the same way, you might think that {1,2,3} is just an illustration, but in fact it is a rigorous shorthand for a very specific set.
The real meat behind category theory is things like natural transformations and adjunctions. But to get to category theory from there, you do a kind of reverse engineering.
I would say theories and discoveries go hand in hand, or maybe leapfrog each other. They both travel in the direction of the fog at the edge of our understanding.
And like all such books, almost certainly it is wrong.
Backwards time travel or FTL are not a measure of progress. Even within the field of "fundamental" physics:
1. There are lots of things we do not understand in cosmology (the cosmological constant, the nature of dark matter, matter/antimatter asymmetry, force unification at very high energy scales, gravity at high energies, etc.). Each of those could potentially revolutionize our understanding of the universe.
2. There are lots of things we do not understand at small scales (the Casimir effect/vacuum energy relationship, Planck-scale effects, why the particle soup, gravity at very small scales, the reason behind asymmetry in helicity/weak interaction and other parity/symmetry related effects, doing "useful" calculations with the renormalization group, etc.). Each of those could potentially revolutionize our understanding of the universe.
There is also a lot to be done in our understanding of computing (as in, nature of computation)
1. Computation related problems (Church-Turing thesis, novel algorithmics + computing platforms such as quantum computing). Is approximately correct/probabilistic computing a loophole for getting essentially/mostly correct results in P time for NP-hard problems? Nature of AGI/what enables sapience when doing computing.
Of course as we go into "less fundamental" sciences like chemistry/biology/etc then the amount to be learned is just overwhelming, we truly know very little.
Not a physicist so excuse the ignorance, but do we understand gravity at all?
I mean afaik we can observe and predict its behavior, but do we understand what underlying force causes it, and whether it potentially has a counter force?
Well, at sufficiently low energies and sufficiently large length scales we can say gravity is how spacetime curves under the influence of energy (principally from gravitational mass; another mystery: why are gravitational and inertial masses proportional to each other?). This picture is called general relativity and it works very well in a wide range of regimes (from the motion of planets and galaxies to the merger of black holes). However, there should be a force-carrier description (quanta called gravitons) at small enough scales or very large energy densities, where we fail to make the math work (because we expect certain properties/symmetries from gravitons).
So the word "understand" is a bit loaded. GR is a certain understanding of how gravity works, but it is not a "quantum mechanical" understanding.
The difference between the problems physicists were tackling at the turn of the 20th century and the ones we're trying to tackle today is that we don't have the technology or the resources required to build the kind of experiments that would allow us to study these phenomena at a meaningful scale and test out our hypotheses.
But the same was true 100, 200, 300 years ago. There were always limitations of some sort, mostly technological. I'm not saying that we'll build solar-system-size colliders any time soon, but give it 50-100 more years and we should be much, much more sophisticated in terms of what we can do in space and back on Earth.
This is not clear to me at all. I don't think "bigger particle accelerators" is what we need, and lots of the questions I posted are not necessarily solvable by particle accelerators. We do need new experiments, but probably in a sense of "new ways to observe the universe" rather than "old ways to observe the universe scaled bigger."
I personally believe a lot of that stuff cannot be further studied unless we are able to divert solutions to other problems in our society first. I'm saying that we need to have things like mass quantity sustainable energy, significant automation, global unification and standards, higher minimum education levels.
I'm saying: imagine 50% of the population works in blue-collar general labor or semi-skilled labor fields. Now, in this hypothetical world, all those jobs are managed by autonomous robots. Also, we have green power that is sustainable, storable, sufficient for even double the population, and can be held at high densities in low volumes. So there are now innumerable sectors within the economy that people themselves don't need to learn. That leaves more time for people to take extended amounts of time to learn and study. I mean quite literally a Star Trek "post-scarcity world" in a lot of ways. People use time to further themselves and spend it on cultural or scientific endeavors. Life is no longer about struggle and survival, since money would clearly have no value if anything and everything can be made or consumed for free. It's really interesting to think that the only "conflict" that would exist is between people trying to min-max life in terms of achievement. There would be no achievement in religion, money, or ownership, since everybody can do it.
Ultimately what I'm saying is that a lot of our advances are contingent upon other sectors becoming automated, allowing more people to get into academic sectors.
The 50% of the population who are blue-collar workers aren't going to retrain as particle physicists or theoretical computer scientists once they get their UBI.
I mean, we could hash out hypotheticals all day. I'm just saying that the needs of the economy would shift to where a PhD basically becomes the new high school diploma. In essence, entry-level jobs would be stuff that today would legitimately require a master's or higher.
I'm not saying that 100% of that 50% will be employable in this world. I'm saying that over time that level will inevitably become the bare minimum. The way I see it: 150 years ago, the idea that we could get all children educated to an 8th-grade level would have seemed impossible, yet here we are, even making a high school diploma the bare minimum.
Eventually your masters thesis will be an area for you to study and pursue to make an attempt at furthering society.
I am inclined to say the real problem is that smart people don’t have kids. But we’ll probably figure out how to make all babies intelligent before that becomes a problem. Incidentally, the problem you mention will also be solved in the same way.
genuine question - what do you mean by smart people? High IQ? Deep knowledge about mankind's place in the Universe / natural order? Or their ability to maximise their own personal happiness over their lifetime? If the last, then personal experience would suggest that having kids makes you a happier person (once they are sleeping through the night). YMMV.
Pretty much any study of the past century and any projection from reputable sources says that populations reach equilibrium and stop growing as the level of education, in particular education of women, rises.
This is a non-problem that has always taken care of itself in any developed country and we have no reason to believe it will not take care of itself in the developing world as well.
The UN for instance does not believe there will be 10 billion humans on earth ever (where "ever" means "as long as projections have any value").
I wouldn't quite call world population growth a non-problem, considering it is already a problem. The future predictions may well turn out correct, and have some solid reasoning behind them, but the numbers still look fairly crude to me. Plots of historic growth rates are bouncing up and down, while the prediction is they will suddenly turn a corner tomorrow and drop monotonically to zero and all will be peachy. How often has that happened? Considering the current record-breaking population numbers, the ultimate answer is it has never happened, at least not permanently. There's a lot of risk in betting on things to just work themselves out. Development can actually work against us in a sense, as new food production technology and cures allow faster-than-ever growth in new regions and populations.
Below replacement fertility is a recent phenomenon (last couple of generations). Evolution works in multiples of generations. We’re at the very beginning of evolving resistance to modernity.
Education, quality of life and equal opportunities for women. As these increase, the birth rate goes down. Most countries that score highly in these areas actually have below replacement birth rates and only grow in population due to immigration.
Why are time machines and Starship Enterprises a good standard of scientific progress? Why not immorality, sentient computers, and stuff like that, which is equally science-fictiony but possible?
"Immorality" is not only theoretically possible, it has been thoroughly mastered by many ;)
I think there's a noteworthy distinction between science and its applications. In my mind, science is about understanding the world, whereas fields like engineering/medicine are about their practical applications.
I do think that there's a tremendous amount of progress that could be made in sciences like Biology, Psychology etc. But I would draw a distinction between the things that fundamentally change the way we understand the world, vs building really cool toys that we would love to have.
I think the point of the "End of Science" argument is that any discoveries in biology (and physics) will merely be elaborations of basic principles that already exist, rather than elucidations of any as-yet undiscovered principles.
For example, CRISPR. Many people think that CRISPR was an amazing discovery, but really, it's just a biological system that has existed for a long, long time, where a collection of smart people realized that with some engineering it could be used for effective genetic modifications with high precision and no need for engineering custom proteins to bind specific sequences. That seems fundamentally different from, for example, the experiments that established that DNA is the molecule of heredity when nobody had an idea how DNA could encode information.
DNA as the molecule of encoding information for heredity is also "merely" a discovery of an ancient biological system. However, it's not as though physics predicts the existence of DNA specifically, or CRISPR, yet these things are important for understanding biology, and in the case of CRISPR it's been turned into a technology that humans can use. Which is why I have a lot of complaints about the commonly held belief such as this one:
> merely be elaborations of basic principles that already exist, rather than elucidations of any as-yet undiscovered principles.
This is not a meaningful or thoughtful examination of even chemistry. The 3D structure of proteins is "merely" an elaboration of physical properties, yet "physics" doesn't have the tools to make much progress on solving the 3D structure of a sequence of amino acids, despite it being a purely physical process.
Is the world "physical" in the sense that there probably aren't new fundamental forces of nature? Of course. That doesn't mean that physics helps us understand much of the physical world, because the "elaboration" in "merely an elaboration" has nothing to do with what physicists or other scientists consider "physics."
No, the elucidation of the structure of DNA isn't merely a discovery of an ancient biological system. It was the recognition that the structure was formed by antiparallel strands encoding information in a reversible molecular form that represents a real level-up in human understanding of the universe. That's the whole point of that throwaway sentence at the end: "It has not escaped our notice (12) that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material."
Also, we can simulate protein folding well enough from classical physics and quantum approximations such that "rapid two-state folders" are considered solved. That was a major outcome in the course of my career, to which I contributed significantly :)
If we can simulate protein folding well enough, why was the Google announcement last year such a big deal?
I worked in protein folding over 30 years ago at EMBL, and have loosely followed it since. I could easily have been led astray, but I was absolutely not under the impression that we can do this even close to "well enough".
I work at Google and did protein folding before the Deepmind (not Google) announcement.
The CASP results weren't really a big deal. It was a modest advancement using techniques that were already spreading throughout the community, coupled with a skilled team that understood the score metric very well.
Two state folders can be reversibly folded using empirically determined force fields (two state folders basically go from "any totally unfolded configuration" to "fully folded single structure" in milliseconds); we can just run simulations and let the (quantum-inspired, classically embedded) physics do the folding, or we can use other techniques, like Rosetta (monte carlo plus lots of empirical data from known structures), or evolutionary data-based techniques (like Deepmind and others used).
If I'm filling in the blanks here correctly, what we're still far away from is determining the folded configuration of an arbitrary polypeptide. Is that correct? Or have there been real breakthroughs there? 10 years ago, when I last checked in with some folk I knew from EMBL, this still seemed to be a complete pipe dream.
Is there a paper that describes the parameters of the peptide structure that go into the "physics do the folding" part? When I was at EMBL, I was focused on using local hydrophobicity to see how predictive it was (not at all). Is the physics model operating at this level, above it, or below it?
> But I would draw a distinction between the things that fundamentally change the way we understand the world, vs building really cool toys that we would love to have.
It may very well be that understanding emergent phenomena at the appropriate level of emergence will turn out to be vitally important, and that reductionism (while undoubtedly useful in many scenarios) is impeding our understanding of emergent phenomena like consciousness and evolution.
Obviously, we are discussing immortality of non-human species.
However, we more or less understand that morality of larger lifeforms is encoded in our DNA (e.g. telomeres). Mortality seems to be a defense against cancer.
There is no particular reason that a human needs to grow old, except for the accidents of evolution.
They do say “laughter is the best medicine”, and morality is obviously funnier than mortality. No mere telomere can tell me my mortality, as long as I’m defending myself by laughing at spelling errors.
Did I tell you about the time, in 7th grade biology class, the kid sitting next to me was asked to read something aloud from the textbook about ”organisms”, but of course she said “orgasms”. She might have died a little right there, whilst the rest of us got her energy recycled as a power-up.
Mortality is not a defence against cancer. Mortality is a tool that allows species to more speedily adapt to changing environment. Sexual reproduction is another such tool, which ensures enhanced diversity of future generations, so some of the descendants will adapt better and leave more better adapted descendants.
Btw, in this sense cancer is just another tool, that limits lifespan and ensures generations change. Of course, it didn't appear as such, but most species have no natural incentive to develop a resistance to it.
>"Btw, in this sense cancer is just another tool, that limits lifespan and ensures generations change. Of course, it didn't appear as such, but most species have no natural incentive to develop a resistance to it."
Not exactly, and I believe there are a rare few species that are much, much more resistant to it and as such have become subjects of research (through TP53 in elephants or P16 and P27 in naked mole rats).
At the end of the day, this natural incentive depends on when the cancer can appear (generally right away, at every step of the cell cycle) and how likely it is, which probably depends on the turnover and amount of cells of a particular type or in the organism overall (which one would assume increases as the organism grows), and additionally on how likely it is to inhibit reproduction as it grows.
As it stands, I'd say that while cancers are not too inhibiting on this front (for example, humans are most fertile at a relatively young age, well before most cancer occurrences become a problem; we've just extended our lifespan quite a bit), they still can be (kids can die of cancer too), and thus an evolutionary incentive against them, however minor, is present.
I just said time machines and faster than light travel are unlikely to ever occur; I do think life extension and AGI are not technically impossible, but rather inevitable (my training is in biology, and I work on machine learning).
For the first two, we'd need to have a radically different physics than the current model, while the last two, they seem like reasonable extrapolations from modern technology.
The nature of a paradigm shift is that it recasts all the existing laws of nature as special cases of a more powerful, more general model of reality.
Who knows, maybe we'll find that we actually are living in a simulation and then figure out how to hack the matrix. The idea of "travel" and "time" would become obsolete then; you'd just poke new values for your wave function into the simulation's RAM.
This seems implausible for many reasons, it seems more likely that if we're living in a simulation, we'll have trouble figuring out how to proloxify the feeblegarps.
Yes. The idea is that if we "break out" of whatever simulation we purportedly exist within, none of the concepts in the enclosing universe would make any sense to us.
(I happened to be watching How A Plumbus is Made when I wrote the comment, btw).
A radically different paradigm of physics would result in technologies that can't be imagined in the present paradigm of physics. For example nobody had even remotely guessed that transistors were possible until well in to the development of solid state theory.
we have plenty of unrealized technologies that can be imagined in the present paradigm but that we're not exploiting yet (see for example recent advances in 2D topological materials).
(based on my understanding of transistors, the first ones were conceived before the theory for them existed, and the first ones were built around the same time the quantum theory for them was expressed).
You don't send a message if the receiver can't understand it.
People need to have a point of contact with that extrapolated future for it to become a popular science fiction work; even the culture in far-future fiction is usually pretty similar to our own (or at least, to that of the moment when the book was written).
Present works (not the ones with universes inherited from older ones) are updated to our current expectations of the future, so you have sentient computers and other "possible" technology, and in 50 years we will probably have a different set of standards, not something as naive as what stood as possible 50 years before.
Why not better understanding of complexity and complex systems?
The assumption that any of these new technologies would be desirable and create a net positive effect in the world sounds very naive after seeing the results of something as simple as "connecting the world".
We need to have a better understanding of how new technologies interact with our existing technologies (including institutions and communities) and our environment, or else we risk (further) destabilizing everything that has allowed us to get this far.
personally I believe that improving the techniques we use to study complexity is the most important thing in science today. In many fields we are now drowned in tons of high quality data, yet scientists struggle to store, process, and turn that data into knowledge.
The origin of life, why life is evolvable, the evolution of the complexity in a eukaryote cell, and multicellular consciousness/intelligence are to me big unanswered questions in science.
Although life can be reduced to chemistry, and chemistry to physics, I feel we are missing some high-level self-organizing principle of the universe.
Sorry, could you explain why you think life is not evolvable exactly? Assuming you take the existence of a single celled organism with DNA as a given (we still don't know the origin of life), evolution gets you the rest of the way rather nicely. Notably, "life" usually contains the assumption that it is evolvable as part of the definition. If the children of the organism can't adapt to the environment, we don't consider those things to be "alive" (e.g. a 3d printer that can print a copy of itself isn't alive).
As for the origin of life, all serious scientists are onboard with abiogenesis, though we don't know the mechanism. Every year, new science comes out showing how microfluid droplets with organic compounds + the natural environment, can result in behavior that looks similar to a cell.
For example, this one shows fairly interesting "cell like" movement without any life, and there was another last year that proposed a possible abiogenesis of cell walls through evaporation and organic compounds that suck up large molecules into the interior when evaporated.
We know that life is evolvable because life exists and we know the biochemical mechanisms involved (DNA + cellular biochemistry).
Evolution implies a relatively smooth path through "DNA space" from, say for example, an early single cell eukaryote to a mushroom.
However the search space is enormous. Even if we account for billions of years of evolution and trillions of evolutionary experiments each year, a simple random walk with selection through DNA space should go nowhere because of the numbers involved. The curse of dimensionality[0] means there has to be some other principle of nature to make the search space yield a path from one viable life form to another. The search space of life would have to be 'smooth' in some sense. That 'smoothness' is something we don't understand.
If DNA space is just 256 bits (as a dramatic simplification), then 2^256 is a very very big space to search just by chance [1].
Now imagine a space orders of magnitude bigger.
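To put rough numbers on that, here's a back-of-the-envelope sketch (the trial budget is my own deliberately generous guess, not a measured figure):

```python
# Deliberately generous budget: ~4 billion years of evolution,
# a trillion "experiments" per year.
trials = 4 * 10**9 * 10**12     # 4e21 trials

space = 2**256                  # size of the toy 256-bit "DNA space"
fraction = trials / space       # proportion of the space ever visited

print(f"{trials:.1e} trials, {fraction:.1e} of the space explored")
```

Even with that budget, the fraction explored is smaller than 1e-55, which is the sense in which a blind walk "should go nowhere."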
This is an interesting way of framing the idea, but it's not a question of traveling in DNA space from some point (eukaryotic cell) to another specific point (mushroom): that would be very difficult in the way that you're talking about.
Imagine flipping a fair coin 256 times. The particular outcome ('HTTTTHHTTTTTTTTHTHHHTHHTHTHHHHH...') is extremely difficult to replicate, but getting any outcome is very easy: just flip the coins again. In this case we also have a lot of selection bias: all the paths through DNA space that don't result in intelligent life don't result in anyone having this conversation.
Regarding the curse of dimensionality: it's a statement about the available data rapidly becoming sparse in high dimensional spaces. It doesn't really say that high dimensional spaces are necessarily sparse, it's just hard to "fill" them in with the amount of data available.
Comparing evolution to a random walk with selection doesn’t quite sit right with me. In practice much of evolution occurs via gene duplication and recombination. At that point you can evolve complex changes very quickly. Evolving novel phenotypes is much easier if your starting material is an existing functional gene. Many motifs can be reused and reapplied.
Comparing a mule with its parents shows how much novelty can be produced in a single generation (in this case an evolutionary dead end, of course).
It's not really a random walk in any way. Having designed artificial evolutionary systems, even if you screw up the implementation and the search space is really bumpy, it usually still makes progress, albeit very slowly.
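A minimal illustration of that point (a toy hill-climber on a deliberately simple bit-counting fitness of my own choosing, nothing biological): the same one-bit mutations that drift aimlessly without selection climb steadily with it.

```python
import random

random.seed(1)
L = 64  # genome length in bits

def fitness(genome):
    return sum(genome)  # toy objective: count of 1-bits

def step(genome, select):
    # Flip one random bit; with selection, keep the change only
    # if fitness does not decrease.
    g = genome[:]
    i = random.randrange(L)
    g[i] ^= 1
    return g if (not select or fitness(g) >= fitness(genome)) else genome

start = [random.randint(0, 1) for _ in range(L)]
walk, hill = start[:], start[:]
for _ in range(2000):
    walk = step(walk, select=False)  # pure random walk
    hill = step(hill, select=True)   # random walk + selection

print(fitness(start), fitness(walk), fitness(hill))
```

The unselected walk hovers around its starting fitness, while the selected one is near the maximum after a couple of thousand steps, even though both make the same kind of random moves.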
Physics might be bumping up against limits of knowledge, but I think it's quite short sighted to claim that this means all of Science is bumping up against fundamental limits.
Just because many of the questions we want to solve today are of practical significance (inventing new medicines, perhaps) doesn't make it any less scientific.
Indeed, almost 20 years after the Human Genome Project, we have only scratched the surface on how to understand what any particular genes are doing, and are very far from doing anything more than "hacking" on existing genes, let alone writing a biological program from the ground up.
I do wonder, if every generation picks slightly higher-hanging fruit in science, whether there will come a time when a single human lifetime won't be enough to digest even the most specialised domain of science in order to build upon it.
Right now, science has an emphasis on causal discovery. Showing that X is a mechanism by which Y happens. That includes finding the different X's for a Y and finding evidence for the relationship between a given X and Y. Once you know how a thing works, that doesn't necessarily make it easy to work with it. For example in quantum mechanics, a common phrase is "shut up and calculate" because the mental models are all messy.
But as we all know (especially those of us who have refactored many systems), every once in a while you find a new way of looking at a thing that makes it all much simpler. A geometric way to look at an algebraic thing, or vice versa. Or a unifying structure to combine disparate pieces. Or just a "wow that was dumb" undoing of unnecessary complexity. It makes further progress easier.
I could imagine that, as the boundaries of science get more complex, there will be more scientists working on making the rest of it less complex. Meanwhile, maybe we get smarter and live longer. The calculations involved with many areas of modern science have already outpaced what we can do by hand, but we invented computers, so I can take the mean of a zillion numbers without much effort and spend my time elsewhere.
And in med school, apparently they say "half of what we teach you will be false, but we don't know which half." As science progresses, you don't just add, you prune too.
Ever since I saw this animation [1] of the difference between a heliocentric vs geocentric view of the planets, I always see complexity in a new light. The geocentric movements are all technically correct (that's how the planets move in relation to the earth), but once the true state is discovered it all becomes so much simpler.
> Meanwhile, maybe we get smarter and live longer. The calculations involved with many areas of modern science have already outpaced what we can do by hand, but we invented computers, so I can take the mean of a zillion numbers without much effort and spend my time elsewhere.
With software being as slow as it is despite massive hardware speedups, we really are still not good enough at using our computers to their fullest capacity, which still means getting insights into complexity before crunching the numbers.
Negate the operating environments that calculations are made in and the operating environments are not an issue, alright.
There definitely is a lot of bloat in the software world, but even large bioinformatics organizations have their own data-pipeline management teams to keep these issues in spec.
Nicely put. In fact a lot of the field of quantum foundations (and interpretations) could be seen as an attempt to work towards this sort of perspective/paradigm shift.
A sometimes-spoken hope is that the universe is built on relatively simple rules with complex behaviors. If fields grow too complex, they hope to find some underlying principle that explains (away) that complexity.
We started out that way, I dunno if we'll end that way.
And even if it does eventually come down to 30 rules that explain everything? How many rules does Chess have? Way more than Go, and both can take decades to really understand.
That's an interesting question. In Kuhn's view of science the tree would be replaced after a paradigm shift, meaning there is enough low-hanging fruit in a period of normal science (between two major breakthroughs). I wonder if Kuhn made any claims about whether paradigm shifts happen less often over time.
Doubtful. Just look at computer science. How many people actually grasp what the computer is doing at the lowest level? Relatively few compared to the number of developers there are.
In my exposure to number theory, this certainly seemed to be approaching. Grad students barely have a grasp on the basic definitions; postdocs seemed shaky but familiar; professors "get it" but don't claim to understand. And sure, we've got Terry Tao making everything look easy, but that isn't a comfort.
Scientific "unified" models hide this complexity by abstracting it away so we can keep chipping away at the next level. We can navigate this hierarchy, up or down, to understand respectively large- and small-scale behavior, and the hierarchical model makes it so humans can still comprehend at a given level of abstraction.
We are still in an explosive growth mode, and probably will be for a very long time (>100 years is my guess). But yes, it will be asymptotic at some point, unless you think the amount of science to grasp is infinite.
Until we can create a copy of our own universe, I don’t think people will stop. It’s possible that in this pursuit we’ll keep discovering higher and higher levels of understanding, or in other words, that more answers will always lead to more questions.
I don’t think that’s a huge worry. Science is pretty much like that now—no one is a walking encyclopedia. Studying all of known physics top to bottom would probably take you a few lifetimes.
Well written. One thing that I keep thinking is that even if you know the law, there are still a lot of applications where its use is non-obvious.
For instance, you might be satisfied you know how a pendulum works. Now put another pendulum on it.
Or you think you understand gravity, because you were taught the inverse square law. And then you get Kepler's laws. But then with three bodies, things get really hairy.
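The three-body trouble is easy to demonstrate numerically. This rough sketch (made-up initial conditions, unit masses, G = 1, and a softening term to keep close encounters finite) nudges one starting coordinate by a millionth of a unit and watches the trajectories part ways:

```python
import math

SOFTENING = 0.01  # made-up softening term; keeps close encounters finite

def accelerations(pos):
    """Pairwise gravitational accelerations, G = 1, unit masses."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy + SOFTENING) ** 1.5
            acc[i][0] += dx / r3
            acc[i][1] += dy / r3
    return acc

def simulate(pos, vel, dt=0.001, steps=5000):
    """Leapfrog (velocity Verlet) integration; returns final positions."""
    pos = [p[:] for p in pos]
    vel = [v[:] for v in vel]
    acc = accelerations(pos)
    for _ in range(steps):
        for i in range(len(pos)):
            vel[i][0] += 0.5 * dt * acc[i][0]
            vel[i][1] += 0.5 * dt * acc[i][1]
            pos[i][0] += dt * vel[i][0]
            pos[i][1] += dt * vel[i][1]
        acc = accelerations(pos)
        for i in range(len(pos)):
            vel[i][0] += 0.5 * dt * acc[i][0]
            vel[i][1] += 0.5 * dt * acc[i][1]
    return pos

# Arbitrary bound configuration with zero total momentum.
start = [[1.0, 0.0], [-0.5, 0.8], [-0.5, -0.8]]
vels = [[0.0, 0.4], [-0.35, -0.2], [0.35, -0.2]]

a = simulate(start, vels)

# Same run, with one coordinate nudged by a millionth of a unit.
start2 = [p[:] for p in start]
start2[0][0] += 1e-6
b = simulate(start2, vels)

drift = max(math.hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(a, b))
print(f"separation after 5 time units: {drift:.2e}")
```

With two bodies a closed-form solution exists; with three, this sensitivity to initial conditions is the norm.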
Or you understand statics and materials. But how do we shove that into finite elements? Not an obvious thing, and it required some real investigation.
There's also completely new ways of seeing things. Who would come up with information theory? Doesn't seem like something that would obviously be found, despite not really requiring any physical experiment.
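Shannon's starting point really was just pencil and paper: define surprise as log-probability and average it. A toy sketch:

```python
import math
from collections import Counter

# Shannon entropy in bits: H = sum over symbols of p(x) * log2(1/p(x)).
# No physical experiment required -- just counting and a logarithm.
def entropy(message):
    counts = Counter(message)
    n = len(message)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(entropy("aaaa"))  # 0.0 bits per symbol: no surprise at all
print(entropy("abab"))  # 1.0 bit per symbol
print(entropy("abcd"))  # 2.0 bits per symbol
```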
And then there's things like algorithm research that turn out to be really big once there's a bit of computational power on the horizon. (Probably people think about the algo before they can try it on a machine.)
I would say the great problem of science right now is integrating all of the knowledge there is.
It's time scientists stopped publishing dumb, weakly connected PDFs and started switching to a GitHub-like pull-request model.
We could build a single strongly typed peer-reviewed repo of all of the world's scientific information, complete with definitions, experiment protocols and data, and make it universally downloadable and usable by all.
I think you're describing something like a hybrid social network/semantic web with versioning, knowledge provenance, and asynchronous peer review.
One serious problem for any such system is ontology selection: how is one to represent the entire body of scientific knowledge under a single type system? Different fields of inquiry make use of extremely diverse conceptual models. I suppose mathematics are in a way a unifying language, but there's hardly a single homogeneous mathematical discipline.
The present "weakly" connected network has almost zero technical barriers to entry. It uses well-established technology within a well-established workflow, and it offloads the hard, fuzzy work (e.g., all the model-binding that would presumably take place in the proposed system) to the most flexible computing device we know of: the brain. Everything is already freely downloadable/usable, for the most part (lots of research is open access, and what isn't can often be obtained from the investigators by request).
That said, maybe the sort of thing you describe could be translated into a research question. One could try to compare the shape of various data under different encodings, for instance (some sort of topological analysis?) to identify similar structure? I think category theory has been used to unify previously disparate regions of mathematics.
There are already a few entries in the social network/resource-sharing platform space. Have a look at Open Science Foundation. Academia and ResearchGate are similar, but without the materials-and-data-sharing.
> I think you're describing something like a hybrid social network/semantic web with versioning, knowledge provenance, and asynchronous peer review.
Great breakdown, thanks.
> One serious problem for any such system is ontology selection: how is one to represent the entire body of scientific knowledge under a single type system?
I think you can do it through market forces and forking. Similar to Linux distributions you could have "science" distributions. As for the type system and unifying language, I think you can do a thing that can start from just a dot and no dot and build up characters and numbers and words and types etc, with no extra parts. So perhaps if you had something like that, where simplicity could be rigorously defined, you could get consensus on base level types. If people had strong differences, you could go off and fork a new distribution. I'd imagine you'd have a few distros emerge with decent gravity.
Pull requests could be really neat. I can imagine you'd have the speed of light defined somewhere in a science distro, and there would be data from reproducible experiments that people have done. Perhaps someone comes up with an ingenious at-home experiment that takes just a few steps and sends a pull request that would add it, and perhaps prune some more complex experiment.
> offloads the hard, fuzzy work to the...brain
Yes, exactly. I'd love it if it were computable (of course, this isn't an original idea--Wolfram Alpha is trying to pull it off): if I could "go to definition" of any scientific conclusion (not only to definitions, but to real data, and simple, repeatable experiments). If you kept "going to definition," all roads would eventually lead back to 0 and 1.
> One could try to compare the shape of various data under different encodings, for instance (some sort of topological analysis?) to identify similar structure?
I'm giving it a go with Trees. I think it will work, but still might be a few years before I know for sure.
> There are already a few entries in the social network/resource-sharing platform space. Have a look at Open Science Foundation. Academia and ResearchGate are similar, but without the materials-and-data-sharing.
I like those, especially OSF. Definitely a lot of activity in the space (I invested in some new ones as well). I haven't seen one yet that does the "science monorepo" thing, but hoping someone takes the lead there (and I'll do my best to support it with hopefully useful underlying tech and research).
Who claimed software engineering has all the answers? OP is proposing a tool that could help integrate scientific knowledge better than the disintegrated system of PDFs that exists. What's wrong with that?
I am attempting to imagine the horror of wading through all of the pull requests received by, say, the physics department. Requests were awful enough when postage stamps were required.
You underestimate the number of cranks who have "great ideas" and "just need someone else to work out the math."
>You underestimate the number of cranks who have "great ideas" and "just need someone else to work out the math."
You're not wrong, but it is important to note there also isn't a shortage of great physicists who "just need someone else to work out the math."
For example in 1846, Faraday proposed that visible light is a form of electromagnetic radiation. But because he couldn’t back up the idea with mathematics, his colleagues ignored it. It took 18 years for Maxwell to come along and prove it.
This sentiment about math is so cringe, because it is the same kind of social-class prejudice that Faraday himself fought against his whole life as the son of a poor blacksmith. Not to mention that physicists such as Carl Sagan, generally held in high esteem within the physics community, always preached of an eventual point in physics that transcends math (i.e. something more fundamental and basic) to describe the universe.
It isn't prejudice, or classism, or any -ism, it's pure numbers. We have billions of people who have ideas about how the world works and a much smaller number of people who can work to disprove those ideas.
Camp out in an IRC channel like #physics on any network and prepare to be bombarded by idea people who just want someone else to do the heavy lifting, from math to the experiments. And woe betide those who want to say "by the way, your idea leads to perpetual motion/faster-than-light travel, so I will not bother." I personally have experienced soul-crushing numbers of philosophers who happen to think they've disproven special relativity who are also under the impression that the Michelson-Morley experiment was performed precisely once and everyone just sort of ... ran with it, never looking back.
Who has time to weed through this sort of thing? It isn't ideas that physicists lack for, not in the least.
>We have billions of people who have ideas about how the world works and a much smaller number of people who can work to disprove those ideas.
Like I said you aren't wrong...it's just important to note that some of the best minds in physics didn't have the math chops to prove their ideas. And asking why there are so many more people with ideas of how the world works than people who can validate or disprove them speaks directly to classism. Being able to prove or disprove physics theories generally requires a significant investment in education from early childhood that has been, and still is, out of reach for most. It's not a lack of intellect or talent, but a lack of investment across the board.
In other words, until the ideas are disproven you shouldn't call people cranks simply on the basis that they don't have the math chops to prove their own theories.
>Camp out in an IRC channel like #physics on any network and prepare to be bombarded by idea people who just want someone else to do the heavy lifting, from math to the experiments.
Seems to be a pretty efficient strategy. The entire point of this website is to support a similar model, where YC is bombarded by idea people who want investors to do the heavy lifting of funding the business and making the returns.
No, it is not classism, it is finiteness of resources. The constraining factor is the number of experiments we can perform based on what we have. Those accelerators do not just whiff themselves into existence as fast as people have ideas. We could wave our CRISPR wand and produce an army of geniuses, and we would still need those experimental setups to test out the ideas, and those cost both money and time.
Cranks are cranks. I will most definitely call them that and continue to do so. I no longer camp out like that because I could not bear it any longer. If you would like to spend your life attempting to work out the particulars of some FTL drive that supposedly works by repeatedly raising magnets above the Curie temperature and then lowering them back under it, have at it. Fire up IRC. I suspect you will not spend much time laboring to support the ideas of cranks, because it simply is not a good use of your time. It was not a good use of my time, either.
That's all it is -- efficient allocation of limited resources. My time, your time, someone else's time. How are these decisions made? How do we decide which of the ideas we examine first?
If it is "possible greatest payout," then we would spend all of our collective time on perpetual motion devices. They would, after all, be the greatest payout. And yet the patent office won't even look at them.
No, our first filter is: can this be tested? And to test, we must measure. To measure, we must calculate. And there is our math.
Good ideas will bubble up from the bottom, and more than one person will have a good idea. If one of those people does not have the math and another does, then science will eventually get around to the person who has the math.
What's your algorithm for deciding whose ideas get worked on? I bet that it has some kind of criteria attached to it. I doubt you are suggesting selecting humans from across the planet purely at random and asking for their scientific ideas.
Simply put, this is the scientific method. Make a new, better scientific method if you have a better (by whose standards?) algorithm for deciding whose ideas are worth examining first.
>No, it is not classism, it is finiteness of resources.
Education is not a finite resource (I think you know that, and hence you moved the goalposts from math to experiments).
Nevertheless, when those that have the resources look down on those without (calling them cranks), based not on the merit of their ideas but based on the lack of resources to prove the ideas...that is classism.
You cannot do the experiments without math. We both know this. We must have a predicted value for an experiment. We must use math to create the experimental apparatus. We must use math to examine our results and to look for acceptable error bars.
Education is absolutely a finite resource. We have finite universities and finite educators. The lifetime required to attain an education is also a finite resource, as you simply cannot have a workable society while also requiring that everyone get a PhD in anything they have an "idea" about.
Limitations abound. We must make choices against them. We could do quite a lot of particle research should we decide to disassemble the solar system and re-purpose it into an accelerator, yet I will gently suggest that this proposal will not achieve much traction.
However, knock yourself out. You can join IRC and fight against classism by spending your time working to support the ideas of people whom you will not call cranks. Prove me wrong by doing it for the next ten years. Time isn't a finite resource, right?
You still will not engage with the most basic thrust of this: we cannot entertain everyone's ideas simultaneously, and decisions must be made as to which are examined first. By your reasoning, any selection criterion that might ignore an idea amounts to some form of -ism.
So we are left having to come up with some kind of heuristic to examine some ideas and not others. This is the scientific method. Is your idea testable? And if you claim it is, what values will we measure that are different from what currently exists?
Propose your alternate method. Then, tell me how you are going to exclude the people who, say, want to glue crystals to engine exteriors to improve the combustion efficiency, without anything seeming even a trifle discriminatory.
Is there perhaps a way to separate the behavior of someone from your term for them in general? There could be "cranky behavior", but perhaps these are otherwise normal people who are just amateurs in the field of physics (or other subject) who don't yet know their hand from their foot.
Might there be a constructive way to benefit from the comments provided by cranky behavior? I'm not suggesting taking direction from folks with no experience, but perhaps cataloging the comments on this IRC channel to see what the distribution is.
Perhaps don't think too hard about the solutions proposed in these comments, but instead what problem areas do they fall into. And then from that perhaps there's an opportunity, if not for new research, for creating some better synthesized educational resource that might help people get up to speed faster.
That will not help. I am talking about people who refused to draw even the most basic diagrams or perform high school algebra. Also, any result or experiment leading to their pet idea not working was immediately rejected. It's little more than "someone else needs to do the heavy lifting to prove me right." They aren't going to do anything more than restate their idea, again and again, and then be angry that nobody is rushing to have large universities working on prototypes.
John Baez has a lovely "Crackpot Index" that is an excellent jumping off point for a description of your average crank contact. It would be different for IRC but not dissimilar.
I know you're trying to give people the benefit of the doubt, but experience hasn't shown that it is worth it or even feasible.
As I said before, even the patent office has given up on perpetual motion machines.
I'd rather say he's criminally underrated in Spain.
It is definitely easier to hear casually about Ramón y Cajal in "anglo countries" than in Spain. For example, I spent my childhood in the Spanish state, and I first heard about Ramón y Cajal during the first conference that I attended, in Switzerland, in a lovely presentation by an English professor.
One of the dramatically few first-rate Spanish scientists, and he's not a household name. Very, very sad state of affairs.
I do not know what you are talking about. We study Ramón y Cajal in school, the most important grants in Spain are named after him, there is a Ramon y Cajal square or street in every city... Even the most ignorant Spaniard knows him and will tell you that he is our most respected scientist from all time.
Science, in general, is criminally underrated in Spain, but Ramón y Cajal is literally the household name.
Just a quick note to the moderators: thanks for adding (1897) to the title and clearing up the confusion that I caused! I assumed Ramon y Cajal was more of a household name.
> It is nevertheless true that if we arrived on the scene too late for certain problems, we were also born too early to help solve others.
I think this could be used to describe almost any point in history though. The greatest discoveries in science have always required massive breakthroughs in thinking, that typically defy conventional intuition. Perhaps there are some rare moments in time following a major discovery where the fruitful areas of inquiry seem obvious. But “I don’t even know where to start looking for the next major scientific discovery” or “this hypothesis might be wrong and we could potentially spend the rest of time investigating it” seems to be the default state of trying to make major breakthroughs in science.
What a contrast with Albert A. Michelson, speaking in 1894:
>most of the grand underlying principles have been firmly established and that further advances are to be sought chiefly in the rigorous application of these principles to all the phenomena which come under our notice. It is here that the science of measurement shows its importance — where quantitative work is more to be desired than qualitative work. An eminent physicist remarked that the future truths of physical science are to be looked for in the sixth place of decimals.
Right. "Who, a few short years ago, would have suspected that light and heat still held scientific secrets in reserve? Nevertheless, we now have argon in the atmosphere, the x-rays of Roentgen, and the radium of the Curies, all of which illustrate the inadequacy of our former methods, and the prematurity of our former syntheses." That had to be from the early 20th century.
The problems today are either in areas where complexity is the limiting factor, like biology, or beyond current experimental reach, like string theory and dark matter. The complexity problem can probably be overcome with computer assistance. Experimental reach is harder.
I was going to post a criticism along the lines that all the historical examples given by the author were from the 19th Century until I saw you point this out!
The date definitely changes my perspective, but I still think the essay is a little too waffley - it doesn't give any actual examples or indications of where the author thinks important scientific problems lie. In fact it kind of begs the question: in response to a concern over whether there are important scientific problems left to solve, it simply lists some important historical scientific breakthroughs.
I suppose the point is that breakthroughs are unexpected...
Why come up with new ideas and solve hard problems when you can just be the Uber for X and become a unicorn based on nothing but smoke, mirrors, and clever accounting, like WeWork?
"Eschew flamebait. Don't introduce flamewar topics unless you have something genuinely new to say. Avoid unrelated controversies and generic tangents."
It's true that moderation isn't consistent, but that's not because it's selective in the way you imply. Rather, it's because we can't come close to reading everything, and can't moderate what we don't see. If you notice a post that breaks the site guidelines and hasn't been moderated, the likeliest explanation is that we haven't seen it yet.