"Why stop there? Why not say we even know that there are billions of years? Why not thousands?"
My questions above are only to point out what I think is the conclusion to your line of questioning. Only the ignorant would suggest that we know exactly what happened in the past. Let me emphasize, that's not the aim of science.
Regardless, we can argue limitations on what is possible in the past from surrounding evidence, which you're surely aware of based on your questioning. You definitely sound like you would be willing to defend some kind of supernatural intervention in the past at least as a possibility, which at least in theory I have no problem with. The issue is that your reasoning would, at a minimum, require you to posit what you think is actually defensible about the past and how you would reach that position. Ultimately we have to use inductive tools, and the natural surroundings are the only things we have to measure against. Your position sounds as though there are things about the past worth defending; they just don't align with commonly accepted mechanisms.
Speed of light, river erosion, uranium-lead dating, etc. are very sound in describing what we see around us; you'd need to explain why they don't suggest sound limitations on the past.
It's not that scientists can be incorrect in concluding ages from these tools, but what you posit should be more predictive if we're to conclude your position is the sound one.
> Regardless, we can argue limitations on what is possible in the past from surrounding evidence
We can, but none of that is verifiable. We can come up with hypotheses and test them. We can look at literature which observed things in the past or make truth claims and evaluate them based on evidence.
> Speed of light, river erosion, uranium-lead dating, etc. are very sound in describing what we see around us; you'd need to explain why they don't suggest sound limitations on the past.
I am not aware of any particular claims which argue the veracity of relative measurements (say, maybe river erosion, since I'm not familiar with that).
> It's not that scientists can be incorrect in concluding ages from these tools,
My issue really has more to do with the scientists who demand you accept their theories and laugh off "supernatural intervention" when at the end of the day the vast majority of what they believe is based not on fact but on conjecture (blind faith?). They laugh off one religion and replace it with another.
> We can, but none of that is verifiable. We can come up with hypotheses and test them. We can look at literature which observed things in the past or make truth claims and evaluate them based on evidence.
Verifiability isn't really the goal of those suggesting limitations on the past based on scientific analysis of the present; as I said, that's impossible, and scientists who would suggest otherwise should in fact be laughed out of the room. Science is based on verifiability, or at least we can take that as an axiom for this conversation. But the suggestion that we should take as evidence something that is a testimony of the supernatural, when all we have around us is the natural, is where I get confused. The point isn't that scientists have to be correct about the past, but that they can at least show their reasoning based on something we can see. Their "faith", as you call it, is in fact not blind, because the processes behave consistently right in front of their eyes in the lab or in the field. The evidence you mention is well established in most studies as to the age of the earth; what isn't established is any kind of visible evidence for an alternative. Literature is certainly an observation, but if we're suggesting that the supernatural claims made in that literature have evidence, that needs to be substantiated, and I'm ignorant of any major work showing evidence of the supernatural. That doesn't mean it doesn't exist, but it certainly hasn't crossed my desk, and I actively search for it.
> I am not aware of any particular claims which argue the veracity of relative measurements (say, maybe river erosion, since I'm not familiar with that).
I'm not sure what relative means here; do you mean something akin to river erosion in a given setting being subject to change depending on environmental factors?
> My issue really has more to do with the scientists who demand you accept their theories and laugh off "supernatural intervention" when at the end of the day the vast majority of what they believe is based not on fact but on conjecture (blind faith?). They laugh off one religion and replace it with another.
I think to call it religion is a touch dubious, as they aren't creating a focus of worship. Historically the supernatural is intended when using the word religion; I think ideology or philosophy would be more accurate, and even then only for those making positive claims as opposed to pragmatists. It is conjecture, but conjecture founded upon evidence that is readily demonstrable, rather than accounts of the supernatural that don't seem to be replicable.
I'm finding the deeper I study biology, the more certain I am that complex models with both classical and quantum parameters will eventually be able to predict the overwhelming majority of macromolecular behavior such as protein folding and DNA recombination.
Once you start dealing with concepts bigger than that you get into another mathematical description with Markov chain style models for cellular proliferation, followed by network analysis for tissue growth.
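To be concrete about the Markov-chain-style part, here's a toy sketch of the sort of thing I mean; the states and transition probabilities are invented for illustration, not fitted to any data:

```python
import random

# Toy discrete-time Markov chain for a single cell lineage.
# States and transition probabilities are made up for illustration;
# a real model would fit these to measured proliferation data.
STATES = ["proliferating", "quiescent", "dead"]
TRANSITIONS = {
    "proliferating": [0.6, 0.3, 0.1],  # P(next state | proliferating)
    "quiescent":     [0.2, 0.7, 0.1],
    "dead":          [0.0, 0.0, 1.0],  # absorbing state
}

def step(state: str) -> str:
    """Sample the next state from the row of the transition matrix."""
    return random.choices(STATES, weights=TRANSITIONS[state])[0]

def simulate(n_cells: int = 1000, n_steps: int = 50) -> dict:
    """Count how many cells end up in each state after n_steps."""
    counts = {s: 0 for s in STATES}
    for _ in range(n_cells):
        state = "proliferating"
        for _ in range(n_steps):
            state = step(state)
        counts[state] += 1
    return counts

if __name__ == "__main__":
    print(simulate())
```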
You can take that up further and further, I'm sure you're somewhat familiar.
My question is, even if you have some examples, what do you see as the theoretical limit to modelling that could actually be accurate?
Not a limit to the accuracy itself, which must always exist, but a limit to what can be successfully modelled, at least to "acceptably correct", for use in some application?
In biology: morphology of organisms, evolution, ecology. And those deal with systems, so they use a lot of math. But interspersed with the math, you always find natural language descriptions, definitions, explanations, which are necessary for understanding and complete modelling of the theory. These make reference to the shared human experience of the world, and are not formalized in logic. Not that they cannot be, or at least so I hope. But we're very far from it today, that's what I mean.
Maybe relatedly, humans think of the world in fuzzy terms. At some point we're going to need a system for formalizing fuzzy thought, and no, fuzzy logic is not it, because that's just a continuous extension to boolean logic. Human thinking is fuzzy beyond that. But, as a computational linguist, I sometimes worry that we already have that system: natural languages!
I don't think I was specific enough; I guess what I'm looking for is something that we can describe with language that doesn't have at least some sort of parameterization with regard to physics.
So take the original reaction that produced DNA from just inorganics: I typed those words, but I have no reference for what the model actually is. What I do have, however, is a word for each of those things, and a set of impossibilities, things it could not mean.
However, the reference isn't built out of nothing: each of those words covers a set of things that we do have models for; we have models for atoms, reactions, DNA, etc.
So in reality the sentence describes something we simply can't point to specifics on, but it is in no way "unexplainable" in terms of its logic.
Another example would be dark matter: we use those words, but really they just stand for a set of observations, empirical measurements operating outside of the patterns we're used to, but certainly not without something to point to.
If there's some shared experience that we can't express logically, I'm at least personally unfamiliar with it; I would need some further understanding of what you have in mind.
I could also be wildly misreading what you mean; semantics are not my favorite over text.
I'd be interested to hear your opinion defending professional sports taking itself as seriously as it does then. From my perspective, the only thing that makes it serious is the amount of money and advertising involved.
Sure the technological barrier is important, but I fail to see how they're categorically different depending on what they require of the participants.
I don't have an issue saying they both should not be taken seriously, or that both should be taken seriously, but drawing a line between them in terms of "seriousness" needs some elaborating for me.
> I'd be interested to hear your opinion defending professional sports taking itself as seriously as it does then.
Also stupid. Athlete worship is as bad as or worse than celebrity worship. The obsession with and tribalism from professional sports has created some of the most obnoxious bores in the world. All for a bunch of people kicking or throwing a ball around.
I don't think it's too negative. The actual complexity is high for the modeling, but very few of the abstractions are any worse than other many body simulations, unless you're studying specifically black holes or other theoretical edge phenomena, in which case it becomes hard to use the term "astro" instead of just theoretical physicist. Arguably magnetohydrodynamics has some difficult components, but it's not specific to cosmology.
Clickbait titles revolve around sounds and language though, so I don't know if there's a way to combat what's profitable and not strictly speaking misleading.
All simulations have to make the Born-Oppenheimer approximation: nuclei have to be treated as frozen, otherwise the electrons don't have a reference point.
There will never be true knowledge of both a particle's position and momentum, à la the uncertainty principle; they will always have to be estimated.
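For reference, the relation I'm leaning on is just the standard Heisenberg bound:

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```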
But, for a system of two quantum particles which interact according to a central potential, you can express this using two non-interacting quantum particles, one of which corresponds to the center of mass of the two, and the other of which corresponds to the relative position, I think?
And, like, there is still uncertainty about the position of the "center of mass" pretend particle, as well as for the position of the "displacement" pretend particle.
(the operators describing these pretend particles can be constructed in terms of the operators describing the actual particles, and vice versa.)
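For concreteness, the textbook two-body separation I have in mind (standard result, nothing specific to this thread) is:

```latex
H = \frac{p_1^2}{2m_1} + \frac{p_2^2}{2m_2} + V(|\mathbf{r}_1 - \mathbf{r}_2|)
  = \frac{P^2}{2M} + \frac{p^2}{2\mu} + V(r),
\qquad
M = m_1 + m_2, \quad \mu = \frac{m_1 m_2}{m_1 + m_2},
```
```latex
\mathbf{R} = \frac{m_1 \mathbf{r}_1 + m_2 \mathbf{r}_2}{M}, \qquad
\mathbf{r} = \mathbf{r}_1 - \mathbf{r}_2, \qquad
\mathbf{P} = \mathbf{p}_1 + \mathbf{p}_2, \qquad
\mathbf{p} = \frac{m_2 \mathbf{p}_1 - m_1 \mathbf{p}_2}{M}.
```

So in the two-body case the "center of mass" pretend particle carries the total mass M and the "displacement" pretend particle carries the reduced mass μ.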
I don't know for sure if this works for many electrons around a nucleus, but I think it is rather likely that it should work as well.
Main thing that seems unclear to me is what the mass of the pretend particles would be in the many electrons case.
Oh, also, presumably the different pretend particles would be interacting in this case (though probably just the ones that don't correspond to the center of mass interacting with each other, not with the one that does represent the center of mass?)
So, I'm not convinced of the "nuclei have to be treated as frozen, otherwise electrons don't have a reference point" claim.
Not if you're going off of ab initio theory such as Hartree-Fock, MP2, CC, etc. We're talking amounts of matrix multiplication that you wouldn't be able to finish calculating this decade, even if you had parallel access to all top 500 supercomputers; once you get bigger than a single protein, it's beyond universal time scales with current implementations.
Every time some computer scientist interviews me and shows off their O(n) knowledge (it's always an O(n) solution to a naive O(n**2) problem!) I mention that in the Real World, engineers routinely do O(n**7) calculations (n == number of basis functions) on tiny systems (up to about 50 atoms, maybe 100 now?) and that if they'd like to help, it would be nice to have better, faster approximations that are n**2 or better. Unfortunately, the process of going from computer scientist to expert in QM is entirely nontrivial, so most of them do ads/ML instead.
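For a sense of what that scaling means in practice, here's a back-of-the-envelope sketch; the sustained FLOP rate and the prefactor are made-up round numbers, purely illustrative:

```python
# Back-of-the-envelope scaling for an O(n**7) method in the number of
# basis functions n. The FLOP rate and prefactor are invented round
# numbers, just to show how quickly the wall-clock time blows up.
FLOPS = 1e18          # assume an exaflop of sustained compute
PREFACTOR = 1.0       # assume ~1 floating-point op per "n**7 unit"

def seconds_for(n_basis: int) -> float:
    return PREFACTOR * n_basis**7 / FLOPS

for n in (100, 1_000, 10_000, 100_000):
    t = seconds_for(n)
    print(f"n = {n:>7}: {t:.3e} s  (~{t / 3.15e7:.3e} years)")
```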
How on Earth? I can't imagine the computational power of all top 500 supercomputers is THAT many orders of magnitude off from all the folding@home compute donated by the general public.
Supercomputers are specialized products with fast networking to enable real-time updates between nodes. The total node count is limited by the cost of the interconnect needed to get near-peak performance. You typically run one very large simulation for a long period of simulation time. folding@home doesn't have the luxury of fast networks, just lots of CPU/GPU. They run many smaller simulations for shorter times, then collect the results and do stats on them.
I looked at the various approaches and sided with folding@home. At one point I had 1 million fast CPU cores running gromacs.
When I say ab initio I mean "classical Newtonian force field with approximate classical terms derived from QM", AKA something like https://ambermd.org/AmberModels.php
Other people use ab initio very differently (for example, since you said "level of theory" I think you mean basis set). I don't think something like QM levels of theory provide a great deal of value on top of classical (and at a significant computational cost), but I do like 6-31g* as a simple set.
Other people use ab initio very differently. For example, CASP, the protein structure prediction competition, uses ab initio very loosely to my mind: "some level of classical force field, not using any explicit constraints derived from homology or fragment similarity", which typically involves a really simplified or parameterized function (ROSETTA).
Personally I don't think atomistic simulations of cells really provide a lot of extra value for the detail. I would instead treat cell objects as centroids with mass and "agent properties" ("sticks to this other type of protein for ~1 microsecond"). A single ribosome is a single entity, even if in reality it's made up of 100 proteins and RNAs, and the cell membrane is modelled as a stretchy sheet enclosing an incompressible liquid.
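Something like this toy representation is what I mean by centroids with agent properties (all the names and numbers here are invented for illustration, not taken from any real framework):

```python
from dataclasses import dataclass, field

# Toy "cell object as centroid + agent properties" representation.
# A real framework would attach dynamics, binding kinetics,
# membrane mechanics, etc. on top of this.
@dataclass
class Agent:
    name: str
    position: tuple[float, float, float]  # centroid coordinates
    mass_kda: float                       # lumped mass, rough value
    properties: dict = field(default_factory=dict)

ribosome = Agent(
    name="ribosome",
    position=(0.0, 0.0, 0.0),
    mass_kda=3000.0,  # order-of-magnitude lump, treated as one entity
    properties={"binds": "mRNA", "dwell_time_us": 1.0},
)
print(ribosome)
```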
Level of theory as it relates to ab initio QM calculations usually indicates Hartree-Fock, MP2 and so on; the basis set gets specified after.
I also agree that QM doesn't provide much for the cost at this scale, I just wish the term ab initio would be left to QM folks, as everything else is largely just the parameterization you mentioned.
The system I work with, AMBER, explains how individual classical terms are derived: https://ambermd.org/tutorials/advanced/tutorial1/section1.ht... which appears to be MP2/6-31g* (sorry, I never delved deeply into the QM parts). Once those terms are derived, along with various approximated charges (classical force fields usually just treat any charge as a point centered on the nucleus, which isn't great for stuff like polarizable bonds), everything is purely classical springs and dihedrals and interatomic potentials based on distance.
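For reference, written from memory so treat the details loosely, the generic functional form for this kind of classical force field is roughly:

```latex
E = \sum_{\text{bonds}} k_b \, (r - r_0)^2
  + \sum_{\text{angles}} k_\theta \, (\theta - \theta_0)^2
  + \sum_{\text{dihedrals}} \frac{V_n}{2} \left[ 1 + \cos(n\phi - \gamma) \right]
  + \sum_{i<j} \left[ \frac{A_{ij}}{r_{ij}^{12}} - \frac{B_{ij}}{r_{ij}^{6}}
  + \frac{q_i q_j}{\varepsilon \, r_{ij}} \right]
```

with the QM-derived fits supplying the spring constants, torsion terms, and the Lennard-Jones/point-charge parameters.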
I am more than happy to use "ab initio" purely for QM, but unfortunately the term is used widely in protein folding and structure prediction. I've talked extensively with David Baker and John Moulton to get them to stop, but they won't.
Sure. But in the protein structure prediction field, "ab initio" is used to mean "structure was predicted with no homology or other similarity information" even though the force fields incorporate an enormous amount of protein structural knowledge.
I'm very optimistic about RISCV adoption, I'd wager this is the year the major distributions start preparing to host stable releases for it, or at least Fedora. There's a Guix developer making pretty decent progress on the port. https://tooot.im/@efraim/tagged/guix
I'm also optimistic, and I think starting with one of these porting processes often leads to an interesting set of yak shaves. I've been following the progress of the most active NixOS developer:
What's your broader economic framework for this analysis? From your comment history it looks georgist/LVT adjacent, but your responses seem very well connected and elaborate beyond what I'm familiar with. On top of a name for the system, what books/wikipedia pages/journal articles would be available to read about it?
https://www.thinkpenguin.com/gnu-linux/wireless-n-dual-band-...