
   precisely known transitions 
There is an interesting philosophical conundrum here, in that one could argue it's the other way around: the precision you mention is a consequence of the fact that we currently define time relative to atomic transitions (the Caesium standard [1]). So if atomic transitions fluctuated (relative to some abstract standard to which we currently have no access), this would not affect the precision you mentioned. Wittgenstein famously made a similar argument about length in the Philosophical Investigations §50: "There is one thing of which one can say neither that it is one metre long, nor that it is not one metre long, and that is the standard metre in Paris. – But this is, of course, not to ascribe any extraordinary property to it, but only to mark its peculiar role in the language-game of measuring with a metre-rule."

[1] https://en.wikipedia.org/wiki/Caesium_standard




Look, I think philosophy is important and fundamental, and yes philosophy is a language game. What I don't like is how most philosophers don't actually contribute, in other words don't actually do philosophy.

The most philosophical thing I have seen is formal verification software. Like metamath.

Philosophers could easily gain a lot of my respect if they switched from natural-language philosophy to slowly formalizing physics, law, norms and values, and natural-language dictionaries into actual formal concepts, in a collaborative way, so that all of philosophy could actually be integrated into one theory.

Back to the clocks: what you claim is patently false. It is perfectly possible, given two types of clock A and B, to assess which type is more precise:

Let's model an imprecise clock as one that reports, as time passed, the actual time passed plus an error term. The error term undergoes a random walk, or diffusion.

This means all you have to do is make an ensemble of two (or more) clocks of type A, call them A1 and A2, and similarly two (or more) clocks of type B, B1 and B2; reset all clocks, and then observe for which type X the difference between X1 and X2 stays smaller as time passes.

If the difference in reported time between A1 and A2 wanders away from 0 more slowly than the difference in reported time between B1 and B2, then you know clock type A is more precise.
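The ensemble test above can be sketched as a quick simulation. The random-walk error model is the one described above; the per-tick noise magnitudes for the two clock types are hypothetical, chosen only to make the contrast visible:

```python
import random

def simulate_error(step_sigma, n_steps):
    """Random-walk error term: each tick adds independent Gaussian noise."""
    error = 0.0
    for _ in range(n_steps):
        error += random.gauss(0.0, step_sigma)
    return error

def ensemble_divergence(step_sigma, n_steps, seed):
    """Two independent clocks of the same type; return |X1 - X2| at the end."""
    random.seed(seed)
    x1 = simulate_error(step_sigma, n_steps)
    x2 = simulate_error(step_sigma, n_steps)
    return abs(x1 - x2)

# Type A: small per-tick noise; type B: 100x larger (hypothetical values).
div_a = ensemble_divergence(step_sigma=0.001, n_steps=10_000, seed=1)
div_b = ensemble_divergence(step_sigma=0.1, n_steps=10_000, seed=1)
print(div_a, div_b)
```

With the same seed, both runs scale the same underlying noise sequence, so the pairwise divergence for type A is exactly 100 times smaller than for type B; with independent seeds the same ordering holds on average, which is why the real test observes the drift over time rather than a single reading.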


The clock drift of A1 and A2 could be correlated for some reason that we don't currently understand.


That is true, but you did say precision, not accuracy.

I assumed you accurately chose your wording, when you used the word "precise".


It's mostly a philosophical conundrum, not a practical one.

For any given definition, there are a handful of possible "defining experiments" being carried out at a handful of national labs. Their deviations with respect to each other are constantly being monitored and metrologists are constantly chasing down error terms. Yes, they choose one standard to "bless" as the official definition, but it's their job to live below that abstraction and to maintain it. If their "blessed" definition started to drift with respect to the other candidate definitions they would notice very quickly and react appropriately. The fact that most of us entertain a single definition isn't a consequence of philosophical confusion as to whether or not one exists, it's a consequence of delegating the ongoing experiments backing our simplifying assumptions to a group of people who are very good at them.

For instance, the disagreement between solar time and atomic time is monitored so closely that the slowing of Earth's rotation due to tidal forces is a gigantic signal compared to measurement noise:

https://en.wikipedia.org/wiki/Leap_second#/media/File:Deviat...

If, say, Earth flew through a cloud of dark matter that somehow messed with Cs absorption lines, we would see Cs clocks drift with respect to Rb clocks, unlocked quartz clocks, Earth rotation, Optical Lattice Clocks, etc, etc. The event would not go unnoticed. Cs clocks would be demoted from their position as primary time standards and the next best candidate would be promoted.


   somehow messed with Cs 
   absorption line
In a simple case, yes, but if all other timekeeping mechanisms were also affected by the dark matter, there might be no way of saying which one is the right one.


The thing about all the definitions now is that there are no prototypes left (the kilogram was the last one). So Wittgenstein's observation becomes an actual claim about how our universe works.

It feels intuitively obvious, even redundant, that a prototype metre is one metre long, but a statement that light in a vacuum travels a fixed distance in one second is not so obviously redundant. It doesn't matter what colour the light is? Nope. It doesn't matter when I measure? Apparently not.


You cannot measure how far light travels in a fixed unit of time without first defining length, and length is itself currently defined in terms of caesium time: the 2019 SI definition of the metre takes "the fixed numerical value of the speed of light in vacuum c to be 299792458 when expressed in the unit m⋅s−1, where the second is defined in terms of the caesium frequency ΔνCs." (From [1].)

As far as I can see, measuring time is the foundation of the definitions of all the other units. And the core reason time is used to define everything else is pragmatic: counting events (photon absorptions) is just technically easier than any other kind of measurement.

[1] https://en.wikipedia.org/wiki/2019_redefinition_of_the_SI_ba...
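A minimal sketch of that chain of definitions, using the two exact defining constants mentioned above (the caesium hyperfine frequency ΔνCs and the speed of light c):

```python
# SI defining constants, exact by definition since the 2019 redefinition.
delta_nu_cs = 9_192_631_770   # Hz: Cs-133 hyperfine transition periods per second
c = 299_792_458               # m/s: speed of light in vacuum, fixed numerically

# One second is the duration of delta_nu_cs caesium transition periods;
# one metre is then the distance light travels in 1/c seconds.
one_period = 1 / delta_nu_cs  # seconds per Cs transition period
metre_travel_time = 1 / c     # seconds for light to cross one metre

ratio = metre_travel_time / one_period  # Cs periods per metre of light travel
print(f"light crosses one metre in {metre_travel_time:.3e} s "
      f"≈ {ratio:.1f} Cs periods")
```

The point of the arithmetic: the metre bottoms out in counting caesium periods, exactly as the comment above says.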


It may matter how fast you're moving or accelerating, and how massive nearby objects are (special and general relativity). And then there might be quantum effects we don't know about yet.



