
If I got the math right, then about 1 in every 32,000 stars in the universe goes supernova each year. That's scary. But I think I'm getting the math very wrong.

edit: I guess my error might be related to confusing a probability factor with the number of incidents in a period.

edit: The right answer is probably that up to 1 in every 10bn stars in the universe goes supernova each year (or rather, 1 in 10bn dies, and only a fraction of those are supernovae). Thanks: yzydserd and zild3d


A star "lasts" about 10 billion years, so you'd expect about 1 in 10 billion stars to 'die' each year, but only a tiny proportion (the very largest) go supernova.

Numbers are huge. Even tiny ratios mean something like 10-100 stars go supernova every single second somewhere in the universe.

Sounds like a lot? Only about 1 star per galaxy goes supernova per century. There are just a lot of galaxies.
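
A quick back-of-envelope check in Python (the galaxy count for the observable universe is itself uncertain, commonly quoted anywhere from ~100 billion to ~2 trillion, so treat these as rough assumptions):

    # Sanity check of the "10-100 supernovae per second" claim.
    galaxies = 1e11                     # low-end estimate, observable universe
    sn_per_galaxy_per_century = 1.0     # the rough rate quoted above
    seconds_per_century = 100 * 365.25 * 86_400

    print(galaxies * sn_per_galaxy_per_century / seconds_per_century)
    # ~32 per second; with 2e12 galaxies it would be ~600 per second.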

Mindblowing.


The lifespan of stars varies a lot by type and size, with the largest stars having very short lifespans of maybe a few dozen million years, and small ones lasting up to dozens of billions of years. I'm not sure what the average is.


> A star "lasts" about 10 billion years, so you'd expect about 1 in 10 billion stars to 'die' each year, but only a tiny proportion (the very largest) go supernova.

This analysis really doesn't work. Star lifespan is inversely correlated with size. A star large enough to just barely go supernova is only going to live for ~100M years, and as they get bigger, the lifespans fall rapidly.

(Why? Because gravity is what provides the pressure for fusion to happen, so more gravity means fusion happens faster. For large stars, the luminosity is something like the mass to the 3.5th power. Also, convection works less well for larger stars, so as stars grow bigger, an ever smaller proportion of the star takes part in the fusion reactions in the core.)
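
To put rough numbers on that: lifetime scales like fuel over burn rate, so with luminosity ~ M^3.5 you get lifetime ~ M / L ~ M^-2.5. A minimal sketch, anchored to ~10 Gyr for the Sun (the exponent and the ~8 solar-mass supernova cutoff are both textbook approximations):

    # Rough main-sequence lifetime implied by L ~ M**3.5:
    # lifetime ~ fuel / burn rate ~ M / L ~ M**-2.5.
    def lifetime_myr(mass_suns):
        return 10_000 * mass_suns ** -2.5   # ~10,000 Myr for the Sun

    for m in (1, 8, 20):   # ~8 M_sun is the usual core-collapse threshold
        print(f"{m:2d} M_sun -> ~{lifetime_myr(m):,.0f} Myr")
    # 1 -> ~10,000 Myr; 8 -> ~55 Myr; 20 -> ~6 Myr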


So only 0.12% of all main sequence stars have enough mass to become the most common type of supernova, and they apparently only last for about 100 million years.


Wouldn’t the creation dates of stars be clustered around certain points in time? So shouldn’t the supernovas also happen in groups?


What's the rate of Type Ia supernovas? Higher, I would guess? (n>=2-ary systems are common, and medium-mass main sequence stars are common, though it takes them a while to get to the white dwarf stage.)



He mentioned a rough estimate of one per century per galaxy. The estimate for average stars per galaxy is 100 million, which would be 1 in 10 billion stars every year.
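
Spelled out in Python (same round figures):

    # 1 supernova per galaxy per century, ~1e8 stars per galaxy (assumed).
    print(1 / (100 * 1e8))   # 1e-10 per star per year = 1 in 10 billion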


> If I got the math right, then about 1 in every 32,000 stars in the universe goes supernova each year

Can’t be right, can it? It would make the Sun (over 4 billion years old) an enormous outlier.

It also would mean stars, on average, do not get very old. Around 8% of the stars that the ancient Greeks saw in the sky would have to have gone supernova in the ~2,500 years since then.


> Can’t be right, can it? It would make the Sun (over 4 billion years old) an enormous outlier.

Yes. That's the fact that made me think I was certainly wrong.


Not all stars can go supernova. Sol will never go supernova. Only very massive stars can—or stars that become very massive by absorbing other stars.


Binary white dwarf systems can also go supernova, even if the combined mass is not that large as far as stars go.


Isn’t the answer infinity? We don’t know what’s beyond the observed part of the universe, and there’s an infinite number of universes. If ours emerged, then there are others.


There is no reason to expect any particular number of universes. We've observed exactly one, this one, which had to exist or else we wouldn't be here to observe that it existed.

Our universe is finite, so although it is unbounded (lacks edges), there isn't an infinite number of anything in it: galaxies, stars, M&Ms, grains of sand, atoms of hydrogen, all finite.


Has that really been established? The observable universe is finite, yes, but I wouldn't think that automatically implied that the universe as a whole is.


Simply put, we can't know, and may never know, if the universe is flat. If the universe has a curvature then we could use that as a baseline for the size of the universe, but so far we've not detected one.


> and there’s an infinite number of universes

There is no evidence that there is an infinite number of universes. All we know of is the one we exist in. The many worlds interpretation of quantum mechanics posits that there are a very large number of non-interacting "worlds", which may or may not be the same as "universes".

And if you meant "an infinite number of galaxies", then that would require an infinite-size universe, and we don't know if that is the case for our universe. It could be, or it could be finite but unbounded.


Yes, we don’t know if other universes exist. So it’s 50/50: infinity or one. But given that our universe came into existence, the probability is not 50/50: because we know that something exists, something else is more likely to exist, so the probability tips towards infinity.

If you were an observer of emptiness, with no universe or anything existing, then you would say it’s more likely there will be nothing, so the probability tips towards zero.

Not to forget the recursion. There’s likely universes within our elementary particles, or our universe is a particle in a parent one.


> There’s likely universes within our elementary particles, or our universe is a particle in a parent one.

This is a very nonstandard use of the word "likely".


The recursion dimension can’t only have something in this narrow band where we are. There can’t be nothing if you go deeper than the subatomic level, because it has to be made of something in order to support the larger recursion.


Please don't present your fantasy as having anything to do with actual established science.


probability does not work that way


Actually I think it might? If I describe an arbitrary hypothetical object to you, is it more likely that it exists or doesn't exist? How does that compare to the case where I present you with a single example of an object and ask you to guess whether others that are substantially similar to it exist?

You have so little information that any estimate is effectively arbitrary. Nonetheless I think there's a clear statistical bias between the two choices in both cases.


$175M quickly becomes meaningless in this context, believe me. It's already much more than you need.


That research was published before Bolt hacked another 0.11 seconds off the record, cutting it in one event from 9.69 to 9.58.

I don’t think the research stands up.


They would certainly have to add that new data point to their analysis. :-)

I expect it moves the estimate down by less than the 110 ms record beat, though. That's how converging to a true minimum tends to work (at least, like I said before, in the purely statistical modeling sense, which is never quite as strong as a detailed micro-model).


The problem is, the stats ignore the evolving surfaces and shoe tech.


Unsure how either of my comments could be read as disagreeing with your point - both mention physics/micro-models being stronger than pure stats - but maybe, in spite of your "The problem is" phrasing, you only meant to amplify, or you only read quickly. That said, I can perhaps respond with something that amplifies and clarifies our shared skepticism of the pure statistics approach relative to something "more detailed" that might someday help someone.

There are probably a half dozen micro-model effects even non-experts could rattle off that have "trended" over the decades, from your shoes & surfaces to various aspects of population diet, young-in-life talent identification / more-optimized maturation conditions, and on & on. Statisticians call this a "non-stationary sampling process", meaning the independent & identically distributed (IID) assumption is at best a weak approximation and at worst totally misleading.

Ways to measure how much evidence there is that IID / other distributional assumptions are failing do exist { such as some of the ones here: https://github.com/c-blake/fitl/blob/main/fitl/gof.nim and those referenced at the bottom of https://github.com/c-blake/bu/blob/main/doc/edplot.md (at the stage where one plots / pools together multiple data points into some kind of "sample" with a "distribution") }. Sadly, few test such assumptions (and such tests are rarely truly comprehensive anyway), and even small departures from modeling assumptions can lead to relatively large errors in estimates. E.g., the linked Einmahl 2009/2010 research states this as an assumption needed to apply its ideas, but then shows no test of that assumption on the data used.
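
For concreteness, here is a minimal sketch (in Python rather than the linked Nim) of one such check: a Mann-Kendall-style permutation test for trend in a series of annual-best times. The numbers below are made-up placeholders, not real data:

    import random

    times = [9.92, 9.90, 9.91, 9.86, 9.84, 9.85, 9.79, 9.77, 9.74, 9.69, 9.58]

    def mann_kendall_s(xs):
        # S = #(later > earlier) - #(later < earlier) over all pairs;
        # strongly negative when the series trends downward (improves).
        n = len(xs)
        return sum((xs[j] > xs[i]) - (xs[j] < xs[i])
                   for i in range(n) for j in range(i + 1, n))

    obs = mann_kendall_s(times)

    random.seed(0)
    n_perm, extreme, xs = 10_000, 0, times[:]
    for _ in range(n_perm):
        random.shuffle(xs)
        if abs(mann_kendall_s(xs)) >= abs(obs):
            extreme += 1

    print(f"S = {obs}, permutation p ~ {extreme / n_perm:.4f}")
    # A tiny p-value says the ordering matters, i.e. the sample is not
    # plausibly IID -- exactly the non-stationarity discussed above.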


How do you know Bolt isn't close to the maximum? He is definitely an outlier.


I don’t see that there’s any good text-to-video generator for your needs currently.

They’re restricted or not good enough.

Wait a month or three.


Couldn’t it be as simple as a face recognition unlock?

There’s an option in the iPhone’s Face ID settings to require eyes open and looking at the phone to unlock, but this might have been deselected. Or it could be subverted.


Has there been a demonstrated Face ID spoof?

Would be interesting if they had tech that could fool it based on known photos.


Why would they use photos when they have a perfectly good corpse on hand?


The Starlink satellites were not released into the correct orbit. SpaceX is using their ion thrusters to try to raise the orbit, but Elon Musk said this isn’t likely to succeed.

Could SpaceX use one satellite to push another?

This procedure would deliberately sacrifice the pushing satellite, but maybe give the pushed satellite enough additional delta-v to reach a working orbit.
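
Back-of-envelope, the gain looks modest. A minimal sketch of the arithmetic, with made-up numbers rather than actual Starlink specs (for ion engines with a small propellant fraction, delta-v is roughly total impulse over mass):

    m = 260.0            # mass of one satellite, kg (assumed)
    impulse = 26_000.0   # total impulse one satellite can deliver, N*s (assumed)

    dv_solo = impulse / m            # what a satellite manages on its own
    dv_donated = impulse / (2 * m)   # pusher expends everything moving both masses

    print(f"alone: {dv_solo:.0f} m/s, pushed then solo: {dv_solo + dv_donated:.0f} m/s")
    # The pushed satellite nets ~1.5x its solo delta-v budget, at the cost
    # of the pusher, and ignoring the alignment problems raised below.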


Once they are deployed, they aren’t connected to each other anymore and there is no way to perfectly align two satellites again. I’m assuming they don’t even have RCS they could achieve accurate translational movement with. If you push one satellite against another and they aren’t perfectly aligned, so that the thrust vector goes through the combined CG, they will start to spin immediately.


I agree that alignment would be a massive challenge. There aren’t even cameras on the satellites to help with visual alignment. There’s also the possibility of damage from the pushing or the ion exhaust, more so if the imperfect alignment sets them spinning.


The conclusion, if my understanding is correct, is that it (Bitcoin) is, but other cryptocurrencies may not be.

Quote:

First, coins like Bitcoin, which are based on the proof-of-work algorithm, are likely more energy intensive than mainstream finance. (But note that this intensity seems to be declining with time.) Second, coins like Ethereum, which are based on the proof-of-stake algorithm, are likely (far) less energy intensive than mainstream finance.

So that’s what the data says. But somehow, I feel like this evidence will satisfy neither the crypto critics nor the crypto advocates. And that’s why in the appendix, I add some obligatory speculation. But for now, let’s conclude with just the facts. To date, it seems clear that Satoshi’s claims about Bitcoin’s superior ‘efficiency’ have not come to fruition.


Have you heard about mining pools? I think the future of PoW coins is huge payment commissions, plus payment pools which make the commission acceptable; that's what's needed to make Satoshi's claim a reality.


I think I understand how a mining pool works. How would "huge payment commissions and payment pools" reduce the high PoW energy costs that the research describes? (I won't say the theory in the original post is correct. I am agnostic about that, as it's a hard equation to resolve.)


More txs per 1 MB block, which effectively means more txs per watt.
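
A sketch of that arithmetic (all figures hypothetical placeholders):

    block_energy_kwh = 1_000_000.0   # network-wide energy per block (assumed)
    txs_per_block = 2_000            # roughly what fits in 1 MB (assumed)

    def kwh_per_payment(payments_per_tx):
        # PoW energy is burned per block no matter how many payments it
        # settles, so batching payments dilutes the energy cost per payment.
        return block_energy_kwh / (txs_per_block * payments_per_tx)

    print(kwh_per_payment(1))     # plain one-payment txs: 500.0 kWh each
    print(kwh_per_payment(100))   # pooled/batched payments: 5.0 kWh each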



Quant was her real name, and it's an interesting one


It's not exactly a common name in Germany, but also not unusual. The owner family of BMW is called Quandt (a slight spelling variation), for example.


(1988), not 1899


I wonder if the average building was even 13 floors tall in 1899, particularly in South America, which didn't have its "tall buildings boom" until much later.


:/


\:

