Classic 2024 company. Started out with a few genuinely cool products and a quirky, fun attitude. Got popular, decided to exploit that good will and now everything they do seems greedy and inauthentic (regardless of whether any specific product is or not). Maybe this is a great product? But after the OP Field pricing shenanigans and OP-Z quality issues... hard to give them a second chance.
I don't know all the details, but I think they are starting to do novel work with the accelerator to produce neutrinos in an experiment called DUNE. So there actually is important stuff being done that isn't just superseded by CERN.
Yes, they are squeezing as much juice as they can out of the lemon. But there are diminishing returns. It can't continue forever, and the budget will be under inexorable pressure as the scientific returns become increasingly marginal.
This is a problem with science, particularly particle physics: discovery is like mining. Eventually the "ore body" of new phenomena is used up. The period of rapid discovery in particle physics (up to, say, the mid 1970s, with a decreasing trickle to 2012 and the discovery of the Higgs boson) will look very atypical and transitory in historical hindsight.
To be honest, you can't know exactly where you are going to find things. I think the rapid discoveries that particle physics was known for over the last century may mark the limits of our current technology. It is not that we lack the ideas; it may be that we have already squeezed out what low-hanging fruit was left.
Consider how vacuum tubes were used for a lot of discoveries in physics (nuclear physics in particular) and compare that with the technologies we have had to use for everything since: there is a huge gap. Maybe we are pushing the limits of our current technology, but the problem is that we don't know. If we knew, it wouldn't be research. The only way to find out if something is going to work is to at least do the R&D, design a plan, and maybe execute it. In the current scheme these are expensive projects, although if you divide the cost per person you will find it is comparable to other fields.
Let's not talk about how the R&D benefits whole industries, because that is a cliché by now (although it is a valid argument in itself). But the only way to advance is to try new things. When people talk about a crisis in particle physics, they usually mean that we did not discover the next big new thing. But each day we gain more understanding of the details of the Standard Model and how interactions work. These things are not sexy enough to be reported by the mainstream media. I post some of these on HN and they rarely get any traction, and I get why: it is usually hard to explain and very specialized. But that is no excuse for people to claim that we are not advancing.
That sounds right to me. I don't work in physics, but have many colleagues who have left it for that reason.
~That said, I think DUNE still does seem useful. Hard to say whether the cost overruns are a lack of good management or if the experiment is just a quagmire, though. Guess I lean towards the scientists here that it's the former.~
EDIT:
Doing some research, it seems the Hyper-K experiment might just be better, and funding might be better directed there than to Fermilab.
The larger-scale models aren't necessarily built on the smaller models; they are independent and generally less specific. They are probably even inputs to the more specific models. So, while rising sea levels and increased temps (on average) are basically guaranteed, the _exact_ change for a particular place is more uncertain.
Neil deGrasse Tyson in his Cosmos show described it like a person walking a dog. We don't know where the dog will be exactly, but we can see the person with the leash is moving in one direction, and the dog isn't going to ever get too far from him (because of the leash).
But the uncertainty in the dog analogy is the length of the leash. If we don't know the length of the leash, we can know the general directionality but can't make very strong conclusions about where the dog will be, right? I don't know that this supports the idea that uncertainty will only be around the edges; I've heard NDT talk about the opposite: even a small shift in the average causes relatively large changes in the probability of (previously) long-tail events.
A more accurate drill-down on the analogy would be that we don't know the exact length of the dog's leash either, but we do know what its limits are. We know it's not a 1 km leash, but the "uncertainty around the edges" encompasses not knowing whether it's 2 m or 0.5 m.
Part of the accuracy of the analogy is that just like when you see the human from a long distance, you don't know exactly how long the leash is, but you do have a solid expectation of the rough upper bound for the dog's distance from the human even still.
With climate change you're talking about having seen the human walk half way down a block already. You can see they're continuing to walk the same direction, and everyone agrees they'll continue walking to the end of the block if nothing changes. Do we know exactly where the dog is during that whole walk? No, but we do know it will cross a line two thirds of the way down the block sometime before the human reaches the end of the block. And the likelihood of the dog crossing is greater and greater the further along the block the human is.
>but we do know it will cross a line two thirds of the way down the block sometime before the human reaches the end of the block.
I'm trying to be careful here because I don't want to come across as if I'm against investing efforts to mitigate climate change. But in the real world, it ultimately comes down to trade-offs. So I agree with the premise that eventually, if unabated, climate change will likely cause "bad" outcomes. Apropos of this thread, though, is the definition of "bad". And from what other commenters have shared, there is large uncertainty in both the "low" and "high" temperature-change models out to about 2070 or so.
In other words, even if the circumstances meet the definition for a "high" temperature change it may result in the same outcome as if it were a "low" temperature change, but the mitigation efforts would likely be drastically different. In a world where the solution is trade-offs, mitigating for one scenario over the other comes with real consequences. The point I'm trying to get at here is that I think uncertainty has to be a larger part of the conversation when we're talking about competing risks, especially when that uncertainty is relatively high. While uncertainty is a central part of a scientific discussion, my hesitancy about even bringing this up is that I recognize that highlighting uncertainty is also a tactic of bad-faith actors to undermine any reasonable conclusions.
Because of containers, my company can now roll out deployments using well-defined CI/CD scripts, where we can control installations to force usage of pull-through caches (GCP Artifact Registry). So it actually has that data you're talking about, but instead of living in one person's head it's stored in a database and accessible to everyone via an API.
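Rough sketch of the kind of CI gate this enables, since "force usage" is vague on its own. Everything here (registry path, repo name) is invented for illustration, not our actual setup:

    # Hypothetical CI check: fail the build if a Dockerfile pulls a base image
    # from anywhere other than the pull-through cache. The registry path below
    # is made up; substitute whatever your Artifact Registry remote repo is called.
    import re
    import sys

    CACHE_PREFIX = "us-docker.pkg.dev/my-project/dockerhub-cache/"  # hypothetical

    def offending_base_images(dockerfile_text: str) -> list[str]:
        """Return FROM images that bypass the pull-through cache."""
        images = re.findall(r"(?mi)^FROM\s+(?:--\S+\s+)*(\S+)", dockerfile_text)
        stages = set(re.findall(r"(?mi)^FROM\s+(?:--\S+\s+)*\S+\s+AS\s+(\S+)", dockerfile_text))
        return [
            img for img in images
            if img != "scratch"            # no registry involved
            and img not in stages          # references to earlier build stages are fine
            and not img.startswith(CACHE_PREFIX)
        ]

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else "Dockerfile"
        bad = offending_base_images(open(path).read())
        if bad:
            print("Base images not routed through the cache:", ", ".join(bad))
            sys.exit(1)

Run something like that as a required pipeline step and nothing sneaks past the cache, but devs can still pick whatever images and versions they want.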
Tried that. The devs revolted and said the whole point of containers was to escape the tyranny of ops. Management sided with them, so it's the wild west there.
Huh. I actually can understand devs not wanting to need permission to install libraries/versions, but with a pull-through cache there are no restrictions save for security vulnerabilities.
I think it actually winds up speeding up CI/CD Docker builds, too.
Agreed that the argument "overprotection leads to lack of free speech in academia" is tenuous.
That said, I do wonder if we all _are_ being protected from opposing views these days. Like, we come across the opposing views, but usually in a filtered / caricatured form on Social Media/Fox News/MSNBC. It's actually kinda hard to find stuff without spin, in my experience.
EDIT
My hypothesis is that it's just cheaper to create speculative / opinion-based journalism than real investigation. Since the former gets enough clicks, there's not a strong financial reason to create good journalism.
It's always been hard to find things without spin. The only difference now is the range of different spin styles available. Every niche worldview has an online community.
The article mentions Clarence Thomas has been courted (even bribed?) by the very people paying the lawyers trying to overturn Chevron. I think it's naive to believe the judges don't have policy preferences that are strongly reflected in their rulings... If that wasn't the case the GOP wouldn't have blocked nominations from Obama to get their preferred judges in.
For sure, the President and Congress try to get Justices who agree with them. But you only have to go back to Anthony Kennedy and David Souter to realize that once they're on the Court, the Justices don't seem to ever feel beholden to the party or President that appointed them. George H. W. Bush appointed Souter, who ended up as one of the most reliably liberal Justices on the Court. Trump has been consistently smacked down by the very Justices he appointed.
And as sketchy as some of Thomas's dealings look, he's one of nine. Assuming for the sake of argument that he IS bought and paid for, you still need at least four other people to sign on to anything he says for it to be a ruling.
If you had an efficient market, wouldn't a competitor charge just a little less than others who are using surge pricing? Isn't the whole efficient market thing about how eventually prices should reach the cost of production, which would not be affected by surge demand?
> If you had an efficient market, wouldn't a competitor charge just a little less than others who are using surge pricing?
Absolutely. If everyone raised their prices on umbrellas due to rain, I'd lower my prices a bit to get those customers. I'd probably wind up making more overall too. Hopefully it would also lead to more return customers due to goodwill. If everything works out, not only would I sell more umbrellas, but I'd make more money in the future from returning customers. Win for me. And I feel like that's why we _don't_ have surge pricing on everything already.
This is great statistics, but it avoids the problem that moving to online polling has made it very difficult to get representative populations, so the data itself is biased in ways that cannot be counteracted by methods (only by assumptions, priors, etc.). Which makes this misleading, because it gives the forecast an air of confidence that is unjustified.
True, I suppose my disagreement is that I believe it doesn't go far enough to explain how big of a deal it is, and how there _aren't_ ways to deal with it without substantial, subjective intervention from the forecasters.
I've worked on weighting code for online polls; they literally rely on dozens of hand-picked decisions to stay "reasonable". Those decisions aren't factored into the error bars, making them appear smaller than they really are.
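To give a flavor of what those hand-picked decisions look like, here's a toy raking sketch; every target, margin choice, and cap below is invented for illustration:

    # Toy raking ("iterative proportional fitting") of the sort used to weight
    # online panels. The judgment calls: which margins to rake on, what the
    # population targets are, and where to cap the weights.

    # Hand-picked population targets (assumed, not from any real census table).
    TARGETS = {
        "age":  {"18-44": 0.45, "45+": 0.55},
        "educ": {"no_degree": 0.60, "degree": 0.40},
    }

    # A tiny, skewed online sample: each respondent is a dict of demographics.
    sample = [
        {"age": "18-44", "educ": "degree"},
        {"age": "18-44", "educ": "degree"},
        {"age": "18-44", "educ": "no_degree"},
        {"age": "45+",   "educ": "degree"},
        {"age": "45+",   "educ": "no_degree"},
    ]

    MAX_WEIGHT = 5.0  # another judgment call: cap so single respondents don't dominate

    def rake(sample, targets, iterations=50):
        weights = [1.0] * len(sample)
        for _ in range(iterations):
            for var, margins in targets.items():
                for category, target_share in margins.items():
                    idx = [i for i, r in enumerate(sample) if r[var] == category]
                    current = sum(weights[i] for i in idx) / sum(weights)
                    factor = target_share / current if current > 0 else 1.0
                    for i in idx:
                        weights[i] = min(weights[i] * factor, MAX_WEIGHT)
        total = sum(weights)
        return [w * len(sample) / total for w in weights]  # normalize to mean 1

    print(rake(sample, TARGETS))

None of those choices (targets, margins, cap) show up anywhere in the published margin of error.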
And as far as the fundamental style predictions, how can you use a single GDP number when Fox tells its viewers one number and MSNBC tells its viewers another?
This article does describe a faithful statistical effort, but to me it doesn't emphasize the risk of a "black swan" event enough.
Absolutely. But we’ve been past the “golden age” of polling using live callers on landlines for more than 20 years now. We now have a reasonable corpus of polling data that we can use to evaluate how good pollsters are at making the corrections (often educated guesses) that they use to adjust their polls.
The justification for Bayesian inference, that the posterior will eventually converge to the true distribution, breaks down unless your prior has support for a good approximation to the data-generating model. So without a good model of how polling results map to actual voter distributions, the Economist model is guesswork, to a large extent.
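A minimal numeric illustration of that support condition, with made-up numbers: if the prior puts zero mass anywhere near the truth, no amount of data fixes it, and the posterior just piles onto the closest value the prior allows.

    import math
    import random

    random.seed(0)
    TRUE_P = 0.7                       # the "real" share we are trying to learn
    data = [random.random() < TRUE_P for _ in range(10_000)]

    # A prior that only allows two hypotheses, neither of them near 0.7.
    prior = {0.3: 0.5, 0.5: 0.5}

    def posterior(prior, data):
        # Work in log space to avoid underflow on long datasets.
        log_post = {
            p: math.log(w) + sum(math.log(p if x else 1 - p) for x in data)
            for p, w in prior.items()
        }
        m = max(log_post.values())
        unnorm = {p: math.exp(v - m) for p, v in log_post.items()}
        z = sum(unnorm.values())
        return {p: v / z for p, v in unnorm.items()}

    # Essentially all mass lands on 0.5, the nearest allowed value, not the true 0.7.
    print(posterior(prior, data))

The polling analogue is that if your model of how online respondents map to actual voters is badly wrong, more data and fancier machinery just make you more confidently wrong.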
I've worked as a data scientist for political campaigns (but no longer do). The elephant in the room is that there are so many forces that make this election different from historical ones, from polls moving online away from phones (making them much, much less reliable) to Trump's conviction to a completely unprecedented/fractured media landscape. Even if these forecasts were accurate, their usefulness for anyone other than people spending ad money is absolutely zero. Now that these forecasts have so much bigger error bars, I think public forecasts are actually harmful and just a scummy money grab by the media outlets that post them.
EDIT:
Why are they harmful? At best they make lots of people angry about something that hasn't happened. At worst they skew the election by making people not vote, or afterward serve to justify election fraud complaints ("my candidate wasn't supposed to lose based on the forecasts, so fraud occurred").
I think we all need to take a deep breath, accept we have little understanding what will happen, _vote_, and hope for a sane outcome.
Totally agree. Journalists should be explaining the stakes, not trying to predict the outcome. Predicting the outcome of an election is mostly a fool’s game.