Why is the US apparently responsible for this financially in the first place? Going forward, will other countries contribute financially to shore up our global defenses against future outbreaks?
It sounds like the US took up a global cause on its own and when the clock ran out they didn't renew because that emergency had subsided.
Is this what was being pinned as Trump's "fault" in recent press and social media?
The US used to pay for this because the downsides of not doing it are higher than the cost. Also, initiatives like these increase the US’s political and economic power overseas.
"Trump’s budgets have proposed cuts to public health, only to be overruled by Congress, where there’s strong bipartisan support for agencies such as the CDC and NIH. Instead, financing has [bold]increased[/bold].
Indeed, the money that government disease detectives first tapped to fight the latest outbreak was a congressional fund created for health emergencies.
Some public health experts say a bigger concern than White House budgets is the steady erosion of a CDC grant program for state and local public health emergency preparedness — the front lines in detecting and battling new disease. But that decline was set in motion by a congressional budget measure that predates Trump."
https://www.snopes.com/fact-check/trump-cut-cdc-budget/
The blame game is beyond getting old...it's moldy.
I'm at a loss how my good faith reply gets immediately downvoted away. The game has broken our collective ability to make sense of anything. The wrong scent gets pushed away without any authentic attempt to cohere.
Why is it so hard to think longer term? What's good for the world is often great for the United States. Catching and quarantining this in time would have saved us from lots of deaths and an almost certain recession coming up this year.
This may be true, and perhaps per dollar the United States even gets back more than it spends (while the rest of the world gets a bit of a free ride). But as a taxpayer, I'd rather the US spend that dollar elsewhere, even if the highest ROI is in spending on world pandemic prevention.
That's to prevent the current situation where, in game theory terms, we're being the sucker and so it's in everyone else's best interest to leech. We need to reach a new Nash equilibrium where everyone is contributing together (which is eventually more optimal for everyone involved, despite the short-term hit).
If for every tit dollar you spend on prevention abroad you can stop some tat percentage of disease spread, both inward and outward, and that has a provable impact on your own economy, I'd say playing the geopolitical sugar daddy is no selfless act, in either the short term or the long term.
I understand the need for assurance that those dollars are effectively working towards the intended goal, but assuming you are a sucker for paying the bill is only right if there was nothing of nutritional value to you on the menu.
So I guess the question is about working the data and crunching the numbers to know whether there is or isn't an impact on your own economy and the welfare of your population, not about not feeling like a sucker on the supranational ego scene.
Word! The exorbitant privilege of issuing the world's reserve currency has a price: you've got to keep the pump primed.
This setup used to work ok, it probably peaked in 1999 or so.
Low-level optimization (assembly) is mostly only useful for algorithms with instruction-level parallelism. Modern compilers are incredibly good at optimizing serial algorithms.
Making use of compile-time branch reduction can be massive. For example, template parameters that are evaluated at compile time can skip lots of work at runtime (rough sketch below).
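A minimal sketch of what I mean, assuming C++17's if constexpr (the function name and stride values here are made up for illustration):

    #include <cstddef>

    // Hypothetical kernel: Stride is a template parameter, so it is known at
    // compile time. The compiler drops the dead branch entirely, leaving a
    // tight unit-stride loop for Stride == 1 with no runtime check at all.
    template <std::size_t Stride>
    double strided_sum(const double* data, std::size_t n) {
        double acc = 0.0;
        if constexpr (Stride == 1) {
            for (std::size_t i = 0; i < n; ++i)          // contiguous fast path
                acc += data[i];
        } else {
            for (std::size_t i = 0; i < n; i += Stride)  // strided fallback
                acc += data[i];
        }
        return acc;
    }

    // Callers pick the variant at compile time:
    //   double a = strided_sum<1>(buf, n);  // fast path, branch compiled away
    //   double b = strided_sum<4>(buf, n);  // strided path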
It's crazy to think that if human civilization were to shut down, the earth would fairly rapidly become much better off. Other species would flourish again, plants would slowly take over our cities. There'd be less bickering. Hm.
“Many were increasingly of the opinion that they’d all made a big mistake in coming down from the trees in the first place. And some said that even the trees had been a bad move, and that no one should ever have left the oceans.”
The current push of environmentalism is sticking because it is clear that the current level of comfort is unsustainable on our current path. Our habitat is facing an existential crisis. I don’t think it’s noble to ignore the well-being of other lifeforms, but I think it is inaccurate to say that we are pushing environmentalism for their benefit.
Until the sun turned into a red giant, and all life on earth, potentially in the universe, was extinguished forever. Which is kinda sad for all those species and the universe overall.
Indeed. Before that happens there is also the supervolcano in Oregon and probably dozens of huge rocks on the scale similar to what happened 65m years ago. But let’s not be pessimistic! Think of the children.
Consider if your trading algorithm simply searched the history for that sliding window of data and then presented the following data as its "prediction" -- it's utterly useless. This is the function of a compressor rather than a predictor (rough sketch below).
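To make that concrete, here's a rough sketch of such a lookup "predictor" (my own illustration in C++, not any real trading system):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Find the historical window most similar to the trailing window and
    // return the value that followed it. This just replays the past, like a
    // dictionary lookup in a compressor; it encodes no model of why the
    // series moves, so it isn't really predicting anything.
    double replay_prediction(const std::vector<double>& history, std::size_t window) {
        const std::size_t n = history.size();
        if (window == 0 || n < 2 * window)
            return history.empty() ? 0.0 : history.back();  // not enough data, punt

        double best_dist = INFINITY;
        double best_next = history.back();
        // Compare every earlier window against the trailing one.
        for (std::size_t start = 0; start + window < n - window; ++start) {
            double dist = 0.0;
            for (std::size_t k = 0; k < window; ++k) {
                double d = history[start + k] - history[n - window + k];
                dist += d * d;
            }
            if (dist < best_dist) {
                best_dist = dist;
                best_next = history[start + window];  // value that followed the match
            }
        }
        return best_next;
    }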
I'm in a similar boat, being rather uneducated here. But I thought it wasn't completely unusual for countries to prod each other's air space. It seems really dangerous for Iran to just auto fire at an encroaching aircraft, potentially starting off a war. In that sense it may be lucky it wasn't a US fighter.
Apparently not so easily recognizable considering that the Russians shot down a Korean Air flight in 1983, the Americans shot down an Iranian flight in 1988, MH17, and now this.
All of these little details vary dramatically depending on the exact CPU and workload. I've developed a wide variety of scheduling strategies and have used neural networks to predict when a given strategy will be better. Scheduling is a giant non-deterministic mess with no ideal answers.
Ok but my data disagrees with you. Specifically: when apps get complex enough, the differences between CPUs and schedulers wash out in the chaos.
I’ve tested this over the course of a decade on multiple Linuxes, multiple Darwins, multiple x86 flavors (Intel, AMD, and lots of core counts and topologies), POWER, and various ARMs, and on many large benchmarks in two very different languages (Java and C/C++). In Java I tested it in two very different VMs (JikesRVM and FijiVM). I think the key is that a typical benchmark for me is >million lines of code with very heterogeneous and chaotic locking behavior stemming from the fact that there are hundreds (at least) of different hot critical sections of varying lengths and subtle relationships between them. So you get a law of large numbers or maybe wisdom of the masses kind of “averaging” of differences between CPUs and schedulers.
I’d love to see some contradictory data on similarly big stuff. But if you’re just saying that some benchmark that had a very homogenous lock behavior (like ~one hot critical section in the code that always runs for a predictable amount of time and never blocks on weird OS stuff) experiences wild differences between CPUs and schedulers then sure. But that just means there are no ideal answers for that scenario, not that there aren’t ideal answers for anyone.
It isn't /fundamentally/ conservative, it is just typically programmed to choose the most conservative (highest probability) predictions. You could integrate a liberal aspect by fuzzing the decision process to choose from lower probability predictions.
You'd get more creativity and a better ability to escape local minima, but at some cost when dealing with 'typical' cases and a risk of making particularly damaging mispredictions (see the sketch below).
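Something like temperature-scaled sampling would do it; a hand-wavy C++ sketch (the function and parameter names are made up, not any particular model's API):

    #include <cmath>
    #include <cstddef>
    #include <random>
    #include <vector>

    // Instead of always taking the highest-probability prediction, re-weight
    // the model's probabilities with a temperature and sample. Low temperatures
    // approach the conservative argmax behaviour; higher temperatures give
    // lower-probability (more "creative") choices a real chance.
    std::size_t fuzzy_pick(const std::vector<double>& probs, double temperature,
                           std::mt19937& rng) {
        std::vector<double> weights;
        weights.reserve(probs.size());
        for (double p : probs)
            weights.push_back(std::pow(p, 1.0 / temperature));  // sharpen or flatten
        std::discrete_distribution<std::size_t> dist(weights.begin(), weights.end());
        return dist(rng);
    }

    // e.g. with continuation probabilities {0.7, 0.2, 0.1} for
    // {"darling", "there", "you"}, temperature 0.2 almost always picks
    // "darling", while temperature 1.5 sometimes picks the other two.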
I think the point is rather that you can't get a more useful prediction by choosing a lower probability option unless you have AGI. Only an AGI could tell that you're not in the mood for "Hey" to be followed by "darling", and only a superhuman AGI could realistically compensate for human bias in data sets.
Without AGI there are still cases where the lower probability prediction will be better and will lead to escaping a local minimum. I'd argue that the potential benefits of calibrating that axis dynamically exist with or without AGI.
Most anything you do, in general, will fork off into many branches. Some being required dependencies and many others being tangential improvements or generalizations.
You can either note them in some way as they emerge, or ignore them and keep your code tidy of such notes.