
On a Mac, you can switch between apps with Command-Tab or between windows of the same app with Command-`, but there's no way to cycle through all windows or bounce between the two most recently used windows.

Maybe this used to make sense when apps were single-purpose, but I do basically everything in a web browser or a terminal, so not being able to bounce to the previously selected window (of whatever kind), as I can with Alt-Tab on Linux or Windows, is frustrating.

Also Command-` switches to the next window, not the previous one like I would expect.

macOS removed subpixel antialiasing, honestly for understandable reasons, which makes rendering on low-ppi displays blurry, but high-ppi displays are still super expensive. I got a 32" 4k monitor (~140ppi) at Costco for $250; a >200ppi display of the same size costs 20x that amount.


For web apps, spinning them into “installed” apps (doable in both Chrome and Safari now) is the move. This unclogs your tab bar, gets rid of the pointless persistent browser chrome, and gives you the benefit of OS task management capabilities.

You can add Shift to both Command-Tab and Command-` to move in the reverse direction.


Woah, I did not know about this installable web app feature - this is a game changer. Thanks for sharing.


Also I find the default Command-` to be unintuitive, especially on non-US keyboards (` is next to left Shift for me). I remapped Command-` to Option-Tab so you only have to move your thumb.


32" 6K monitor from ASUS costs $1400, 27" 5K Dahua monitor is $500, it's not $250, but we are slowly getting there ...


Not bad! Thanks for pointing those out.


BetterDisplay is $20 or so and solves the ppi problem for the most part.

It's the dumbest thing Apple has ever done, and hats off to the BetterDisplay dev. Best money ever spent on a desktop tool, easily.


The solution is subpar, even if it's nice to have one. What Windows and Linux have is hinting for text and good antialiasing on vector elements. They map these to the actual hardware pixels so you won't have wobbly lines.

These don't matter as much when you have high PPI. But they're a lifeline on low PPI displays (and there are a lot of those).


I completely agree, having gone through that frustration myself a couple of years ago, but it at least makes the experience sort of good enough for my backend SWE usage instead of making my eyes hurt. It's still much better on other OSes on the same display, absolutely.


Most macOS keyboard commands that let you cycle between things (like windows or applications) can be "reversed" by adding the SHIFT key.

So CMD+SHIFT+TAB cycles in the opposite order from CMD+TAB, etc.


> I do basically everything in a web browser

Then you are deliberately handicapping yourself; this isn't something you can blame on the OS. It's like complaining that a car has bad fuel economy because you always stay in first gear.

As for the displays, you are comparing apples to oranges. You can get a high DPI monitor which is smaller than 32 inches for cheap, which is plenty of screen for the distances where DPI differences are important.


Well, I don't do everything in the browser and the terminal. I also use my IntelliJ IDEs.

But other than that? Most macOS apps are now inferior to browser-based analogs. Calendar, contacts, email, iMessage, Music, TV - they all just suck.


My experience is just the opposite. I have never encountered a cloud app which is anywhere near the best paid apps in quality. What cloud app is better at photo editing than Affinity or Photoshop? What cloud calendar is better than BusyCal? What cloud spreadsheet is better than Excel? IDE and text editor? Etc.


>Then you are deliberately handicapping yourself, this isn't something you can blame on the OS.

The classic "You're holding it wrong" defense. Especially when the alternatives don't have this problem.


If you think that the purpose of OS X or Apple devices is to live in the web browser or live in the terminal, then you've been very misinformed. It's on the level of buying a motorcycle and expecting it to have a roof. And then complaining about the manufacturer. Apple stuff has worked like this for decades.


I've been following bcachefs development for a while and supporting Kent on Patreon, so bcachefs losing its mainline, supported status broke my heart a little. My home storage has always been under ZFS, but being out-of-tree and requiring old kernels is increasingly annoying for me. It's basically the only reason I still use Debian; all my other systems are Fedora at this point.

Also, I hit one of the zfs native encryption send/receive bugs after it came out. It was clearly a rushed feature and made me question the future of zfs maintenance.

My plans have now changed to btrfs and I'm taking the opportunity to create a rock-solid and thoroughly tested backup/restore strategy. Wish me luck!


I've long been a ZFS stan, but I am slowly coming around to the idea that I probably won't be able to continue using it at even the small scales I have planned. It's a damned shame.


btrfs is fine for single disks or mirrors. In my experience, the main advantages of ZFS over btrfs are that ZFS has production-ready raid5/6-like parity modes and much better performance for small sync writes, which are common for databases and hosting VM images.


> has much better performance for small sync writes

I spent some time researching this topic, and in all benchmarks I've seen and my personal tests btrfs is faster or much faster: https://www.reddit.com/r/zfs/comments/1i3yjpt/very_poor_perf...


Thanks for sharing! I just set up an fs benchmark system and I'll run your fio command so we can compare results. I have a question about your fio args though. I think "--ioengine=sync" and "--iodepth=16" are incompatible, in the sense that iodepth will only be 1.

"Note that increasing iodepth beyond 1 will not affect synchronous ioengines"[1]

Is there a reason you used that ioengine as opposed to, for example, "libaio" with a "--direct=1" flag?

[1] https://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-...
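
For reference, a rough sketch (in Python, with a placeholder test file and workload sizes rather than the parent's actual command) of the two invocations being contrasted: the synchronous engine, where iodepth beyond 1 has no effect, and libaio with --direct=1, where the queue depth actually applies:

    import subprocess

    # Placeholder target and workload; substitute the actual test file and sizes.
    common = [
        "fio", "--name=small-sync-writes", "--filename=/mnt/test/fio.dat",
        "--size=1G", "--bs=4k", "--rw=randwrite", "--runtime=60", "--time_based",
    ]

    # Synchronous engine: per the fio docs, iodepth > 1 has no effect here.
    sync_cmd = common + ["--ioengine=sync", "--fsync=1"]

    # Async engine with O_DIRECT: iodepth=16 actually keeps 16 I/Os in flight.
    libaio_cmd = common + ["--ioengine=libaio", "--direct=1", "--iodepth=16"]

    for cmd in (sync_cmd, libaio_cmd):
        print("running:", " ".join(cmd))
        subprocess.run(cmd, check=True)

Running both against the same filesystem would show how much of the difference comes from the engine choice rather than the filesystem itself.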


My intuition is that the majority of software uses the standard sync FS API.


Context: I mostly dealt with RAID1 in a home NAS setup

A ZFS pool will remain available even in degraded mode, and correct me if I'm wrong, but with BTRFS you mount the array through one of the volumes that is part of the array, not the array itself. So if that specific mounted volume happens to go down, the array becomes unmounted until you remount another available volume that is part of the array, which isn't great for availability.

I thought about mitigating that by making an mdadm RAID1 formatted with BTRFS and mounting the virtual volume instead, but then you lose the ability to prevent bit rot, since BTRFS loses that visibility if it doesn't manage the array natively.


> with BTRFS you mount the array through one of the volume that is part of the array and not the array itself

I don't think btrfs has a concept of having only some subvolumes usable. Either you can mount the filesystem or you can't. What may have confused you is that you can mount a btrfs filesystem by referring to any individual block device that it uses, and the kernel will track down the others. But if the one device you have listed in /etc/fstab goes missing, you won't be able to mount the filesystem without fixing that issue. You can prevent the issue in the first place by identifying the filesystem by UUID instead of by an individual block device.


> I don't think btrfs has a concept of having only some subvolumes usable. Either you can mount the filesystem or you can't.

You can still mount the BTRFS array as degraded if you specify it during mount. But then this leads to some other issues: data written while degraded will not automatically be copied over without doing a scrub, while ZFS will resilver it automatically, etc.

> You can prevent the issue in the first place by identifying the filesystem by UUID instead of by an individual block device.

I tried that, but all it does is select the first available block device during mount, so if that device goes down, the mount also goes down.


Reading "Bottle of Lies" by Katherine Eban, I'd argue that the collapse of the FDA was well underway before the current administration. The FDA was completely unable to regulate overseas drug manufacturers, resulting in many, many problems. Sincere attempts to inspect overseas drug makers with random inspections universally results in shutdowns, which cause politically unpopular drug shortages, making enforcement politically difficult.


That seems more like an "underfunded and underjurisdictioned" problem for a portion of what they do, rather than collapse of the agency.


I’m very familiar with this space, specifically parenteral manufacturing.

The real challenge lies in the expectations the FDA has set for manufacturing. Over time, the regulatory space has been heavily influenced by academic-driven theoretical scenarios for microbiological contamination. While well-intentioned, these theoretical risks often drive overly stringent requirements that don’t always reflect real-world manufacturing risks.

As a result, it’s becoming prohibitively expensive to manufacture drugs for the U.S., especially sterile injectables.

And truly it gets worse every year…


https://www.propublica.org/article/fda-drug-loophole-sun-pha...

> Digging through company records and test results, they found more evidence of quality problems, including how managers hadn’t properly investigated a series of complaints about foreign material, specks, spots and stains in tablets.

> Those unknowns have done little to slow the exemptions. In 2022, FDA inspectors described a “cascade of failure” at one of the Intas plants, finding workers had destroyed testing records, in one case pouring acid on some that had been stuffed in a trash bag. At the second Intas factory, inspectors said in their report that records were “routinely manipulated” to cover up the presence of particulate matter — which could include glass, fiber or other contaminants — in the company’s drugs.

> Sun Pharma’s transgressions were so egregious that the Food and Drug Administration imposed one of the government’s harshest penalties: banning the factory from exporting drugs to the United States.

> A secretive group inside the FDA gave the global manufacturer a special pass to continue shipping more than a dozen drugs to the United States even though they were made at the same substandard factory that the agency had officially sanctioned. [...] And the agency kept the exemptions largely hidden from the public and from Congress. Even others inside the FDA were unaware of the details.

FDA inspectors found actual, live contamination in drugs produced by a manufacturer, and the agency secretly (otherwise, it would have caused "some kind of frenzy" in the public) gave it an exemption anyway, to make sure supply wasn't impacted. This isn't a "funding" issue, and it's not a "regulations are too strict" issue. This is an issue with the people running the agency behaving completely inappropriately.


I think it can be both, actually. The FDA, through over-regulation, scared local manufacturers away from generics, which are generally low margin. Over time you become dependent on Indian generics, which have a horrible track record; this is a country that has massive lead contamination in spices and the government does nothing about it. Too late now, the ship has sailed and you are forced to utilize these. No doubt it's a structural problem in the FDA, but it can also be one where perhaps the stakes were kept too high for manufacturing in the US.


That is a problem of the government not inspecting imports and/or allowing them from places with known problems.

If the government had said the imports from India are not allowed due to insufficient quality controls, then the market price for the generics would increase in the US, maintaining the necessary profit margins for the manufacturers to provide higher quality medicine produced at higher cost.


This all seems entirely reasonable. This is a cost-benefit calculation. If bad drugs kill 1 person and a drug shortage kills 100, what do you choose?

The FDA chose a practical middle ground. Ban what isn't critical, and for those that are, they put additional mitigations in place:

> Exempted drugs were sent to the United States in a “phased manner,” the company said, with third-party oversight and safety testing.

>“The odds of these drugs actually not being safe or effective is tiny because of the safeguards,” said one former FDA official involved in the exemptions who declined to be named because he still works in the industry and fears professional retribution. “Even though the facility sucks, it’s getting tested more often and it’s having independent eyes on it.”


Then they should have been transparent about it.


I probably agree with transparency, but there is very little information on the ways in which the FDA was not transparent.

The article states: "And the agency kept the exemptions largely hidden from the public and from Congress."

How so? What are the examples?

The FDA maintains a public red list of companies with import bans, and a green list of companies operating under exemptions.

What transparency are we talking about?


Older article from 2019:

https://www.npr.org/sections/health-shots/2019/05/12/7222165...

>Internal divisions and pressure from Congress also limited the FDA's response to overseas violations,

whistles

>delays in launching a generic version of Lipitor could cost Americans up to $18 million a day, according to a 2011 letter from a group of U.S. senators to the FDA commissioner.


It can be true that not every function of the FDA works as intended while it still provides functions that are crucial to American society and that are now being removed.


But bacteria are all natural!


It feels to me like the tyranny of small differences. The fact that the various watchdogs amplified such specific issues greatly overshadowed their support of the mission. From what I've read, the FDA is a backwater from a funding perspective, and yet a punching bag from a regulatory point-of-view.

> He and his colleagues had also been engaged in a decades-long debate with a sprawling community of watchdogs — mostly doctors, lawyers and scientists from outside the agency — who were often broadly supportive of the agency’s mission but who fought with officials like Califf, sometimes bitterly, over the specifics: How should the F.D.A. be financed? What kind of evidence should new drugs and medical devices require? How should regulators weigh the concerns of industry against the needs of doctors, patients and consumers?


Sooo that sounds like there's a whole lot of ways for it to get way, way, way worse.

The existence of problems does not imply there cannot be more plentiful, more diverse, and more severe problems in the near future.


If Chesterton's fence doesn't have a working latch, then it's appropriate to remove it entirely.


If Chesterton's fence is intangible and invisible, then it's appropriate to remove it entirely. If it doesn't have a working latch, it doesn't serve as a hard barrier, but it may still serve as a soft barrier, and that may be good enough.

Or, conversely, important things may have been relying on access via the latch-free fence gate: fixing the latch without providing a more appropriate solution to those issues could cause more harm than the benefit you get from "now the fence actually functions as a barrier". (Sure, the latch keeps the wolves out, and stops them picking off the sheep – but it also keeps the sheep away from their only freshwater source, without which most of the sheep are going to die.)


This doesn't really make sense, especially as applied to the Chesterton's Fence parable. If it doesn't have a latch and you don't know why it doesn't have a latch...


We know for a fact (like actual empirical fact) that FDA prevents vast numbers of unsafe and ineffective drugs from reaching the market. This is absolutely indisputable.

So uhhh, maybe we think in reality instead of offloading to metaphor.


The whole “let’s paint a general principle with a broad brush over this highly nuanced thing I know nothing about” is a huge problem with discourse in our society.


> We know for a fact (like actual empirical fact) that FDA prevents vast numbers of unsafe and ineffective drugs from reaching the market. This is absolutely indisputable.

We also know for a fact that the FDA lets unsafe and ineffective drugs into the market, especially from overseas, slapped with a label that they're safe according to the FDA. If it's a crapshoot, what's the point really?


The point is that it's nowhere close to a crapshoot.

99.99999% of drugs ever created are unsafe or ineffective.

99%+ of the drugs you will ever encounter in your life are both safe and effective.

That is not a crapshoot. Obviously.


If you are in doubt, ProPublica has done a huge exposé on the major, deliberate lapses in FDA regulation of foreign-manufactured drugs:

https://www.propublica.org/article/fda-drug-loophole-sun-pha...


How is that relevant to my point?

Despite its failures (however many you want to point out), our regulatory regime converts a pipeline of almost universally ineffective and/or dangerous compounds into a marketplace of almost universally safe and/or effective compounds. This is a fact.

Unless you think no one is trying to, would try to, or would deceive themselves into accidentally releasing a dangerous or ineffective compound to consumers?


Or fix the latch? Or was this a sarcastic comment?



I know what Chesterton's fence is; my question was specifically about why one would throw it away if the latch doesn't work.


The 2018 valsartan recall is a perfect example of this - an overseas manufacturer's nitrosamine contamination went undetected for years despite theoretical oversight, affecting millions of patients.


Chiron: in 2004, the UK government shut down their flu vax plant (it was in the UK). It later came out that the FDA knew what was up and basically let it slide. It was one of the early anti-vax movement's torches... Crunchy moms pissed about shots for kids and parents on Oxycodone were not happy with Pharma (or corporations in general: Enron etc.)

> politically unpopular drug shortages ...

Ask your ADHD friends about how they get their meds.

One side wants to keep it, the other side wants to get rid of it. No one wants to fix the problem.


> No one wants to fix the problem.

That’s not what wedge issues are for. They’re not meant to be solved, because then they’re used up, and there’s airtime to fill in the meantime.


That just means the FDA was restricted. The FDA is fine. The people funding the FDA are not.


It's interesting that several products from the 1920s contain measurable quantities of DEHP, which was apparently first synthesized in the 1930s. How did that happen?

For example, the cocoa powder from the 1920s https://www.plasticlist.org/product/990


That is interesting. I wonder if it's a byproduct of another process and the 1930s date was only when it was commercially produced in isolation.


Great article! Thanks for sharing your research into the history of these super interesting theaters and projection systems.

There is something I've wondered about though:

> While far from inexpensive, digital projection systems are now able to match the quality of Omnimax projection.

Are they really? The St Louis Science Center Omnimax was switched from the 70mm film system to "laser 4k" digital projection in 2019. I've only been to one show but it didn't seem particularly sharp, with large clearly visible pixels. It was very bright, with high contrast, though.

4k seems like a pretty low resolution for such a large screen?


This is definitely an area for debate. I've seen the physical resolution of a 70/15 film frame estimated at 70MP, which is obviously a lot more than the ~8MP of 4k. The MP comparisons between film and digital are a little iffy though, and digital ought to be sharper than film within the limitations of that resolution. Ultimately it comes down to marketing but, having not had a direct comparison, I would still expect 70mm to look better than a digital projection system.

I think that digital LED domes might beat film because of the excellent light output and color reproduction, but I guess I'll have to shell out for the Sphere to find out as there are very few of that size.
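
For a rough sense of the gap, here are the pixel counts behind those numbers; the 70 MP figure is just the film estimate quoted above, not a measurement, and digital dome projectors may use DCI rather than UHD rasters, which only changes the totals slightly:

    # Rough pixel-count comparison; 70 MP is the 70/15 film estimate cited above.
    film_mp = 70.0

    for name, (w, h) in {"4K UHD": (3840, 2160), "8K UHD": (7680, 4320)}.items():
        mp = w * h / 1e6
        ratio = film_mp / mp
        print(f"{name}: {mp:.1f} MP, roughly {ratio:.0f}x fewer pixels than the film estimate")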


I did some scanning for Universal. Depending on how the image is framed, you can usually just squeeze 4K from 35mm. From the 70mm I had, I easily pulled 8K, and I'm pretty sure I could have gone to 10K.


I thought it was 8k rather than 4k that was generally considered to be roughly equivalent to IMAX?


> Do you think the _process_ of making music is the important part, or the final sounds that are produced?

Pharmaceuticals are regulated not just by testing the end product but by regulating the entire production process[1]. How this came to be has an interesting story but I believe that it's essentially the correct way to do things.

Saying that process is irrelevant, that only the end product matters, is a very persuasive idea, but the problem comes when it's time to validate the end product. Any test you can perform on a black box can only show the presence or absence of a specific problem, never the absence of problems.

“Program testing can be used to show the presence of bugs, but never to show their absence!” - Dijkstra

[1] If you're interested in how this regulatory structure has failed to regulate international drug manufacturers, check out "Bottle of Lies" by Katherine Eban.


I hope death is cured soon so the boomers can rule the world forever.


> intermittence is a problem, but I think we can deal with it by being cleverer

Solar power is great, but intermittence is the main issue with it. If you look at 30-year historical weather data, many highly populated regions have two-week periods with almost complete cloud cover. Storage and intercontinental power transmission are usually listed as the solutions to this, but the costs of these solutions are rarely included.


Solar panels can still generate up to 25% of their peak power under full cloud cover.


The issue is renewables are not a complete solution no matter how good it feelz


I was just injecting facts into the discussion without taking a side. I know that's confusing behavior.


People really take offense at how energy gets produced, even...


> the costs of these solutions are rarely included.

Solar plus storage is included in all the major levelized cost reports, like from the NREL.


Not in any realistic sense. This report

https://www.eia.gov/analysis/studies/powerplants/capitalcost...

just mashes together a PV array with about an hour of storage and quotes a price for that which is low and is certainly not going to get you through the night.

So many things about that report and the discourse around it drive me nuts and, I think, contribute to people talking past each other. For instance, quoting one price for solar energy is nonsensical when the same solar panel is going to give much more energy in Arizona than in upstate New York. The cost of a solar + battery system is going to be different in different places. In upstate NY we deal with a lot of retailers that are based in places like Bentonville, AR who just can't believe you might need an electric space heater in late April or otherwise your chickens might die. Since 95% of the world's population lives in a milder climate it's no wonder our needs don't get taken seriously.

The intermittency problem involves: (1) diurnal variation (overnight), (2) seasonal variation (do you overbuild solar panels 3x so you have enough generation in the winter or do you invest in very long term storage?) and (3) Dunkelflaute conditions when you are unlucky and get a few bad weeks of weather.

I've seen analyses of the cost of a grid that consider just smoothing out one day, but not one that covers seasonal variation. (So much of it comes down to: "how many days of blackout a year can people tolerate?")

With a significant overbuild or weeks' worth of storage capacity, costs are not going to be so favorable against nuclear energy. The overbuild offers the possibility that you could do something useful with the extra power, but it is easier said than done because "free" power from renewables is like a free puppy. You have to build power lines to transmit it, or batteries to store it, or you have to feed it into some machine whose capital costs are low enough that you're not going to worry about the economics of only running it 20% of the time. (Go tell a Chemical Engineer about your plan to run a chemical factory 20% of the time and that's probably the last time you'll hear from them.)
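
To make the gap between (1) and (3) concrete, here is a back-of-envelope sketch; the daily demand figure and battery price are illustrative assumptions on my part, not numbers from any report:

    # Back-of-envelope battery sizing; both inputs are illustrative assumptions.
    daily_demand_kwh = 30        # assumed average household consumption per day
    battery_usd_per_kwh = 150    # assumed installed battery cost

    for days in (1, 3, 14):      # overnight, a short lull, a two-week Dunkelflaute
        kwh = days * daily_demand_kwh
        print(f"{days:>2} days of autonomy: {kwh:>4} kWh, about ${kwh * battery_usd_per_kwh:,}")

Whatever the exact inputs, the point is that covering a multi-week lull needs an order of magnitude more storage than smoothing out one night.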


A case study for Denmark. Not even using the latest plummet in price of BESS.

https://www.sciencedirect.com/science/article/pii/S030626192...

Generally: renewables and storage solve somewhere in the high 90s percent.

Then throw some gas turbines on it. Low CAPEX high OPEX. Just like we’ve done for the past decades with the previous ”base load and peaking” paradigm.

Those gas turbines will be a minuscule part of the total energy supply.

When it finally becomes the most pressing issue the gas turbines can trivially be fueled by green hydrogen, green hydrogen derivatives, biofuels or biogas from collecting food waste. If they are still needed.

Let's wait and see what aviation and shipping settle on before attempting to solve a future issue today.


I love how green hydrogen is assumed to become abundant and trivially easy to retrofit into existing infrastructure but fast neutron reactors are automatically considered infeasible by comparison.

Or that by far the easiest way to produce massive amounts hydrogen without emitting carbon into the atmosphere is… wait for it… nuclear power.


> Or that by far the easiest way to produce massive amounts hydrogen without emitting carbon into the atmosphere is… wait for it… nuclear power.

No, that isn't the easiest way.

The easiest — not best, easiest — way to produce massive amounts of hydrogen is whatever your electrical power source is plus some low corrosion rods in a river.

If you want the cheapest, well, in most cases PV is the cheapest source of electricity — there's variance, sometimes it's wind.

Nuclear is so expensive that it's the same range of prices as PV plus batteries. And when you're using the electricity to make hydrogen, with the hydrogen as the storage system, batteries are redundant.


Since PV needs batteries to be grid-useful (duck curve and all that), it's perfectly reasonable to have both.

And no, hydrogen as the storage system doesn't make batteries redundant. Law of conservation of energy. You are talking about using electricity to split water molecules, presumably more electricity to compress and store the collected hydrogen, and then you have the losses associated with converting back to electricity in a fuel cell or conversion to mechanical energy through combustion.
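
A rough sketch of that chain of losses; the stage efficiencies below are generic round numbers I'm assuming for illustration, not measured figures for any particular system:

    # Illustrative round-trip efficiency of electricity -> hydrogen -> electricity.
    # All stage efficiencies are assumed round numbers, not vendor data.
    electrolysis = 0.70   # assumed electrolyser efficiency
    compression = 0.90    # assumed compression/storage overhead
    fuel_cell = 0.55      # assumed fuel cell (or turbine) conversion back

    round_trip = electrolysis * compression * fuel_cell
    battery_round_trip = 0.90  # assumed lithium-ion round-trip efficiency

    print(f"hydrogen path: {round_trip:.0%} of the input electricity comes back")
    print(f"battery path:  {battery_round_trip:.0%} (assumed) comes back")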

A square meter of PV provides a theoretical maximum of ~1KW at 100%. Even the experimental perovskite cells only get 45% of that. 450W/m^2. Whereas nuclear is measured in gigawatts per reactor with multiple reactors per plant.

Then a storm hits. Far less sunlight. Then something like hail hits. Damage to panels. Then there's the issue of security if someone wanted to cripple the grid.

Nuclear is 24/7, rain or shine, wind or no, impervious to even hurricanes, and already has a robust security and logistics apparatus around it.

I have PV panels on my home. I love the idea of decentralized power. But the hydrogen economy is pretty theoretical at this point. Hard to store for any length of time, comparatively low combustion energy, low energy density overall, etc. It may happen, but "may" is a bad bet for long term national policy. I'd rather push more toward electrified high speed trains than hydrogen.


> Since PV needs batteries to be grid-useful (duck curve and all that), it's perfectly reasonable to have both.

Needs storage*; what that storage is depends on other factors.

(* there's a "well technically" for just a grid, in that China makes enough aluminium they could build an actually useful global power grid with negligible resistance, but it doesn't matter in practice)

As it happens, I agree with one crucial part of your final paragraph — hydrogen is hard to store for any length of time (not sure you're right about comparatively low combustion energy but that doesn't matter, low energy density overall is accurate but I don't think matters).

I favour batteries for that because battery cars beat hydrogen cars, and the storage requirements for a power grid are smaller than the requirements for transport, so we can just use the big (and expanding) pile of existing factories to do this.

But hydrogen has other uses than power, and where it's an emergency extra storage system you don't necessarily need a huge efficiency. That said, because one of the main other uses of hydrogen is to make ammonia, I expect emergency backup power to be something which burns ammonia rather than hydrogen gas — not only is it much more stable and much easier to store, it's something you'd be stockpiling anyway because fertiliser isn't applied all year round.

But you could do hydrogen, if you wanted. And some people probably will, because of this sort of thing.

> A square meter of PV provides a theoretical maximum of ~1KW at 100%. Even the experimental perovskite cells only get 45% of that. 450W/m^2. Whereas nuclear is measured in gigawatts per reactor with multiple reactors per plant.

This is completely irrelevant for countries that aren't tiny islands or independent cities.

Even then, and even with lower 20% efficient cells, and also adding in the capacity factor of 10% that's slightly worse than the current global average, Vatican City* has the capacity for 11.1 kW/capita: https://www.wolframalpha.com/input?i=0.5km%5E2+*+1kW%2Fm%5E2...

They are of course not going to tile their architecture in PV — there's a reason I wrote "that aren't … independent cities" — but this is a sense of scale.

(* Number 7 on the Wikipedia "List of countries and dependencies by population density": https://en.wikipedia.org/wiki/List_of_countries_and_dependen...)
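
Spelled out, the arithmetic behind that Wolfram Alpha query (the area, efficiency, and capacity factor are the figures given above; the population is approximate):

    # Reproducing the ~11 kW/capita figure from the linked Wolfram Alpha query.
    area_m2 = 0.5e6              # 0.5 km^2, Vatican City's land area
    peak_irradiance_kw = 1.0     # ~1 kW/m^2 peak solar irradiance
    panel_efficiency = 0.20      # the "lower 20% efficient cells" assumed above
    capacity_factor = 0.10       # "slightly worse than the current global average"
    population = 900             # approximate

    average_kw = area_m2 * peak_irradiance_kw * panel_efficiency * capacity_factor
    print(f"{average_kw:,.0f} kW average output -> {average_kw / population:.1f} kW per capita")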

> Then a storm hits. Far less sunlight.

That's what the storage is for

> Then something like hail hits. Damage to panels.

Panels are as strong as you want them to be for the weather you get locally. If you need bullet-proof (FSVO), you can put them behind a bullet-proof screen.

> Then there's the issue of security if someone wanted to cripple the grid.

The grid isn't the source; if you want to cripple a grid, doesn't matter if the source is nuclear, PV, coal, or hamster wheels.

> Nuclear is 24/7, rain or shine, wind or no, impervious to even hurricanes, and already has a robust security and logistics apparatus around it.

Really isn't 24/7, it's 70-80%: https://en.wikipedia.org/wiki/File:Worldwide_Nuclear_Power_C...

And mis-estimating the environmental risks is exactly what went wrong with Fukushima.


> And mis-estimating the environmental risks is exactly what went wrong with Fukushima.

It took a massive earthquake and tsunami to cause this, and the number of deaths/injuries due to the power plant is a rounding error compared to the earthquake and tsunami. Fukushima actually did most things right with the notable exception of not putting the backup generators on the roof. Had they put the generators on the roof, neither of us would have ever known the name "Fukushima".

When evaluating the Fukushima exclusion zone, compare it to the Exxon Valdez oil spill of 1989. In that case, we still haven't cleaned up all the oil, which persists up to 450 miles from the initial spill. By comparison, you want to transition to ammonia as a fuel source, which you correctly note is easier to store long term than molecular hydrogen and far more energy dense. Sounds like a good deal since molecular nitrogen is incredibly abundant as well.

Now I want you to imagine there's an ammonia spill in the magnitude of Exxon Valdez. Long term, the ammonia would almost certainly dissipate faster than crude oil, but the immediate acute toxicity would be far worse. You're killing basically all sea life in the area, the fumes would take out most birds and even quite a few people. If the spill were on land, it could severely compromise the ability to grow crops in the region for a long time. And that's not in the face of a massive earthquake and tsunami, but inattentiveness on the part of a single ship's crew.

The point being that large scale energy production and storage will NEVER be fuzzy and completely safe. The most common metric is deaths per unit of electricity. If a power source is small, even one death can be unforgivable. For massive amounts of power, statistics matter.

https://ourworldindata.org/safest-sources-of-energy

Note that the nuclear stats include both Chernobyl and Fukushima. This is notable since Chernobyl was a worst case scenario with a flawed design that has never existed in Western commercial reactors precisely because it was so unacceptably dangerous: no containment vessel, graphite moderation, graphite control rod tips, lack of education for its staff, a culture of secrecy, etc.

In the meantime, nuclear has provided obscenely large amounts of electricity since its inception. I'm all for expanding solar and wind, but folks really need to understand the real enemy is fossil fuels: coal, oil, natural gas, etc. The single largest threat to our survival as a species isn't a multi-kilometer exclusion zone but a CO2-laden atmosphere that makes the entire equatorial zone uninhabitable, and that's precisely what we're looking at within a century.

The faster we can move off carbon-based fuels by any means necessary, the better. That includes nuclear. Excluding nuclear from the conversation out of hand is lunacy.


Hydrogen is already used in many industrial processes (~1e8 metric tons/year), including turbo-alternators, while there is not a single ready-to-be-built model of industrial breeder reactor.


Not only was a design ready to be built, it was built. Went online 39 years ago. Produced 1.2GWe at peak. Not only produced power on its own but reprocessed spent fuel from other nuclear reactors.

Decommissioned 28 years ago. Because it didn't work? No. Because it wasn't safe? No. Because it wasn't reliable? No, it had a 95% availability rate.

It was taken out of service due to political pressure and legal maneuvering, not technical reasons.

https://en.wikipedia.org/wiki/Superph%C3%A9nix


Facts: Superphénix was a prototype. It didn't reach its goal (reaching the industrial stage); not a single model of breeder reactor has. Mentioning its high availability rate neglects planned shutdowns (planning enough of them improves it). Its load factor in 1996 (just before its shutdown), a more relevant figure, was 0.31, well below the minimum viable for an industrial reactor. Some people consider the project a success, but no expert or its operator has ever said so (they proclaimed their confidence in their ability to achieve industrial operation by an unspecified date). Its successor, named "ASTRID" and launched 12 years later, which was supposed to design and build a reactor for €5 billion, spent more than €700 million on studies alone before being put on hold, so "it worked, but everything has to be redesigned...".


Yes, hydrogen is clearly a much easier technology to make work than fast reactors. Why is this even a question? For example, fast reactors have the issue that in an accident, if fuel melts and rearranges, one can potentially have a configuration that is prompt supercritical on fast neutrons. This is functionally an atomic bomb.

Also, even in a Fallout Future where everything is nuclear powered, hydrogen is still needed! Some 6% of today's global natural gas consumption goes to making hydrogen, and a good chunk of that is for ammonia synthesis, which is necessary to feed eight billion people.


The main hang-ups for fast reactors in the US are: (1) our regulators are less sanguine about occupational safety for plutonium workers than the French and Russians (carcinogenic Pu nanoparticles — the high energy ball mill can make sand deadly, just think what it can do for Pu) and (2) fear of nuclear proliferation if the “plutonium economy” expands. There is also (3) the economics will never be attractive with a steam turbine and all the heat exchangers that entails, but a power set like

https://www.swri.org/markets/energy-environment/power-genera...

could fit in the employee break room of the turbine house of an LWR and could make it competitive. It’s a big if though.


"Functionally an atomic bomb"?

Why do you speak on topics you obviously know so little about? Where did you get this nonsense?

Fast neutron designs aren't without their challenges, but causing an atomic explosion is not on that list. Hydrogen explosions? Possible. Steam explosions? Possible.

Atomic explosions? Not even theoretically can you get enough U-235 to clump together to do that without cancelling known basic laws of physics.

To build a bomb, you need a purity of 90%+ U-235. Nuclear power plants have what? 2%? 3%? Might even go as high as 5%? Might as well expect a pack of bubble gum to spontaneously explode.


The more detailed the simulations have gotten, the less bad a meltdown looks in a fast reactor. Usually some of the molten core flows away and there is no longer a critical mass. If it goes over critical there can be some energy release, but it looks less and less severe over time and not a problem to contain.

Sodium has its problems (burns in carbon dioxide!) but the chemistry is favorable for a meltdown because the most dangerous fission products are iodine and cesium. The former reacts with the sodium to make a salt that dissolves in the sodium, the second alloys with the sodium. Either way they stay put and don’t go into the environment.


The problem is you need to ensure it's not bad in any possible configuration from an accident. This is hard to do. Will the energy release at criticality drive the material into an even more critical configuration? Such "autocatalytic" systems were considered for bomb design, but weren't chosen because of the large amounts of plutonium needed. But a fast reactor might have the plutonium of hundreds of atomic bombs.

Edward Teller famously warned about this in a nuclear industry trade publication in 1967.

The only fast reactors I'd trust would be ones with fuel dissolved in molten salt; it's hard to see how that could become concentrated in an accident that doesn't boil the salt. But such reactors have their own problems, in particular exposure of reactor structures to intense fast neutron fluxes (not as bad as in fusion reactors, but worse than LWRs.)


Increasing the heat past a certain threshold reduces the nuclear reactivity. Read up on "passive safety".

Teller may have warned about this in 1967, but nuclear technology hasn't been stagnant since 1967. Folks read his stuff and designed systems specifically to fail safe, not run away. Stop fear mongering based upon a 60-year-old supposition. Stop assuming everyone working in the nuclear industry is an idiot that hasn't thought about safety.


> Increasing the heat past a certain threshold reduces the nuclear reactivity. Read up on "passive safety".

The safety arguments for fast reactors are typically that a serious scenario will not occur, for example that fuel won't melt, not that if it does occur the results won't be bad. Do you trust that sort of argument? I don't.


Nice straw man you've constructed and burnt down.

Those are NOT the safety arguments used within the industry. For example in a molten salt reactor, the fuel is already melted! If it gets too hot, thermal expansion moves the radioactive isotopes further away from one another, reducing reactivity. If heat increases past a certain point, plugs at the bottom of the tanks will melt, allowing gravity to dump the fuel into multiple separated storage vessels sized to prevent further activity.

You do not know what you're talking about. You've read a bunch of fear mongering, and bought it. Do you really believe the entire industry of nuclear engineers and support staff are just blindly YOLOing their way through their jobs, damn the consequences?

I swear, you sound like the power production equivalent of antivaxers convinced the medical industry is trying to poison all of us.

"Passive safety" doesn't mean "stuff shouldn't go wrong."

https://en.wikipedia.org/wiki/Passive_nuclear_safety

It means "we're actively exploring everything that could go wrong and having the worst case scenarios fail to a safe state without requiring human intervention."

Those two positions couldn't be further apart.


So, how exactly do you claim conclusively that molten fuel (in a fast reactor with solid fuel elements) will not flow into a bad configuration? I don't see how one can possibly do that analysis. Teller didn't see how either.

I already said MSRs would be the one kind of fast reactor I could see the analysis work, so thank you for agreeing with me on that.

As a charitable act toward you I will ignore the rest of your comment.


> I don't see how one can possibly do that analysis.

Classic argument from ignorance fallacy. Because YOU cannot see how it could be done, no one has ever figured out how it could be done.


You didn't answer the question.

All the analyses I've seen for fast reactor safety are about avoiding fuel melting, not what happens if the fuel does melt. The variability in this latter scenario is so great that conclusive analysis ruling out disaster doesn't seem possible. Maybe you could explain the unexpected principle that enables one to do that? Or, lacking that, point me to a paper where such analysis has been performed.


That report you say has 1 hour of storage has four hours of battery in all the systems it compares.

It's a bit of a weird measure anyway, since it's just the ratio of storage to inverter, so it's the time it could run for when working flat out.

For your wider point, if anyone, anywhere was really contemplating a near full nuclear grid they'd have the exact same issues. Do you overbuild and curtail? Export? Store in batteries? The problems and solutions are incredibly similar now batteries have basically solved the daily variation for solar.

The fact that no one is even bothering to think that far ahead for nuclear is a recognition of how totally out of the race it is.


Cryptocurrency is mostly bullshit I think, but for whatever reason people keep buying it. That could be a nice endlessly-dispatchable economically rewarded (despite all reason) workload.


Yes, and people have been as clever as possible dealing with this issue. There is just no good way to solve it.


For what it's worth, when Missouri passed its voter ID law, it included funding to provide people free IDs if needed for voting.

I was working in the MO Secretary of State's office at this time, and though I wasn't directly involved with it, I did see them help a lot of people get IDs. Most of the time it was just helping people fill out forms or paying a fee for them, but there were a few complicated situations. One woman had fled a domestic violence situation in another state with just the clothes on her back. She had no identifying documents of any kind. The SoS ended up coordinating with two other states to get documents and hiring a lawyer to get a judge to reestablish her identity in court.

So in the case of Missouri at least, real resources were committed to getting IDs for anyone who asked.

> you need to plan a decade or more in advance and actively help people get the IDs.

That's probably true too.

