the tool they used is basically a nanoscale version of a 3D printer that uses UV-cured epoxy. Essentially, a laser is focused through a high numerical aperture microscope objective to a tiny spot that cures the epoxy (there's some nonlinear action here, but it isn't that important).
The key point is that it is a serial process where a single spot is rastered over the sample, unlike conventional optical lithography, which is a parallel, area-illuminating process.
Aren't there additive manufacturing 3D printers that project a hologram into the fluid printing medium to cure a whole layer of material out of the solution?
the key difference is in the function of the reticle if we want to do this in bulk, analogous to normal lithography.
in normal 2D lithography, you're typically only concerned with exposing area on a relatively flat film. This lets you essentially 'block' the areas you don't want to expose and flood the areas you do. So you make a reticle with your pattern on it, shine the active wavelength (let's say 400 nm) through it, and you end up with exposed areas in your pattern and unexposed areas everywhere else.
in this volume lithography, you still have a relatively thin, flat film, but the pattern is no longer confined to a 2D surface; in general it is a 3D volume. So the reticle now needs to project a 3D pattern (essentially a hologram) instead of a 2D one. This is in general going to be quite difficult, and in some cases physically impossible if we're illuminating with the active wavelength.
e.g. imagine a pattern like this, where we are exposing from the top:
------------------
++++++++++++++++++
--++----++--------
where --- is unexposed and ++ is exposed. There is basically no way we can get the active light to the 2nd layer without at least partially exposing the first layer.
The way the Nanoscribe and similar systems go about this is to use nonlinear effects. If I shine very intense 800 nm light onto a medium with a special optical property called a third-order nonlinearity, I can sometimes get the medium to accept two 800 nm photons as a substitute for one 400 nm photon (energy conservation is obeyed here, since an 800 nm photon has half the energy of a 400 nm photon).
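As a quick sanity check on that energy bookkeeping (just E = hc/λ, nothing specific to Nanoscribe's system):

    # Photon energy E = h*c/wavelength; two 800 nm photons should add up to one 400 nm photon.
    h = 6.626e-34   # Planck constant, J*s
    c = 2.998e8     # speed of light, m/s

    def photon_energy_eV(wavelength_nm):
        """Energy of a single photon at the given wavelength, in electron volts."""
        return h * c / (wavelength_nm * 1e-9) / 1.602e-19

    e_400 = photon_energy_eV(400)   # ~3.1 eV
    e_800 = photon_energy_eV(800)   # ~1.55 eV
    print(e_400, 2 * e_800)         # the two printed values match: 2 x 800 nm ~= 1 x 400 nm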
The crux here is that it needs to be very intense 800 nm light, so it needs to be concentrated. It's much harder to get that kind of intensity over an entire 3D hologram than it is to focus the light into a single spot and raster it around.
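Some rough numbers to make the point (all values here are ballpark assumptions, not the spec of any real system):

    import math

    power_W = 1.0                            # assume ~1 W of average laser power as a round number

    # Focused into a roughly diffraction-limited spot ~0.5 um across:
    spot_area = math.pi * (0.25e-6) ** 2     # m^2
    spot_intensity = power_W / spot_area     # ~5e12 W/m^2

    # Spread over a modest 1 mm x 1 mm field to expose the whole volume at once:
    field_area = 1e-3 * 1e-3                 # m^2
    field_intensity = power_W / field_area   # ~1e6 W/m^2

    # Two-photon absorption scales as intensity squared, so the per-voxel
    # exposure rate drops by the square of the intensity ratio.
    ratio = spot_intensity / field_intensity
    print(f"intensity ratio ~{ratio:.1e}, two-photon rate penalty ~{ratio**2:.1e}")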
Of course I'm not saying it will never be possible, but it is a significant effort.
unlikely that there is good precedent for a private weather organization. The gold standard for Western weather prediction is the European Centre for Medium-Range Weather Forecasts, and that is a collaboration between multiple government agencies.
> unlikely that there is good precedent for a private weather organization.
When has something being a bad idea or never having been tried stopped political apparatchiks from trying, especially given past appointees' opinions:
> The document describes NOAA as a primary component "of the climate change alarm industry" and said it "should be broken up and downsized."
[…]
> Project 2025 would not outright end the National Weather Service. It says the agency "should focus on its data-gathering services," and "should fully commercialize its forecasting operations."
> It said that "commercialization of weather technologies should be prioritized to ensure that taxpayer dollars are invested in the most cost-efficient technologies for high quality research and weather data." Investing in commercial partners will increase competition, Project 2025 said.
The "lets privatize essential parts of the government" folks do not care about precedent, or what is a good idea. They are driven by ideology and self dealing.
even Cliff himself (somewhat right-leaning in today's American political environment) did not go so far in his blog. Cliff often speaks very highly of the European weather forecasts, and in his blog post it seems pretty clear that he is using ECMWF as the model for most of his suggestions.
Cliff Mass has this kind of weird position on climate change where he fully agrees that it is real and a serious concern, while simultaneously questioning the extent to which it is human-caused and disputing that specific weather events are negative effects of climate change.
I'm curious about where we draw the line on questioning scientific consensus.
Take Cliff Mass - he's a professor of Atmospheric Sciences at UW whose research focuses on weather modeling, climate systems, and atmospheric dynamics. When someone with his expertise raises questions about climate science conclusions, should we approach this differently than when non-experts do so?
This raises an interesting question about the role of expertise and scientific discourse: Is there a meaningful distinction between how we treat challenges to consensus from qualified researchers in the field versus those from outside it? Or should acceptance of consensus apply equally regardless of one's credentials?
As an imperfect analogy, Stephen Wolfram is a fairly successful theoretical physicist by academic standards with considerable expertise, yet his theories on fundamental physics and their interpretations are considered fringe in the physics community because most of what he spins is untestable conjecture.
Individuals who disagree with the scientific consensus aren't always wrong, but most of the time they are. What separates these people is the amount of evidence they can bring to the table.
He doesn't challenge scientific consensus. What he does challenge is people making any claim they want and then tacking "because climate change" onto the end, which has become a problem. One side will deny climate change, and the other side started taking any claim at face value as long as you say "because climate change".
The problem is that his well-meaning statements are going to be used as ammo to privatize NOAA, much like someone's well-meaning statements were used to create turmoil in Springfield, Missouri.
It's possible it's in a superposition of states. Like if someone stupidly misremembered which Springfield, then it would be in many states at once until observed.
In addition to looking only at the explosive growth of the past 4-5 years, it seems reasonable to look at the entire plot as well. All 3 graphs in [1] date back to 1965, and looking at that data it seems that the growth/population of corals was reasonably stable up until the late 00s.
edit: only goes back to 1985, error on my part, but still relatively stable from 1985-late 00s
It may be possible that this recent spike in growth indicates a strong recovery, but looking at the data it also seems inarguable that the ecosystem has become highly unstable over the past 20 years, with drastic increases and decreases in all 3 populations showing much greater variance than from the mid 60s to the early 00s.
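For what it's worth, a rough sketch of how you could put a number on that from the published time series (the cover values below are placeholders, not the actual AIMS data):

    import numpy as np

    # Hypothetical hard-coral cover (%) by year; substitute the real AIMS series here.
    years = np.arange(1985, 2024)
    cover_by_year = np.random.default_rng(0).normal(25, 3, size=years.size)  # placeholder data

    early = cover_by_year[years < 2005]    # 1985 - 2004
    late = cover_by_year[years >= 2005]    # 2005 onward

    # Compare the size of year-to-year swings in the two windows.
    print("early std of annual change:", np.diff(early).std())
    print("late  std of annual change:", np.diff(late).std())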
While I agree with you that it's possible that this coral death is overhyped, your own conclusions don't seem to be evidence-based themselves.
> It may be possible that this recent spike in growth indicates that there is a strong recovery, but looking at the data it also seems inarguable the ecosystem has become highly unstable as well over the past 20 years
Could you expand on how the recent explosive growth is unstable and fits with the predictions that the coral is dying?
And GP's post was evidence-based; your interpretations were not, they were subjective and a bit handwavy, which is why I would like clarification on your above view.
Explosive growth is generally speaking not stable. I will defer to the dictionary definition of stable. I'm just using words how they are defined.
From an ecosystem perspective, simple "coral coverage" is not a good enough metric to indicate that the coral reef is healthy. We also need to understand the biodiversity, is one coral taking over because it is better adapted and creating a monoculture? It's not clear. I made a point to not jump to conclusions, simply stating that explosive growth does not indicate a healthy ecosystem.
Thank you for the clarification. Some may interpret "not stable" as meaning that it could be detrimental to its life, not just the textbook velocity definition.
Could you answer the second part of my question:
> "How does explosive growth fit with the predictions that the coral is dying?"
To your statement:
> "We also need to understand the biodiversity, is one coral taking over because it is better adapted and creating a monoculture"
I'd say that we should not assume one way or another. It very well may be, or not. That data is probably out there and I'd be interested to know the answer. If the answer is yes, it's very healthy.
I'd also like to know what species these scientists are breeding. If it's only a select few species, is that healthier than the species that have adapted naturally and are growing fine?
Explosive growth does not happen in a vacuum. Sometimes it's due to a pre-existing boom and bust cycle (e.g. lynx and hares, rabbits and wolves, etc.). Sometimes it's due to the elimination of key species or ecological instability (e.g. hares in Australia with no predators, the boom of sea urchins in California, cuttlefish booming due to overfishing), or maybe it's something completely different altogether.
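As a toy illustration of that first case, a minimal predator-prey (Lotka-Volterra) sketch with made-up parameters (not fitted to anything real) produces exactly this kind of boom and bust:

    # Toy Lotka-Volterra predator-prey model; parameters are invented for illustration only.
    prey, predator = 40.0, 9.0
    a, b, c, d = 0.6, 0.025, 0.8, 0.02   # prey growth, predation, predator death, conversion
    dt = 0.01

    for step in range(5000):
        dprey = (a * prey - b * prey * predator) * dt
        dpred = (d * prey * predator - c * predator) * dt
        prey += dprey
        predator += dpred
        if step % 500 == 0:
            # populations repeatedly overshoot and crash rather than settling down
            print(f"t={step * dt:5.1f}  prey={prey:7.1f}  predator={predator:6.1f}")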
Given that there is no history of a pre-existing boom and bust cycle in the data, I think we can rule that out. So clearly something is happening that wasn't happening before. The question you want answered is too one-dimensional and ill-posed in this situation to let you better understand the dynamics at work.
The GBR is not that old, and it obviously experienced a major boom during its creation.
We really don't have enough data to rule anything out; we haven't been tracking it long enough, relatively speaking, to say anything for sure. We also don't know all the variables.
They tried the "but what about biodiversity" argument back in 2020 when the regrowth started.
It is automatically true that in an ecosystem that experiences a population decline, when the population starts growing again the subtypes that grow fastest will become more common than those which grow slower, and thus there will be a temporary decline in biodiversity. That doesn't indicate a problem of any kind. It's just the natural state of affairs.
The graphs date back to 1985. The font is small though, so it's easy to misread.
> it also seems inarguable the ecosystem has become highly unstable as well over the past 20 years
That's extremely arguable! Extrapolating from a tiny number of data points is how AIMS managed to screw up previously. There's really very little data on which to build predictions about long-term trends. We have no idea how stable or unstable coral sizes are because they just haven't been monitored for long enough to say. We're zoomed in to a tiny, tiny part of the coral's history, which is far longer than humanity's own.
this definitely isn't the case in my experience. I run Civ VI on a relatively high-end desktop (5950X + 3080 Ti), and there is a very noticeable slowdown between turns with lots of Civs/city-states on large maps.
With zero knowledge of how it works, I would also expect each tick to be some trivial calculations to determine yield per square for each city (e.g. a plains square starts at +1, times 3 workers, times a 1.2 improvement modifier) plus combat resolution. Deterministic calculations that should complete instantly.
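Something like this toy sketch is what I have in mind (every name and number below is made up; it's not how Civ VI actually does it):

    # Toy per-tile yield calculation of the kind I'm imagining; nothing here is real Civ VI logic.
    BASE_YIELD = {"plains": 1, "grassland": 2, "hills": 1}

    def tile_yield(terrain, workers, improvement_modifier=1.0):
        """Deterministic yield for one tile: base yield * workers * improvement modifier."""
        return BASE_YIELD[terrain] * workers * improvement_modifier

    def city_yield(tiles):
        """Sum the yields over every tile a city works."""
        return sum(tile_yield(t, w, m) for t, w, m in tiles)

    # e.g. a plains tile with 3 workers and a 1.2x improvement, plus a couple of other tiles
    print(city_yield([("plains", 3, 1.2), ("grassland", 2, 1.0), ("hills", 1, 1.5)]))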
I guess I'm not sure what you mean by game logic, then. Unless you're playing multiplayer Civ (which I think is pretty niche even by Civ standards), the AI logic is kinda central to the game.
But the AI isn't even particularly sophisticated. They needed to make it cheat at higher levels to remain difficult. Sounds like a severe lack of optimization to me.
Yeah, I don't really see how the game math for something like Civ couldn't be run on, like, a calculator. It's fully deterministic with a very low number of inputs/outputs (relative to many actual numeric computing problems).
sounds like the game will be perfectly playable on mid-tier hardware, and you only need better hardware for "ultra" 4K. A bit of a poor clickbait headline by Tom's, which has unfortunately become the norm :(
I'm happy that there are some higher-fidelity graphics that will be available for those who can use them.
I think this view is a bit narrow in terms of what "AI" advancements may depend on. I think it's very easy to argue that large-scale AI adoption will require orders of magnitude higher bandwidth than what we currently have. It's not clear that electronics will win long term in all applications, especially with the strong resurgence of interest in photonic computing. Fundamentally, photonic platforms have much higher potential bandwidth than electronics (currently at the cost of power and size).
GaAs (and other III-V) would likely be an essential material for some kind of photonic or hybrid compute system.
The response below addressed the quantum sensors, but I would be careful about calling "everything" quantum, such as image sensors. Sure, they rely on the photoelectric effect, which is quantum, but not really in the sense of what we would consider a 'quantum sensor' today.
I suspect what could be more relevant are III-V-based SQUID qubits. These are highly sensitive systems that multiple nations are exploring for submarine detection. Nearer term, quantum communication via quantum light sources can also leverage a III-V platform.
sure, it's totally possible that the advantages of photonics or optoelectronics could win out, and iii–v semiconductors are pretty important for optoelectronics, though not for pure photonic systems like second-harmonic generation. sometimes people even use gaas for that, especially historically
what are iii–v based squid qubits? google scholar is not helpful except for finding https://journals.aps.org/prresearch/pdf/10.1103/PhysRevResea.... i thought a squid was a josephson junction device made out of superconductors and insulators, not semiconductors. gaas isn't a superconductor, is it?
this doesn't sound like a quantum communication and squid research lab though. it sounds like a 50-year-old radar chip fab that's being put on life support as a pork barrel project
brain fart on my end; you're definitely correct that SQUIDs are not something demonstrated quite yet. I should have said Josephson junction, but even that seems more niche than I had thought when I wrote the comment.
your comment oscillates between incorrect and incoherent. squids have been demonstrated for decades (i didn't assert they hadn't been) and are made of josephson junctions, whose nicheness is not at issue in this discussion. i hope you get better because you clearly were not well when you wrote this
Seems like it was previously owned by Coherent, with some kind of III-V (specifically GaAs was mentioned) photonics process there in the past. This kind of technology is typically quite useful for lasers, LEDs, or potentially image sensors as well. Many LIDAR sensors and even light sources notably depend on III-V semiconductors. It's also widely used in the telecom industry.
Outside photonics, it's definitely useful for high-speed electronics, but that would probably take more process development to get going.
That being said, most of Ragusea's takes haven't aged all that well, some by his own admission.