Heavy particles and gaseous emissions are not comparable in such a simplistic way. If you take a dump on the street it doesn't mean you caused 50 million times more emissions than the EPA limits for ICE car exhaust.
For example, iron from brakes is heavy but ecologically pretty harmless. OTOH NO₂ weighs almost nothing, but is toxic. You can eat 30mg of iron per day to stay healthy (just don't lick it off the asphalt directly), but a similar amount of NO₂ would be lethal.
Heavy particles don't stay in the air for long, and don't get easily absorbed into organisms. OTOH gaseous emissions and small particulates from combustion can linger in the air, and can get absorbed into the lungs and the bloodstream.
Yeah, but brakes are not made from pure iron, and you won't get atom-by-atom erosion anyway. Silly argument, really. Notoriously, you could still find brake pads with asbestos not too long ago. Pretty much any fine dust is very unhealthy to inhale, but brakes and tires are made from material mixes you really don't want to breathe in. Even the "inert" fraction turns up as microplastics in everything: the rain, fish, and newborns, and we're only beginning to understand their biological reactivity and long-term health consequences.
The brake pad and tire particles in question are not so large that they precipitate immediately. They aren't iron but rather real/synthetic rubber and other organics. There is research on them being bad for human health.
The hardware for AI is getting cheaper and more efficient, and the models are getting less wasteful too.
Just a few years ago GPT-3.5 used to be a secret sauce running on the most expensive GPU racks, and now models beating it are available with open weights and run on high-end consumer hardware. A few iterations down the line, good-enough models will run on average hardware.
When that Xcom game came out, filmmaking, 3D graphics, and machine learning required super expensive hardware out of reach of most people. Now you can find objectively better hardware literally in the trash.
Moore's law is withering away due to physical limitations. Energy prices are going up because of the end of fossil fuels and rising climate change costs. Furthermore, the global supply chain is under strain from rising geopolitical tension.
Depending on US tariffs and how the Taiwan situation plays out and many other risks, it might be that compute will get MORE expensive in the future.
While there is room for optimization on the generative AI front, we still haven't even reached the point where generative AI is actually good at programming. We have promising toys, but for real productivity we need orders of magnitude bigger models. Just look at how GPT-4.5 is barely economically viable already with its price per token.
Sure, if humanity survives long enough to widely deploy fusion energy, it might become practical and cheap again, but that will be a long and rocky road.
LLMs on GPUs have a lot of computational inefficiencies and untapped parallelism. GPUs have been designed for more diverse workloads with much smaller working sets. LLM inference is ridiculously DRAM-bound. We currently have 10×-200× too much compute available compared to the DRAM bandwidth required. Even without improvements in transistors we can get more efficient hardware for LLMs.
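A rough back-of-envelope sketch of the single-stream decode case (the numbers are my own ballpark assumptions for an H100-class GPU, not measurements from anywhere):

    #include <stdio.h>

    int main(void) {
        /* Illustrative assumptions: 70B params at 16-bit weights, and every
         * weight streamed from DRAM once per generated token. */
        double params          = 70e9;
        double bytes_per_param = 2.0;      /* fp16/bf16 */
        double dram_bw         = 3.35e12;  /* ~3.35 TB/s HBM */
        double peak_flops      = 1.0e15;   /* ~1 PFLOP/s dense fp16 */

        double bytes_per_token = params * bytes_per_param;
        double max_tok_per_s   = dram_bw / bytes_per_token;   /* bandwidth ceiling */
        double flops_per_token = 2.0 * params;                /* ~2 FLOPs per weight */
        double utilization     = flops_per_token * max_tok_per_s / peak_flops;

        printf("bandwidth-limited decode: ~%.0f tok/s\n", max_tok_per_s);
        printf("compute utilization at that rate: ~%.1f%%\n", 100.0 * utilization);
        return 0;
    }

That works out to roughly 24 tok/s while using well under 1% of the chip's arithmetic throughput; batching claws some of it back, but the mismatch is the point.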
The way we use LLMs is also primitive and inefficient. RAG is a hack, and in most LLM architectures the RAM cost grows quadratically with the context length, in a workload that is already DRAM-bound, on hardware that already doesn't have enough RAM.
> Depending on US tariffs […] end of fossil fuels […] global supply chain
It does look pretty bleak for the US.
OTOH China is rolling out more than a gigawatt of renewables a day, has the largest and fastest growing HVDC grid, a dominant position in battery and solar production, and all the supply chains. With the US going back to mercantilism and isolationism, China is going to have Taiwan too.
Costs for a given amount of intelligence, as measured by various benchmarks etc., have been falling by 4-8x per year for a couple of years, largely from smarter models coming out of better training at a given size. I think there's still a decent amount of headroom there, and as others have mentioned, dedicated inference chips are likely to be significantly cheaper than running inference on GPUs. I would expect to see Gemini 2.5 Pro levels of capability in models that cost <$1/Mtok by late next year or plausibly sooner.
I think there’s a huge amount of inefficiency all the way through the software stack due to decades of cheap energy and rapidly improving hardware. I would expect that, with hardware and energy constraints, we will need to look for deeper optimisations in software.
It could fail if the generated C code triggered Undefined Behavior.
For example, signed overflow is UB in C, but defined in Rust. Generated code can't simply use the + operator.
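In the default release configuration Rust's i32 `+` wraps, so one way to preserve that in C is to do the arithmetic in unsigned, where overflow is defined. A minimal sketch of the kind of helper a C backend could emit instead (the helper name is hypothetical, not necessarily what this backend does):

    #include <stdint.h>

    /* Unsigned overflow is defined in C (mod 2^32), so do the add in unsigned
     * and convert back.  The conversion of an out-of-range value to int32_t is
     * implementation-defined rather than UB, and wraps on mainstream compilers. */
    static int32_t rust_wrapping_add_i32(int32_t a, int32_t b) {
        return (int32_t)((uint32_t)a + (uint32_t)b);
    }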
C has type-based alias analysis (strict aliasing) that makes accesses through some type casts UB. Rust handles alias analysis through borrowing, so it's more forgiving about type casts.
Rust has an UnsafeCell wrapper type for hacks that break the safe memory model and would be UB otherwise. C doesn't have such a thing, so only those uses of UnsafeCell that are already allowed by C are safe.
I have workarounds for all "simple" cases of UB in C (this is partially what the talk is about). The test code is running with `-fsanitize=undefined`, and triggers no UB checks.
There are also escape hatches for strict aliasing in the C standard - mainly using memcpy for all memory operations.
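The classic form of that escape hatch (not from the talk, just the standard-blessed idiom):

    #include <stdint.h>
    #include <string.h>

    /* Reinterpreting a float's bytes.  *(uint32_t *)&f would violate strict
     * aliasing; memcpy is the sanctioned way, and compilers optimize it down
     * to a plain register move anyway. */
    static uint32_t float_bits(float f) {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);
        return bits;
    }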
That's not the same, and not what pornel is talking about. The x86 ADD instruction has well-defined behavior on overflow, and i32 + i32 in Rust will usually be translated to an ADD instruction, same as int + int in C. But a C compiler is allowed to assume that a signed addition operation will never overflow (the dreaded Undefined Behavior), while a Rust compiler must not make that assumption. This means that i32 + i32 must not be translated to int + int.
For example, a C compiler is allowed to optimize the expression a+1<a to false (if a is signed), but a Rust compiler isn't allowed to do this.
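Concretely, something like this (the function name is made up; whether the fold actually happens depends on compiler and flags):

    /* With optimizations on, GCC and Clang typically fold this to "return 0",
     * because signed overflow is assumed never to happen.  A backend that
     * emitted this C for Rust's `a + 1 < a` would silently change behavior
     * for a == INT_MAX. */
    int never_wraps(int a) {
        return a + 1 < a;
    }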
The interop is already great via PyO3, except when people want to build the Rust part from source, but are grumpy about having to install the Rust compiler.
This hack is a Rust compiler back-end. Backends get platform-specific instructions as an input, so non-trivial generated C code won't be portable. Users will need to either get pre-generated platform-specific source, or install the Rust compiler and this back-end to generate one themselves.
They are grumpy about having to install the Rust compiler for a good reason. You can’t compile Rust on Windows without using MSVC via the Visual Studio Build Tools, which has a restrictive license.
> When targeting the MSVC ABI, Rust additionally requires an installation of Visual Studio so rustc can use its linker and libraries.
> When targeting the GNU ABI, no additional software is strictly required for basic use. However, many library crates will not be able to compile until the full MSYS2 with MinGW has been installed.
...
> Since the MSVC ABI provides the best interoperation with other Windows software it is recommended for most purposes. The GNU toolchain is always available, even if you don’t use it by default.
The USSR once fucked up its agriculture and purged biological research, because Stalin thought survival of the fittest didn't fit his political narrative, and banned genetics for being an imperial ideology:
Internally the compiler tracks what "niches" exist in types, like minimum and maximum valid values, and takes the unused values for enum tags.
One thing it can't use is padding in structs, because references to individual fields must remain valid, and writes through them don't guarantee that padding bytes will be preserved.
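In C terms, the classic niche cases look roughly like this (an illustrative sketch of the resulting layouts, not rustc's actual algorithm; the type names are made up):

    #include <stdint.h>

    /* Option<&i32>: a Rust reference is never null, so the all-zero bit
     * pattern is a free niche that encodes None -- the enum stays
     * pointer-sized with no separate tag. */
    typedef struct { int32_t *ptr; } option_ref_i32;   /* ptr == NULL => None */

    /* Option<bool>: bool only uses the values 0 and 1, so a spare value
     * (e.g. 2; the exact choice is an implementation detail) can serve as
     * the None tag and the whole thing still fits in one byte. */
    typedef uint8_t option_bool;   /* 0 = Some(false), 1 = Some(true), 2 = None */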
> We were pretty "soft" (dislike the term) on the land borders and on immigration
That's absolutely not the impression the US gives to people outside. The visa system is soft only on one specific demographic it deems worthy (educated, wealthy, Commonwealth citizens), and treats everyone else with contempt at best.
Getting a visa requires extensive background checks, and an in-person interview in a part of a consulate built like a prison. The green card lottery keeps people on uncertain ground for decades, during which they are second-class citizens and can have their entire life uprooted at any moment.
Claiming asylum via safe modes of transport has been made technically impossible (anyone suspected of seeking asylum won't be sold any tickets and won't be let through any security), and the remaining routes are deadly dangerous and intentionally cruel to everyone desperate enough to try anyway.
Unfortunately, many countries are like that, and the USA is not better than average even at its best.
How did you manage to somehow twist the green card lottery into something so harmful sounding? It undermines your other points.
Nobody is forcing people to join the lottery. Nobody is in “limbo” against their will. It’s something that has a huge upside if you win, and little downside (apart from the application fees).
Which country doesn’t do some kind of background check or information gathering when letting you in (except for tourism, of course)?
It’s extremely onerous to get a tourist visa to the EU if you’re an Indian citizen for example. Or it used to be.
They want your bank balances to make sure you aren’t going to just get on welfare.
Markdown is semantically less expressive than man's markup. man goes into details like explicitly marking up a command's arguments.
You can convert man to Markdown easily using the full capabilities of Markdown, but Markdown lacks the semantic information to be converted back to a fully-featured man document.
If we had good man viewers with a consistent level of support, then man would be a better data source format.
Unfortunately, there's no way to know what features man viewers support, and AFAIK no good way to provide graceful fallbacks. This is most broken with tables and links, which have dedicated markup but are not rendered in common setups.
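For example, an mdoc(7) synopsis line marks up flags and arguments explicitly (roughly, from memory):

    .Nm grep
    .Op Fl c
    .Ar pattern
    .Op Ar file ...

which renders as `grep [-c] pattern [file ...]`, and a viewer or search tool still knows which token is a flag and which is an argument; a Markdown conversion would just give you styled text.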
> Markdown lacks the semantic information to be converted back to a fully-featured man document.
Core Markdown, yes. Pandoc has a fenced div extension that can capture the semantic meaning.
::: Warning :::
This is a warning.
::: Danger
This is a warning within a warning.
:::
:::
Not suggesting that Markdown is the right tool, only that it has extensions to capture semantic meaning. Here are example documents produced using Markdown:
But the various man source formats aren't semantic markup of any particular use. Yeah, you can ensure that all of your arguments are typeset as italic or something, but it's not consistent because there are different source macro packages and they do different things.
If there was useful semantic markup, it'd be great, because you could imagine, for example, tools allowing you to generate commands on the user's behalf. But the man source formats are mostly, at core, typesetting for 70's typesetting machines, not intended for modern reference documentation.
mdoc(7), which is the primary macro set used on BSD systems, is a 90s language (as opposed to a 70s language) with useful semantics that support both hyperlinking and featureful search, both on the web and in the terminal.
But the biggest value of manpages to me is somewhat independent of the underlying format: its quality as documentation. BSD systems have a strong tradition of cohesive usability, including good manpages. A lot of the manpage alternatives I see advertised in Linux circles (such as tldr pages or bro pages) are of little use to me, because OpenBSD manuals are thoughtfully written, clear, concise, complete, and have useful examples. The difference is very noticeable when I try reading manpages for programs I’ve installed from packages, which are often incredibly sparse or incredibly verbose, and lack examples in both cases.
The nifty features that come from the modern language and tooling used by BSD manpages are really just symptomatic of the overall care that BSD communities put into their documentation generally. I wish it were more widespread in the free software world.
From a European perspective, Russia's latest invasion was a wake-up call that peace isn't permanent, and that diplomacy and economic power may not be sufficient to keep it.
The effectiveness of drones and advanced missiles in the war was also a wake-up call that semiconductors and batteries have strategic military importance, and that the rest of the world is quite dependent on China and Taiwan for them.
I assume that all the talk about trade deficits and unfair competition from Chinese EVs is bullshit, and the US and the EU are having an "oh shit" moment realising they're unprepared for the world where wars are fought with drones and robots.
No, I am not suggesting this. And to my knowledge neither did Russia at any point suggest this. If Russia did, kindly share. The probability that Russia will invade Europe is propaganda propagated by western media. Request you to watch the 2 videos I have linked above.
You are, in fact, either suggesting that or are entirely unaware that Ukraine had a peaceful agreement with Russia prior. The entire point of giving Ukraine official sovereignty was to reduce international pressure on Russia; everyone knew that reneging on it would bring war to Russia's doorstep. It's written in black and white in the Budapest Memorandum.
You can blame Russia for not knowing their worth, but they did break the treaty and there is no rational justification for doing so.
But that is what you are insinuating about Ukraine: that we should not have let the Ukrainian people determine their own path, and should instead have given them the cold shoulder and left them to Russia.