Not the poster you asked, but I think you might be interested in Mark Fisher's "What is Hauntology?" (10.1525/fq.2012.66.1.16). It argues that contemporary culture is incapable of coming up with genuinely new ideas because postmodernism and late capitalism constrain our imagination to the point where we can no longer imagine a wholly different system of politics and values. We're left with the upkeep of an already established system, and this is reflected in how the present crop of films and music mostly samples and rehashes what's been done in the past century.
As a personal addendum, I feel this can be (partly) attributed to the loss of the Cold War's ideological struggle, which drove the West to innovate not just in technology but in societal structures and freedoms as well. This is why it can feel as if we've arrived at "the end of history": the current system has won, so what is left to seek or prove?
Re your addendum and "the end of history": I think it would be myopic for people to conclude that the current system has won and that there's nothing left to seek or prove. The current system has brought a lot to many people but is destroying our planet, and there's plenty of room for fresh thinking. China is taking the lead in innovation [1], so perhaps there's a new ideological and existential struggle, just with the US as the underdog. Hopefully people see this as motivating rather than depressing.
Even the Chinese Character Heroes competition mentioned in the article seems a lot like the spelling bee in the US, doesn't it? I wonder if the anecdote about the PhD students has a cultural dimension in addition to language proficiency — could the students have refused not because they don't know the characters, but because they aren't fully confident they wouldn't make a mistake?
This review is focused on Linux application performance, where Arrow Lake is generally strong. Compared to its predecessor, the 14900K, it ends up both faster (barring some outliers) and significantly more power efficient. Compared to Zen 5, however, it is not as impressive: AMD's 9950X comes out ahead in both raw performance and perf-per-watt, especially in workloads that use AVX-512, like CPU inference. Still, the 285K does have some wins in tests that can take advantage of its DDR5-8000 memory support, in some single-thread benchmarks like PyBench, and notably in code compilation.
In Windows reviews [1], the 285K's performance is even worse, particularly in gaming tests, where it is slower than the 14900K and the 7800X3D (and with AMD soon launching the 9800X3D, the gap should become even bigger). Just like Zen 5 at its launch, Arrow Lake seems to suffer from scheduling issues. As the linked review notes, "When pairing Windows 24H2 with Arrow Lake, performance will be terrible—we've seen games running at 50% the FPS vs 23H2". So there's some hope for improvement with future updates, but overall the Windows scheduler looks thoroughly suboptimal for modern CPUs with complex topology.
The official specifications are more conservative than what is possible with memory overclocking profiles like Intel XMP and AMD EXPO. The Phoronix tests show the 285K at both DDR5-6400 and DDR5-8000. It is possible to go higher than the official 5600 MT/s with Zen 5 as well, but there is less headroom, with 6400 MT/s being the limit according to TPU [1].
What problems have you encountered on Fedora that were caused by a distro upgrade? Asking this because I've been using Fedora for years across different machines and can't recall any breakages that were direct consequences of an upgrade, though I usually apply it a month or two after the release date, so maybe there are early kinks that get resolved by that time.
Just to offer a different point of view, I see it as the opposite. I like a lot of things the KDE community is doing and I think it's particularly good at power user oriented apps like Krita and Kdenlive, which may be the best open-source tools in their respective areas and which don't really fit in the modern Gnome framework. As a desktop environment, however, I feel KDE is too visually chaotic to be usable. This post [1] illustrates some problems, but the lack of design cohesion permeates KDE and cannot be fixed without a long concerted effort. I imagine it's never been a priority because most users can shrug these inconsistencies off as something inconsequential, but for me (and I don't believe I'm alone in this) they're instantly noticeable and distracting eyesores.
Gnome has its own problems, but it is very visually consistent and clean, especially lately, now that most of the standard apps have moved to GTK4/libadwaita. The GP's comparison of KDE being closer in spirit to Windows and Gnome to the Mac is spot on IMO.
I have to use Windows at work, and one thing that hugely improves the experience is the "Everything" search tool [1]. It searches across all the files you have in something like a second or two, and as you type it narrows down suggestions as you'd expect, instead of randomly bringing up something completely unrelated. I even use it to launch programs I don't have pinned to the taskbar (it will find both .exes and .lnk shortcuts with readable names).
Compared to that, search in the Start menu (or Windows Explorer, for that matter) is so comically bad it makes me weep. Before I knew about Everything, I could maybe believe there was something about NTFS or Windows security or whatever that made it impossible to do fast, quality search across the filesystem in modern Windows. But no, it's clearly possible, and it's such a shame that Microsoft is incapable of doing it in its own OS.
Windows search is so bad that you can type "NOTE" and it'll be seconds, on a freaking supercomputer, before "Notepad" appears. This is insane. The list of applications on my computer is in the very low hundreds, and the number of users with literally hundreds of thousands or millions of apps is very, very low. That list can and should be in the search bar's RAM at all times, and even a linear search through it should complete orders of magnitude faster than I can perceive, to say nothing of better algorithms.
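To put a rough number on that, here's a toy sketch (the 500 app names, the "Notepad" index, and everything else in it are made up by me, nothing Windows-specific): a case-insensitive linear scan over a few hundred names finishes in microseconds on any modern machine.

    // Toy sketch: linearly scan a few hundred made-up app names for a
    // case-insensitive substring and time it. The only point is the order
    // of magnitude: microseconds, far below anything a human can perceive.
    #include <algorithm>
    #include <cctype>
    #include <chrono>
    #include <cstdio>
    #include <string>
    #include <vector>

    // Case-insensitive "does haystack contain needle".
    static bool icontains(const std::string& hay, const std::string& needle) {
        auto it = std::search(hay.begin(), hay.end(), needle.begin(), needle.end(),
                              [](char a, char b) {
                                  return std::tolower((unsigned char)a) ==
                                         std::tolower((unsigned char)b);
                              });
        return it != hay.end();
    }

    int main() {
        std::vector<std::string> apps;
        for (int i = 0; i < 500; ++i) apps.push_back("Application " + std::to_string(i));
        apps[321] = "Notepad";  // the entry we want to find

        auto t0 = std::chrono::steady_clock::now();
        int hit = -1;
        for (int i = 0; i < (int)apps.size(); ++i)
            if (icontains(apps[i], "NOTE")) { hit = i; break; }
        auto t1 = std::chrono::steady_clock::now();

        auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
        std::printf("found index %d in %lld microseconds\n", hit, (long long)us);
    }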
So, OK, sure, the alpha version couldn't do that, and the beta version couldn't do that, and by golly, launching apps quickly didn't make the release list... sure. But why hasn't this obvious optimization ever risen to the top of the feature list in the last several years?
The obvious answer is that nobody at Microsoft is empowered to care about the experience as a whole anymore, and it shows.
The slightly less obvious answer is that I bet Microsoft management has simply written off Windows now. It's not Cloud enough and too hard to make services- and subscription-based even if they put their best efforts in. I think they're going to discover that it was more foundational to their business than they realized.
Text-based workflow has one significant advantage over GUI-based design: diffability. This enables patch-based collaboration: you can easily share your diff with others, review changes line-by-line, resolve conflicts between concurrent modifications, etc.
Personally, I'm much more comfortable working on large documents in LaTeX compared to Word because I can see every change I make and easily revise/revert it. It's too easy to unintentionally hit some shortcut or button in a WYSIWYG editor that subtly changes the document and not realize it until much later, when the undo stack is useless.
Word can show you every change you made and let you diff/revert individual changes, and so can tools like Confluence (which is just a version-controlled WYSIWYG wiki).
GUI design diffing is pretty simple too: you collapse the flowchart into a DAG and then diff two versions of that DAG. It's how you troubleshoot bugs in DAG-based software. We could also just build a better way to diff GUI changes if we tried. It's not nuclear physics...
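As a rough illustration (my own sketch, not any existing tool, and the node/edge names are invented): represent each version of the flowchart as node and edge sets, and the diff falls out as set differences.

    // Minimal sketch of diffing two versions of a flowchart-as-DAG:
    // report which nodes and edges were added or removed.
    #include <iostream>
    #include <set>
    #include <string>
    #include <utility>

    using Node = std::string;
    using Edge = std::pair<Node, Node>;  // from -> to

    struct Dag {
        std::set<Node> nodes;
        std::set<Edge> edges;
    };

    // Elements of a that are not in b.
    template <typename T>
    std::set<T> set_diff(const std::set<T>& a, const std::set<T>& b) {
        std::set<T> out;
        for (const auto& x : a)
            if (!b.count(x)) out.insert(x);
        return out;
    }

    int main() {
        Dag v1{{"start", "validate", "save"},
               {{"start", "validate"}, {"validate", "save"}}};
        Dag v2{{"start", "validate", "log", "save"},
               {{"start", "validate"}, {"validate", "log"}, {"log", "save"}}};

        for (const auto& n : set_diff(v2.nodes, v1.nodes)) std::cout << "+ node " << n << "\n";
        for (const auto& n : set_diff(v1.nodes, v2.nodes)) std::cout << "- node " << n << "\n";
        for (const auto& e : set_diff(v2.edges, v1.edges))
            std::cout << "+ edge " << e.first << " -> " << e.second << "\n";
        for (const auto& e : set_diff(v1.edges, v2.edges))
            std::cout << "- edge " << e.first << " -> " << e.second << "\n";
    }

A real tool would also want to match renamed nodes and diff node properties, but the core idea is the same as a textual diff: set differences over a canonical representation.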
If people started using GUIs more, then they'd find new problems, sure, but then they'd just make solutions for them. There isn't a problem we can't solve. Except for the problem of changing a culture, like a culture of text. Culture is the hardest thing in the universe to change.
I've written quite a bit of PowerShell to automate stuff in Word for our docs (PDF generation, hyperlink creation and verification, etc.), and I second this. It is awful to work with. I'd much rather have LaTeX, Markdown, or anything else than WYSIWYG.
It's worth noting that AMD had worked with Microsoft on Zen-specific optimizations in the past, too. Ahead of the Zen 2 launch, they touted improvements of up to 15% from Windows scheduler changes for CPU topology awareness [1]. And with Zen 5 having higher cross-CCD communication latency than the previous generations [2], it probably gets an even harsher penalty from poor scheduling.
While the less-than-stellar reviews for the 9950X are already out, AMD's recent strategy of staggering desktop releases (first the regular SKUs, then the X3D ones) could still improve this generation's perception, assuming there are meaningful performance gains to be had in software for Zen 5 before the X3D release.
In many of the benchmarks linked in the OP and in my comment, particularly gaming benchmarks, the high-end SKUs show results that are significantly worse than would be expected given the raw performance of the CPU cores and the performance of lower-end or previous-generation chips on the same benchmarks. The problem appears to be the Windows 11 scheduler doing an especially poor job of deciding how to place threads across the processor's two dies. This generation has particularly high latency between the dies, so if an application does a lot of inter-thread communication and Windows spreads those threads across cores on different dies, application performance suffers significantly.
It's probably something that will be fixed soon with software updates, and Linux fares much better. But the result is that launch-day benchmarks are much worse than they "should" be.
The 9950X seems more exciting than last week's 9700X/9600X. It is comfortably ahead of the previous generation (including X3D) in code compilation and video/image processing, which I care about more than gaming performance, and it's also in a class of its own in workloads heavy on AVX-512, though they might be a bit niche.
I think the TDP on the 9700X and 9600X may have been set a bit too low (in fact, there are indications it will be raised in a future BIOS update [1]), which led to a relatively cool reception from reviewers focused on raw performance. Looking at performance-per-watt in the Phoronix tests, the 9700X and 9600X often fare better than the bigger chips with higher TDP, but for desktops I guess efficiency is just not that big of a concern.
> and it's also in a class of its own in workloads heavy on AVX-512, though they might be a bit niche.
It'll be interesting to see if it remains niche - I do a fair bit of work on graphics rendering (some games, some not), and there's quite a bit in avx512 that interests me - even ignoring the wider register width. A lot of pretty common algorithms we use can be expressed rather more easily and simply using some of those features.
Previous implementations either weren't available on consumer platforms, or had issues where they would downclock/limit ALU width for some time after an avx512 instruction was run, only returning to full speed after a significant delay - presumably once whatever power delivery issues had settled - which seriously limited the use cases where it made sense. It wasn't worth being a "small data set" user of avx512, as it would actually run slower than the equivalent avx2 code because of this. And the size of a "large enough" data set was pretty close to the point where it'd be better to schedule the task on the GPU anyway....
But AMD's implementation doesn't seem to have this problem - so this opens up the instruction set to many more use cases than previous implementations.
Or has the AVX512 ship already sailed, with Intel apparently unable to fix these issues and now hacking it into even smaller bits? I mean, arguably they should have started with that - the register width is probably the least interesting part to me, but at some point having it actually widely adopted might be more useful than a "possibly better" version that no chip actually supports.
I agree. I work in a similar field, and the value of AVX512 is clearly there - it just hasn't been worth implementing for the tiny percentage of market penetration. This is directly due to the market segmentation strategy Intel applied. AMD has raised the ante for AVX512 with two excellent implementations in a row, and for the first time ever I'm definitely considering building AVX512 targets.
Just as a small example from current code, the much more powerful AVX512 byte-granular two-register-source shuffles (vpermt2b) are very tempting for hashing/lookup table code, turning a current perf bottleneck into something that doesn't even show up in the profiler. And according to (http://www.numberworld.org/blogs/2024_8_7_zen5_avx512_teardo...) Zen5 has not one but _TWO_ of them, with quadruple the throughput of Intel's best effort.
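To make that kind of use concrete, here's a toy sketch of mine (not the parent's actual code; the lut128 helper and the table values are invented, while _mm512_permutex2var_epi8 is the real intrinsic behind vpermt2b): a 128-entry byte lookup table held entirely in two zmm registers and applied to 64 input bytes per instruction. It assumes AVX-512 VBMI (Zen 4/5, or Ice Lake and newer); compile with something like -O2 -march=znver4.

    // Toy example: 128-byte lookup table kept entirely in two zmm registers,
    // applied to 64 input bytes at once via vpermt2b.
    #include <immintrin.h>
    #include <cstdint>
    #include <cstdio>

    // For each byte of idx: bit 6 selects table_lo (0) or table_hi (1),
    // bits [5:0] select the byte within that half, i.e. a 128-entry table.
    static __m512i lut128(__m512i idx, __m512i table_lo, __m512i table_hi) {
        return _mm512_permutex2var_epi8(table_lo, idx, table_hi);  // vpermt2b
    }

    int main() {
        alignas(64) uint8_t table[128], in[64], out[64];
        for (int i = 0; i < 128; ++i) table[i] = (uint8_t)(i * 7 + 3);  // arbitrary toy table
        for (int i = 0; i < 64; ++i)  in[i]   = (uint8_t)(i * 2);       // indices < 128

        __m512i lo  = _mm512_load_si512(table);
        __m512i hi  = _mm512_load_si512(table + 64);
        __m512i idx = _mm512_load_si512(in);

        _mm512_store_si512(out, lut128(idx, lo, hi));

        for (int i = 0; i < 64; ++i)
            if (out[i] != table[in[i]]) { std::puts("mismatch"); return 1; }
        std::puts("ok");
    }

Whether something like this actually beats scalar code for short bursts is exactly the throttling question discussed below, which is why the Zen 4/5 behavior matters.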
Indeed, the difference does appear to be in how AMD does the throttling.
From the linked numberworld blog:
> Thus on Zen4 and Zen5, there is no drawback to "sprinkling" small amounts of AVX512 into otherwise scalar code. They will not throttle the way that Intel does.
This is exactly the use case I'm talking about - relatively small chunks of avx512-using code spread throughout the codebase. Larger chunks of work tend to be worth passing over to the GPU already.
A TDP of 170 watts is quite the beast. I don't think it's reasonable to claim this isn't a major concern just because it's a desktop processor. It shows up in operating costs and in everything directly or indirectly related to cooling, which means case size and noise.
If your typical workloads are covered by the Phoronix tests, take a look at their energy consumption results. In LLVM compilation, for instance, the 9950X does run at higher average power (188 W vs 140 W for the 5950X), but because it finishes the task much faster (the figures imply roughly 58500 J / 188 W ≈ 310 s per run versus 78700 J / 140 W ≈ 560 s), its total energy consumption is actually lower: 58500 joules per run vs 78700 for the 5950X. So it should end up more efficient.
Glad to see I'm not the only one missing Google Inbox! I don't believe it was widely popular (at the time I may have been the only one in my circle using it), but for me its workflow was just perfect. It gave you a clutter-free mailbox that surfaced the right things at the right time with very little manual fiddling, something I've never managed to replicate with Gmail since.
The article might be overstating things: "hate" is a strong word, and not everyone outside the US shares the political reasons. But having a Google service you like end up axed is an experience more and more people can relate to. From Reader to Inbox to Play Music to countless others, the company is oddly insistent on breaking things that just work and offering a subpar replacement (if any at all) that inevitably loses your data or requires a manual migration to fit how the new service is designed to work.