I'm not part of the Java world at all, but I'm starting to think that Azul Systems is one of the few groups of people who know what they're doing with regard to Linux performance and the user/kernel boundary. I recently watched some talks by Cliff Click[1] that were extremely informative, and from what I understand about their proposed kernel patches, they seem like important improvements.
This report by Gil Tene (their CTO according to Wikipedia) lends more support to that theory.
I'm skeptical of the way the author framed the discussion around his patches. I believe the majority of people out there are genuinely not evil, and people don't often reject contributions that have clear benefits without reason; I find it's often best to get said reason straight from the horse's mouth.
Does anyone have a link to that discussion? I get the sense there is more to that discussion than the author let on in his brief response.
I got that sense too. The message sharing the link to the source describes it as "incomplete and extremely buggy". Was it that way when it was first proposed to the kernel group for integration? If so, that seems like a reasonable reason to reject it...
I googled for "kernel mailing list managed runtime initiative" and found this: https://lwn.net/Articles/392307/ Apparently it was never even proposed on the kernel mailing list! But then, apparently the code as written was never meant to be integrated into the kernel upstream; it was more a proof of concept that might be a better starting point than starting fresh for eventual integration... There are lots of interesting details, and as I read through the comments things are coming back to me; now I remember reading about this in 2010/2011...
They absolutely do! If you purchase their Zing JVM, it is generally because of the very low GC pauses it brings. And once those are gone, you quickly start to notice what else is going on in your system that causes unwanted pauses in your processes.
In order to effectively sell the JVM, they need to help you understand and reduce the non-GC pauses so you can realize the full value of your investment.
I've been trying to track down random latency (> 1000ms) in the network stack between the socket buffer and the client for about 3 months now... All of the stack traces showed the app was stuck in futex_wait, but since it looked identical to an idle server, I'd convinced myself epoll_wait was at fault... All of a sudden I'm wondering otherwise. We're not on Haswell though; it's not clear to me if the bug would affect other processors or not - can it?
Mostly, I'm confused as to how this has only bitten people on Haswell - did pre-Haswell just enforce a memory barrier invisibly there for some reason, or did Haswell explicitly change some semantics?
Also, an interesting note is that the commit references this deadlocking on ARM64, so I'm guessing this probably broke on non-x86 architectures in strange ways unless I'm really missing something...
You have to remember that the absence of a barrier does not automatically cause concurrency failures in a fail-fast manner. The barriers just provide guarantees. Your code might end up working (either always, or in 99.999999% of all observed operations) just by chance. So it might simply be more visible on Haswell than on other systems because it behaves differently or is more aggressive about exploiting non-barriered operations.
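To make the "works by chance" point concrete, here is a toy C11 sketch of the lost-wakeup pattern (my own illustration, not the kernel code, and not a reproducer for this bug): a waiter publishes that it is queued and then checks the value, while a waker updates the value and then checks for queued waiters. With both fences in place, at least one side must see the other's store; drop either fence and both checks can read stale data, nobody issues a wake, and the waiter blocks forever, which is exactly the "stuck in futex_wait" symptom.

    /* Toy lost-wakeup sketch (illustrative only, not kernel code). */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int ready   = 0;   /* plays the role of the futex word  */
    static atomic_int waiting = 0;   /* "a waiter has queued itself"      */

    static void *waiter(void *arg)
    {
        (void)arg;
        atomic_store_explicit(&waiting, 1, memory_order_relaxed);
        atomic_thread_fence(memory_order_seq_cst);   /* the barrier the bug lost */
        if (!atomic_load_explicit(&ready, memory_order_relaxed))
            puts("waiter: would call futex(FUTEX_WAIT) and rely on a wakeup");
        return NULL;
    }

    static void *waker(void *arg)
    {
        (void)arg;
        atomic_store_explicit(&ready, 1, memory_order_relaxed);
        atomic_thread_fence(memory_order_seq_cst);   /* pairs with the fence above */
        if (atomic_load_explicit(&waiting, memory_order_relaxed))
            puts("waker: would call futex(FUTEX_WAKE)");
        else
            puts("waker: saw no waiter, skips the wake");
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, waiter, NULL);
        pthread_create(&b, NULL, waker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }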
From the mailing list:
> In our case it's reproducing on 10 core haswells only which are different than 8 cores (dual vs single ring bus and more cache coherency options). It's probably a probability matter. [...]
> Pinning the JVM to a single cpu reduces the probability of occurrence drastically (from a few times a day to weeks) so I'm guessing latency distributions may have an effect.
I'm wondering the same thing. The one thing I know that changed in Haswell is that the transactional memory instructions were found to be broken, but I assume those aren't the issue here...
> For some reason, people seem to not have noticed this or raised the alarm. We certainly haven't seen much "INSTALL PATCHES NOW" fear mongering. And we really need it, so I'm hoping this posting will start a panic.
Should we also alert the President? Maybe OP was only talking about the mailing list he posted on and we're missing context here on HN? The only affected systems in production seem to be RHEL 6.6 on Haswell.
Ubuntu 14.04/Debian 8: have had the fix for a long time [0] [1]
Ubuntu 12.04/Debian 7: were never affected [3] [2]. Newer enablement-stack kernels for Ubuntu have the same fix as [1].
RHEL 7: OP only talks about 6.6, so I assume it either doesn't have the regression backported or already has the fix.
Thanks, I've corrected the links; please check them again to confirm they are pre-regression versions [1]
> Not true, it is architecture independent (commit also mentions Android)
I saw it (and the arm64 comment in this thread) but didn't include them because I don't think it would be a production-serious issue there. Thanks for the clarification.
> Please check your facts next time.
Thanks to your post I've corrected the links, but there's no need for such an aggressive tone, huh? Maybe try to be more polite in your next refutation?
The fix commit and OP claim the bug was introduced in [1], which has never been backported to 3.2.x, so I assume the lack of a default case was not a problem before that commit.
> And your links are using atomic_inc().. What that means with regards to this bug? I don't know.
It means they are simply old versions and were never affected by the bug, if we believe the fix commit's message and OP.
Or maybe just not recognized... But you are right; I suppose atomic_inc() doesn't need an explicit memory barrier because it's atomic (on x86 it implies a full barrier), while the new function may need one. Just guessing.
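For anyone following along, the fixed function, as best I recall the upstream commit (paraphrased from memory, so check the real source before quoting me), looks roughly like this: the shared-futex cases take reference counts whose atomics imply a full barrier on x86, while the private-futex path used to fall out of the switch with no barrier at all.

    /* Roughly the shape of get_futex_key_refs() after the fix
     * (paraphrased from memory, not a verbatim copy of kernel source). */
    static void get_futex_key_refs(union futex_key *key)
    {
        if (!key->both.ptr)
            return;

        switch (key->both.offset & (FUT_OFF_INODE | FUT_OFF_MMSHARED)) {
        case FUT_OFF_INODE:
            ihold(key->shared.inode);    /* implies MB (B) */
            break;
        case FUT_OFF_MMSHARED:
            futex_get_mm(key);           /* implies MB (B) */
            break;
        default:
            /* Private futexes previously fell out here with no barrier,
             * which is the lost-wakeup bug; the fix adds one explicitly. */
            smp_mb();
        }
    }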
Where are you seeing this backported to 5.11 or 2.6.18-404? I'm pretty sure the changeset that introduced the problem has not been backported to RHEL5 kernels.
I was only looking for the missing switch case in that function (which is what was fixed by Linus). My conclusion that this missing case was showing the bug was probably not right.
My eyes jumped straight to "%$^!" and I wondered what a shell expansion had to do with a futex_wait bug. Then I briefly tried to parse it and only then read the sentence. I wondered how many others this happened to.
It would be nice to know which distros are affected by this bug. In particular, I had unexplained JVM lockups on RHEL 5.11 this week after it was upgraded.
Seeing that 6.6 and 5.11 were both released in 2014 with a 2.6.x kernel, I can imagine this bug also applying to RHEL 5...
Also, does anyone know if there is some RHEL errata about this bug?
edit: I just looked at the Red Hat applied patches for the RHEL 5.11 kernel (linux 2.6.18-398), and this bug was also introduced in the RHEL 5.11 series (not sure if a subsequent kernel version fixes it)
Can anyone recommend a way to check whether a Linux server is running on a Haswell CPU?
I'm guessing that checking /proc/cpuinfo for a Xeon v3 model name, or looking for the 'hle|rtm|tsx' flags, would work - but something more definitive would help with mass-auditing.
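For whatever it's worth, here is a rough sketch of the /proc/cpuinfo check I would script for mass-auditing on Intel boxes. The model numbers are my best understanding of the Haswell family (60 desktop, 63 Haswell-E/EP a.k.a. Xeon E5 v3, 69/70 mobile variants); please verify the list against Intel's documentation before trusting it.

    /* Rough Haswell check by parsing /proc/cpuinfo (Intel family 6,
     * models 60/63/69/70 as far as I know -- verify before relying on it). */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/cpuinfo", "r");
        if (!f) { perror("/proc/cpuinfo"); return 2; }

        char line[256];
        int family = -1, model = -1, v;

        while (fgets(line, sizeof(line), f)) {
            if (sscanf(line, "cpu family : %d", &v) == 1)
                family = v;
            else if (sscanf(line, "model : %d", &v) == 1)
                model = v;
            if (family >= 0 && model >= 0)
                break;               /* first processor block is enough */
        }
        fclose(f);

        int haswell = family == 6 &&
                      (model == 60 || model == 63 || model == 69 || model == 70);
        printf("cpu family %d, model %d -> %s\n", family, model,
               haswell ? "looks like Haswell" : "not Haswell by this list");
        return haswell ? 0 : 1;
    }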
Based on the Linux kernel range of 3.14 to 3.18 inclusive, and this[1] list of Ubuntu kernel versions, I believe only 14.10 (Utopic Unicorn) would even be affected.
You can find this bug backported to many 2.6.x kernels, so don't rely on the 3.14 version number... See my other comment regarding Ubuntu and RHEL: https://news.ycombinator.com/item?id=9544272
But I didn't check if there are updated kernels for those versions in Ubuntu... At least for RHEL 5.11, it looks to me like the -404 kernel is the latest...
This is why programmability is important, and why achieving the performance of relaxed memory models while presenting a more intuitive SC (sequentially consistent) memory model should be a top objective for Intel and architecture researchers.
This is a race triggered by a particular reordering of memory accesses as seen by different cores. It's the kind of thing that doesn't necessarily show up in a unit test anyway.
Exactly. Bugs of this kind require stress testing to reveal. You might need to run for minutes, hours, or days to reproduce them. And this bug might never come up on the hardware you happen to be running.
Code dealing with memory barriers in SMP systems is non-trivial to write, review, and test. Everything is hardware-specific, timing-dependent, and non-deterministic. Simple unit tests are useless for this kind of task; it needs stress testing on different hardware and a variety of workloads.
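As a concrete (toy) example of what that stress testing can look like, and definitely not an official kernel test: hammer a set of mutex/condvar ping-pong pairs, which bottom out in futex_wait/futex_wake, and run a watchdog that flags any pair whose progress counter stops moving. It can only ever demonstrate the presence of a lost wakeup, never its absence, and reproducing this particular bug would additionally need an affected kernel, the right hardware, and a lot of patience.

    /* Toy stress harness: ping-pong pairs on a mutex/condvar, plus a
     * watchdog that reports stalls. A lost futex wakeup in the kernel
     * would show up here as a pair whose counter stops advancing. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    #define PAIRS 8

    struct pair {
        pthread_mutex_t lock;
        pthread_cond_t  cond;
        int             turn;        /* 0 or 1: whose turn it is */
        atomic_long     progress;    /* bumped on every handoff  */
    };

    static struct pair pairs[PAIRS];

    static void *worker(void *arg)
    {
        long id = (long)arg;         /* pair index * 2 + side */
        struct pair *p = &pairs[id / 2];
        int me = id & 1;

        for (;;) {
            pthread_mutex_lock(&p->lock);
            while (p->turn != me)
                pthread_cond_wait(&p->cond, &p->lock);
            p->turn = !me;
            atomic_fetch_add(&p->progress, 1);
            pthread_cond_signal(&p->cond);
            pthread_mutex_unlock(&p->lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        for (long i = 0; i < PAIRS; i++) {
            pthread_mutex_init(&pairs[i].lock, NULL);
            pthread_cond_init(&pairs[i].cond, NULL);
            pthread_create(&tid, NULL, worker, (void *)(i * 2));
            pthread_create(&tid, NULL, worker, (void *)(i * 2 + 1));
        }

        long last[PAIRS] = {0};
        for (;;) {                   /* watchdog */
            sleep(5);
            for (int i = 0; i < PAIRS; i++) {
                long now = atomic_load(&pairs[i].progress);
                if (now == last[i])
                    printf("pair %d made no progress in 5s (possible lost wakeup)\n", i);
                last[i] = now;
            }
        }
    }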
Reminds me of a passage from Tracy Kidder's "The Soul of a New Machine" [1]. The guys at Data General were implementing one of the first pipelined 32-bit processors for a minicomputer in the late 70s. In the book Kidder talks about how they had a gate-level simulator implemented in software that allowed them to troubleshoot timing issues. Makes me wonder if a similar simulator could be useful to test and/or debug these types of issues.
Great book, highly recommend it. It won the Pulitzer.
In case you're not trolling: not really, not officially.
Unpopular features on less common architectures are frequently broken for large stretches of time, and go unnoticed until someone complains. Open source really exemplifies the squeaky wheel getting the grease, which is kind of sad.
Shops where Linux is popular undoubtedly have their own internal private test suites, especially for features less commonly exercised on bleeding-edge kernels (e.g. S390 arch support or InfiniBand).
It would be hard to get any sort of good coverage with unit tests, too, but that shouldn't be a reason to avoid trying.
> It would be hard to get any sort of good coverage with unit tests, too, but that shouldn't be a reason to avoid trying.
Could a large but spotty unit test suite inspire false confidence that leads to being less careful about signing off on changes, and thus decrease overall quality?
It is difficult to design meaningful unit tests for preemptively multitasking, protected-memory operating system kernels. I don't know that it's actually impossible, but it is difficult.
While there are many advantages to unit testing, kernels are typically tested from userspace.
Some tests are laughably simple. I panicked the OS X kernel a while back with a shell script that repeatedly loaded and unloaded my kernel extension. Only a minute or two was required for the panic.
Apple fixed the panic but never told me how they screwed up.
EDIT: Of significant concern is how the kernel deals with the electrical circuitry. While the kernel is implemented in software, the reason we even have kernels is so that end-user code doesn't have to understand much about physics.
AMCC - since acquired by LSI - sold some high-end RAID Host Bus Adapters. We had quite a significant problem with motherboard support. We had to test our cards on a whole bunch of different motherboards as well as PCI expansion chassis.
One might protest that "PCI is a standard!" but what we have is what we can buy at Microcenter. :-/
While not all of the kernel is concerned with physical hardware, much of it is. It's not really possible to write unit tests for the parts that have to deal with edge cases in electrical circuitry.
Looks to be more than just Haswell. I was wondering this too, and just noticed this comment on the (fix) patch: "the problem (user space deadlocks) can be seen with Android bionic's mutex implementation on an arm64 multi-cluster system."
The problem is not that a default case wasn't enforced, but that it wasn't in the coding standards. I always have a default case, but only because it's in my coding standards.
The commit message says the code was reviewed quite thoroughly, so I think a break-only default case, where the other two cases had an /* implied MB */ comment, would likely have been noticed. In this particular case, there is a fair chance that such a warning would have helped.
[1] https://www.youtube.com/watch?v=uL2D3qzHtqY