Always amuses me that it's [current year] and people are still thinking about turning off checks, even though they're pretty much free on modern CPUs (modern meaning since the 1993 Pentium, which got something like 80% accuracy with its primitive branch prediction?)...
"Around Easter 1961, a course on ALGOL 60 was offered … After the ALGOL course in Brighton, Roger Cook was driving me and my colleagues back to London when he suddenly asked, "Instead of designing a new language, why don't we just implement ALGOL60?" We all instantly agreed--in retrospect, a very lucky decision for me. But we knew we did not have the skill or experience at that time to implement the whole language, so I was commissioned to design a modest subset. In that design I adopted certain basic principles which I believe to be as valid today as they were then.
"(1) The first principle was security: The principle that every syntactically incorrect program should be rejected by the compiler and that every syntactically correct program should give a result or an error message that was predictable and comprehensible in terms of the source language program itself. Thus no core dumps should ever be necessary. It was logically impossible for any source language program to cause the computer to run wild, either at compile time or at run time. A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to -- they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980, language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."
> I note with fear and horror that even in 1980, language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law.
Here we are, 42 years later, and bounds checks are still not the default in some languages. Because performance, or something. And our computers are literally 1000x as fast as they were in 1980. So instead of paying 2% for bounds checks and still being a mere 980x faster, we get 2-3x more CVEs, costing the economy billions upon billions of dollars a year.
Removing bounds checks is a stark example of a premature optimization.
You can remove bounds checks when you can prove that the index won't ever get out of bounds; this is possible in many cases, such as iteration with known bounds.
Isn't this a job for the compiler? The default would be to have boundary checking, but if the compiler can prove that the index is always in range it can drop the boundary check. From the user's perspective, the boundary check is always there. Most vector operations would have provable boundaries.
Edit:
Based on the benchmark code linked in dahfizz's comment, it seems that Rust does the above for vectors, but there are situations where the bounds can't be proved, so the boundary check can't be removed. How common is this case in practice?
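One common case where it can't be proved (a toy sketch, not from the linked benchmark): the indices come from data rather than from the loop structure, so every access keeps its check.

fn gather(values: &[f64], indices: &[usize]) -> f64 {
    // The compiler has no way to know what's inside `indices`,
    // so each values[i] access keeps its bounds check.
    let mut sum = 0.0;
    for &i in indices {
        sum += values[i];
    }
    sum
}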
To omit the check, the compiler would need to know that the loop range matches or subtends the array bound. That is commonly easy for built-in arrays, but rarely so for user-defined types. Most types are user-defined.
We trust the library author to get it right, despite (in Rust) wrapping accesses in "unsafe" or (in C++) not. Compilers are not particularly better at everything than library authors.
> would need to know that the loop range matches or subtends the array bound
Some compilers have pretty sophisticated analyses aimed at just that: determining affine relations to statically bound indexed accesses. Failing that, some compilers will resort to loop versioning, generating two versions of the loop, partitioning the iteration space into a definitely-in-bounds range and a possibly-out-of-bounds range, and then selecting which portions of which loop to execute by prefixing both with dynamic checks. Couple all of that with peeling and unrolling, and bounds checks start disappearing and getting amortized away.
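Hand-written, the versioning idea looks roughly like this (a sketch, not what any particular compiler actually emits):

fn sum_first(v: &[u64], n: usize) -> u64 {
    let mut total = 0;
    let in_bounds = n.min(v.len());
    // Definitely-in-bounds portion: the per-access checks can go away.
    for i in 0..in_bounds {
        total += v[i];
    }
    // Possibly-out-of-bounds remainder: checks stay, and the first access
    // here panics, preserving the original semantics.
    for i in in_bounds..n {
        total += v[i];
    }
    total
}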
Unless libraries are receiving a copy of the meta representation of the program and running integer equality relations over the dataflow chains, then no, not really.
The library author has certain knowledge of what the library is meant to achieve, where the compiler is obliged to guess according to whatever tea leaves it can find to descry.
In particular, the library author knows that the container won't be changing size over the duration of the loop, something the compiler would have difficulty proving.
What's special about compilers, then? Compilers are code, and therefore, as you say, buggy.
Library authors know things about what their code is meant to be doing that compilers cannot deduce, so cannot act on. But the library author can. A library, according to how heavily it is used, benefits from more thorough testing than generic application code gets.
You're right that compilers tend to have bugs, but in practice, compiler bugs are rarely the cause of software issues. The same cannot be said of libraries. Major SSL/TLS libraries for instance tend to be written in C, and all of them have had memory-safety bugs.
> Library authors know things about what their code is meant to be doing that compilers cannot deduce, so cannot act on. But the library author can.
I don't see your point here.
> A library, according to how heavily it is used, benefits from more thorough testing than generic application code gets
This doesn't generalise. There's plenty of very widely used application-specific code, and there's plenty of little used library code. Also, widespread use does not imply a high level of scrutiny, even if we're talking only about Free and Open Source software.
Anyway, that's all a sidetrack. The benefits of memory-safe languages aren't up for debate, even for well-scrutinised codebases. We continue to suffer a stream of serious security vulnerabilities arising from memory-safety issues in code written in unsafe languages. The go-to example is Chromium, where 70% of serious security issues are due to memory safety. [0]
That is how the trope goes. But looking at the actual faults, we see them in bad old C-like code, so not interesting in the present context. Modern C++ code, for example, will always have exactly zero use-after-free bugs. OpenSSL is old C code, so of even less relevance here.
But the topic was not CVEs. It was optimization. An optimization opportunity that would be missed by a compiler can be explicitly spelled out for it by a library author. Whether the library author is obliged by the compiler to write "unsafe" next to it has exactly zero effect on the correctness of the code written in that place: you can easily write incorrect code there. If you don't, it was not because the compiler provided any help.
> Modern C++ code, for example, will always have exactly zero use-after-free bugs.
Not so. C++ is not a safe language, and never will be. Even if you avoid raw pointers you aren't safe from use-after-free bugs as the standard library makes them possible even then, with std::string_view (and perhaps other functionality). [0][1][2]
There is no safe subset of C++. People have tried, e.g. the MISRA folks, but they're unable to find a subset of C++ which is both safe and usable. The only way to guarantee the absence of undefined behaviour in a C++ codebase (or a C codebase) is to use formal analysis tools, which are a tremendous burden.
If it were possible to get decent safety guarantees out of C++, Mozilla wouldn't have bothered inventing a whole new language in the hope of improving Firefox.
I do agree though that modern C++ code is likely to have fewer memory-safety issues than 'C-style' C++ code.
> OpenSSL is old C code, so of even less relevance here.
It isn't irrelevant; our conversation wasn't specifically about the C++ language. I was responding to your suggestion that using well-known libraries written in unsafe languages is a reliable way to avoid memory-safety issues. We know this isn't the case.
> An optimization opportunity that would be missed by a compiler can be explicitly spelled out for it by a library author.
Sure, but this whole thread is discussing that bounds checks are in practice generally inexpensive on modern hardware, except in cases like SIMD optimisations being precluded by the need for checks. I suspect this extends to other runtime safety checks too, but I don't have hard numbers to hand.
> An optimization opportunity that would be missed by a compiler can be explicitly spelled out for it by a library author.
Sure, that's an advantage of low-level languages. It doesn't negate the importance of memory-safety though.
Runtime checks are unlikely ever to have zero performance cost, sure, but the cost can be close to zero, and the fallout of removing checks from buggy code can be considerable.
> Whether the library author is obliged by the compiler to write "unsafe" next to it has exactly zero effect on the correctness of the code written in that place: you can easily write incorrect code there.
If it were a simple boolean matter of correct vs incorrect, then sure, but it often isn't. In practice, it can mean the difference between an exception being thrown, and undefined behaviour running riot, possibly leading to serious security issues.
> If you don't, it was not because the compiler provided any help.
Runtime checks are very helpful during development.
You keep trying to change the subject. But I have not promoted "subsetting" as a means to safety, and safety is anyway not interesting to real people. People want their programs to be useful. To be useful, a program must be correct, and every correct program is implicitly safe.
But the actual topic was not that. The actual topic you have tried to steer away from is optimization. The point I made was that the author of a library can take up responsibilities that some people insist only the language, via the compiler, can perform. The library author can perform optimizations the compiler fails to, and the library author can define interfaces that can only be used correctly, and safely. To the programmer using a library, it makes no difference, except that they may be unable to use some new, immature language, but can easily pick up and use a good library.
The norms there, from what I gather, are that you compile with runtime checks enabled unless you've used the SPARK prover tools to verify the absence of runtime errors, in which case you can safely disable runtime checks in your builds.
They’re nowhere near free. Branch prediction table has finite entries, instruction cache has finite size, autovectorizing is broken by bounds checks, inlining (the most important optimization) doesn’t trigger if functions are too big because of the added bounds checking code, etc. This is just not great benchmarking — no effort to control for noise.
For real programs, you should demand that the compiler hoist such checks out of the loop, which may then be vectorized the usual way.
If the compiler can't do that by itself, a library should do it.
The real issue is whether the information about the true size of the memory region involved is available at the point where it is needed. This may come down to how good the language is at capturing desired semantics in a library. Rust still has a long way to go to catch up with C++ on this axis, and C++ is not waiting around.
Rust claims responsibility for enforcing safety in the compiler, with libraries using "unsafe" to delegate some of that to themselves. Users then trust the compiler and libraries to get it right. In C++, the compiler provides base semantics while libraries take up the whole responsibility for safety. Users can trust libraries similarly as in Rust, to similar effect.
Modern C++ code typically does no visible operations with pointers at all, and most often does not index directly in arrays, preferring range notation, as in Rust, achieving correctness by construction. A correct program is implicitly a safe program.
> This may come down to how good the language is at capturing desired semantics in a library. Rust still has a long way to go to catch up with C++ on this axis, and C++ is not waiting around.
What catch up does Rust need to do?
Rust has slices, which know the size of their data, built into the language, while C++ doesn't. And Rust has stricter const and mutability rules that facilitate optimizations.
As for the implementation, Rust uses LLVM, which is also the backend used by one of the popular C++ compilers.
I am talking about language features that library authors can use to capture and express semantics in their libraries... but only if the language implements those features. C++ just has a lot more of them.
Like what, for example? To the contrary, I think that, other than constness, C++ has rather few facilities to communicate semantic invariants to the compiler.
If the thread you are on doesn't modify the variable (e.g. by const_cast), and that variable isn't atomic or volatile, the compiler should be allowed to treat it as invariant. Whether it does in practice probably depends on a lot of things though.
Yeah, but the compiler needs to see through all the functions to find out that there is no const_cast, which makes const useless for this purpose: it could just as well see that the variable is not modified.
Also it needs to be good at alias analysis.
Sometimes a __builtin_assume(c) may help (which is not the same as a normal assertion, which won't). Other times, you need to make a private copy of a value that the compiler could not otherwise assume will not be clobbered.
Unfortunately, I only see Modern C++ in C++ conference talks and in my hobby projects.
Most of the stuff I see at work, is quite far from this ideal reality, starting with Android's codebase, or the various ways C++ gets used in Microsoft frameworks.
At the risk of moving the goalposts: so what? The vast majority of applications running out there would not be impacted meaningfully in the least by taking that performance hit.
Bounds checking should be the default, and then only when someone has proved through benchmarking and profiling that it's actually a problem for their application, should they even consider turning it off.
Bounds checks are the easiest type of code to branch predict. You just assume they never trigger, suddenly you have a 99.99% hit rate on them. When they trigger you don't care about the branch misprediction at all because the program is already busted and security is more important.
If your conclusion is “no signal, just noise” boost the input until the signal becomes apparent. If that means writing such a massive loop that the program takes an hour to run, fine.
I once removed bounds checking for signed-integer-based values in a compiler and, before I knew it, had a nice 3.8% performance gain in a large, diverse benchmarking suite. While not expensive, bounds checks are certainly not free.
And that's the thing, really. Sure, they're not free, but they're pretty cheap. In a project like OpenSSL, if C had bounds checking, it should be enabled, all the time. Sure, disable it for some random bit of custom high-perf software where you really need to eke out the last few percent of performance. But for 99% of everything else, leave the bounds checking turned on.
> Sure, disable it for some random bit of custom high-perf software where you really need to eke out the last few percent of performance.
And that's the rub, isn't it? Getting that last bit of perf in the inner hot loop has traditionally often required people to write the entire thing in languages that are unsafe (ffi overhead often being enough that you can't just wrap up the hot loop and call the bit that needs to be performant from elsewhere).
I don't know, because if you're writing programs for things like IoT or embedded, then you're dealing with nothing but wimpy little low power processors where removing a bounds check gives you huge increases. Then, it makes no sense to have checks by default if you're going to get rid of them anyway.
I'll just point out that a wimpy low power processor that costs less than a buck has as much processing power as a 486. And the amount of processing tends to be on small amounts of data in short bursts.
Big problem as I see it is compiler writers using C++ with its completely broken language and compilation model. Of course they think speed is the only important metric, because C++ compilers compiling C++ compilers is very, very slow. And they're dealing with trusted data 100% of the time, not like the guys desperately trying to patch a zero-day vulnerability at 2am.
It isn't, because those microcontrollers are not that smart about branch prediction and instruction ordering, so the penalty is often bigger.
Same with reordering - might not matter that much on a big modern CPU but may matter way more on a device with a micro cache and very slow division for example.
If you want to experience what using a "safe" language looks like, complete with bounds checking amongst other things, there's the JavaScript ecosystem.
...which has evolved its own, much worse, horrors instead.
Occasionally small changes like this will result in bigger-than-expected performance improvements. An example of this happened once with C#, when two very tiny changes, each of which was barely measurable on its own, combined to make a big difference.
IIRC it was in the List.Add method, a very commonly used function in the C# core libs. First, one programmer refactored it to very slightly reduce how many instructions were emitted when compiled. Then a second programmer, working on JIT compiler optimizations, made a change that also affected this Add method, making it a little smaller as well.
Alone, each change was hard to even measure, but seemed like they should be a net win at least in theory. Combined, the two changes made the Add method small enough to be an in-lining candidate! Which meant in real programs sometimes very measurable performance improvements result.
As others in this post have noted, a removed bounds check might also unblock vectorization optimizations in a few cases. One might be able to construct a test case where removing the check speeds things up by a factor of 16!
One thing that I assume reduces this problem even further is the prevalent use of iterators in Rust. I almost never index an array or vector directly, which means it's impossible for me to use an out of bounds index, and I'd be really surprised if rustc and/or LLVM don't somehow take advantage of that fact (maybe just through unchecked indexing in the standard library's iterator functions)
I think LLVM is smart enough to optimize regular `for i in 0..stuff.len()` loops into the same assembly as `for i in &stuff` loops in almost all cases. I imagine this sort of "we can tell that i is always less than len" optimization is a big contributor to the low cost of bounds checks. In some large portion of cases where they're not needed, the optimizer can already see that.
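I.e. something like these two (a toy sketch; with optimizations on they usually end up as the same check-free loop):

fn sum_indexed(stuff: &[u32]) -> u32 {
    let mut total = 0;
    // The bound i < stuff.len() is visible from the range itself, so the
    // optimizer can drop the per-access check inside the loop.
    for i in 0..stuff.len() {
        total += stuff[i];
    }
    total
}

fn sum_iter(stuff: &[u32]) -> u32 {
    let mut total = 0;
    // The iterator never produces an out-of-range position to begin with.
    for x in stuff {
        total += *x;
    }
    total
}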
IME loops with induction variables (integer indexes) often produce better codegen than loops with iterators. Compare these two Rust functions for inverting bits: https://rust.godbolt.org/z/cE4vPdbdY
This got improved in Rust 1.65 just this month, but the point stands.
Might be a problem of the number of pass repetitions, where O2 does not rerun the vectorisation after whatever manages to unroll everything but O3 does.
Yes, for example the relatively recent (it didn't exist two years ago) implementation of IntoIterator for arrays themselves gives you an iterator which doesn't use bounds checks, since it yields each element of the array exactly once and knows exactly how many there are.
It will still implement that with a loop over the array where there is a (bounds) check for the integer when it's being incremented (the i < len in int i; for(i = 0; i < len; i++)). That's no different from a loop over a slice, both of which eliminate the bounds check in the body of the loop (the one in list[i]). Arrays do have an advantage however, compilers can see their size so if you have an array (or array reference) and index it with a constant, then that bounds check will be eliminated. Also, array references are cheaper than slices because slices always contain the length.
> Arrays do have an advantage however, compilers can see their size
Wouldn’t vectors have the same advantage in Rust? If we’re iterating over a vector, it’s provable at compile time that the length is not being modified during the iteration.
Length not being modified during iteration, sure, but not the exact length. It's not possible in the general case to determine the exact length of a Vec at compile time, whereas Rust arrays can never change length, so their exact length is known statically (unless you've got a slice, which is different).
I think they mean a one-off constant index should never check.
let arr = [1, 2, 3, 4];
// Will not compile a bounds check
let x = arr[3];

let v = vec![1, 2, 3, 4];
// Imagine other code separates these two lines
// Might compile a bounds check
let x = v[3];
In the foo_arr case, the index lookup can be optimized out. In the foo_vec case, it can't be, because theoretically you might pass something with only 2 elements or less to foo_vec, and access to the third element will fail.
Same goes for e.g. the slice::windows vs slice::array_windows functions. In windows, you get a slice in the closure, and while the size is guaranteed by the implementation, without inlining the optimizer doesn't know about this guarantee. With array_windows, this size guarantee is communicated.
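For the foo_arr/foo_vec case above, something like this pair (the bodies here are a reconstruction, not the original snippet):

fn foo_arr(a: &[u32; 4]) -> u32 {
    // The length is part of the type, so the constant index is provably
    // in bounds and no check is emitted.
    a[2]
}

fn foo_vec(v: &Vec<u32>) -> u32 {
    // The length is only known at run time; a Vec with two elements or
    // fewer could be passed in, so the check (and panic path) stays.
    v[2]
}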
IIRC the Rust in Action book claimed iterators usually omit bounds checks, but if you manually index into the array then it's more likely the bounds check won't be optimized out.
"It seems like at least for this kind of large-scale, complex application, the cost of pervasive runtime bounds checking is negligible."
Right. The myth that bounds checking is expensive may have come from some terrible compilers in the early days. Berkeley Pascal was a notable example. Each bounds check was a subroutine call.
The common cases for bounds checks are:
- It's in an inner loop iterating over arrays. That's the case where the highest percentage of the time goes into the bounds check. It's also the case likely to be optimized out. This is the case people worry about.
- It's in code that doesn't do a lot of subscript operations. So it doesn't matter.
- It's in non-optimizable code that does a lot of subscript operations. That's unusual, but it does come up. A modern case might be Unreal Engine's Nanite meshes, which have lots of small offsets within the data stream. On the other hand, if you don't check that stuff, it's a great attack vector.
If even our performance-critical code moves to languages that always bounds-check, perhaps that will put pressure on ISA designers to add instructions for never-taken branches that just don't participate in any of the branch prediction logic. You'll always get a mispredict on failed bounds checks or final loop condition checks, but you'll avoid causing mispredictions elsewhere.
Yes, some architectures (including x86) have instructions that hint to the branch predictor, but I think they still end up influencing branch predictor state.
This is the sort of thing shamefully absent from RISC-V. Shameful because its designers were in a position to know better.
There are four versions of a conditional branch: to be predicted, almost always taken, almost never taken, and too random to waste a predictor on. Compilers nowadays have "likely" and "unlikely" intrinsics, but as yet offer none for the last. I think x86 now ignores its "likelihood" branch prefix because, back when it was introduced, it was too often wrong; and now compilers don't emit it because they know it is ignored.
The "too random to predict" would be good for sorting, where it could provide a 2x speedup. To get the effect today you need to use conditional-move instructions, which have become unfashionable. Getting your compiler to emit a cmov is tricky.
Intel added a special instruction (maybe in skylake?) to use for spinning on an atomic flag update, that just halts until the cache line being watched gets clobbered by a message from some other cache. Compilers don't emit it, unfortunately, even when spinning on what they know is an atomic flag.
There are zillions of such instructions that were good ideas but were never taken up by programmers who they were meant for.
If the instruction is tagged to keep it from claiming a branch prediction slot, it does not thereby evict some other branch from its prediction slot.
A prediction slot is where statistics on past behavior of a branch at a particular address accumulate. If no past behavior is needed to predict whether the branch will be taken, the predictor needs no statistics to decide, and statistics on some other branch instruction may accumulate in that slot instead.
I'm far from an expert here, but don't most modern branch predictors actually keep a shift register of the last few conditional branches, hash that branch history with the instruction's address, and use that as the index into the predictor's state table, and assume hash collisions won't happen? (Hash collisions only have a material impact if two conditional branches in the same hot code path collide, which is much more rare than general hash collisions.)
If my understanding is correct, slots aren't really "taken up", but rather avoiding the branch predictor reduces the probability of hash collisions for other branches.
I also have a question for the crowd: for indirect branch target prediction, are the BTB slots actually tagged with the address? If you have BTB miss, do you pre-emptively stall the pipeline? It's more power-efficient to do so, but maybe it's better to avoid tags and tag-checking and spend that transistor budget on more BTB slots, and just go ahead and mispredict if you get a collision on BTB slots.
Details of actual branch predictors are proprietary, so we must speculate.
You don't stall on branches you have no history of, but apply heuristics that don't depend on history. E.g., backward branches are usually taken, forward branches less so. I think newer chips can try both alternatives, to some degree, although this cannot be carried far as it blows up exponentially.
I was asking about indirect branches and the Branch Target Buffer... there are two main cases where you're making indirect jumps or indirect function calls in ELF C/C++ executables. The first is shared library function calls: an IP-relative call to a Procedure Linkage Table (PLT) entry, which contains an indirect jump through a Global Offset Table (GOT) entry. The other common case is C++ virtual function calls, where you're loading the vtable entry into a register and immediately making an indirect function call through that register. You'd like to be able to start speculatively executing that jump (library call)/virtual function call while the fetch of the GOT/vtable entry is still in progress.
Using only information resident in the architectural registers and the decoded instruction stream, there's no heuristic that can accurately tell you where printf (or this->do_my_thing()) lives in the address space. If the address for printf isn't in the BTB, do you just stall the pipeline while loading the proper GOT entry? I thought that in general, they just hashed the instruction address to get the BTB entry and blindly assumed that BTB entry would be the target address.. (and of course, issue the GOT read and check the assumption as soon as the real GOT entry is read). The alternative would be to stall the whole pipeline until the GOT could be read anyway, so might as well do something... unless you're really concerned about power consumption. Even in extremely power limited situations, assuming hash collisions in the BTB are rare might be less power-hungry than checking the source address in the BTB entry on each and every indirect branch.
Storing both the source and destination addresses in the BTB entry means twice as much space used, and extra latency and power usage in checking the source address. Does anyone here have good knowledge of when it's worth it to store both the source and destination address in the branch target buffer, instead of just storing the target address and presuming hash collisions are rare?
On another side note: has anyone heard of an architecture that supports IP-relative indirect function calls? Such an addressing mode would allow making a function call through the GOT entry without having to load the PLT entry into the instruction cache. I'm guessing that since there's a fair overhead for shared library calls, the most performance-sensitive code already avoids dynamic library calls, so there's not much to be gained in exchange for the added complexity of the extra call addressing mode.
Interesting questions, arbitrarily hard to research.
Attention seems to be reserved for events that occur in the sorts of tight loops that affect benchmark results. Thus, the first encounter with any branch may stall indefinitely long without (market) penalty.
Game coders have learned to batch their indirect calls according to actual destination, to throw the branch predictor a bone.
A consistent 5 ms difference in micro-benchmarks is definitely not "measurement noise". Noise averages out way before accumulating to 5 ms. There must be a reason, and it most likely relates to the change. So you can confidently say that removing bounds checking (at least the way you did it) is a regression.
... that being said, I'd argue that the most beneficial memory-safety feature of Rust is about temporal things (i.e. prevents UAF etc) instead of spatial ones.
Well, there is both random and systemic error in any experiment, and if 5ms is small relative to anything you'd expect (or there is some other reason to discount it) then it might be related to a problem in the benchmarking setup that's too small to be worth resolving. Any test is good to within some level of accuracy and they don't always average out to infinitely good if you rerun them enough times.
The 5ms isn't the key number. It's 5ms extra over a 28ms baseline; that's about an 18% difference. If your noise threshold is 18%, then I think you have to accept that the benchmark probably isn't any good for the stated task.
https://github.com/bheisler/criterion.rs is good for tests like that. It will give you much more than a single number and handle things like outliers. This makes identifying noisy tests simpler.
news_app/ranges_and_joins/cached
time: [28.583 ms 29.001 ms 29.526 ms]
thrpt: [277.45 Kelem/s 282.48 Kelem/s 286.61 Kelem/s]
news_app/ranges_and_joins/cached
time: [33.271 ms 33.836 ms 34.418 ms]
thrpt: [238.01 Kelem/s 242.11 Kelem/s 246.22 Kelem/s]
Given that 33.836/(1000/(242.11*1000)) ~= 8192, my understanding is that the time reported here is how long it takes to do 8192 queries. Also, it reports three values (which should be min, median and max). All this means the benchmark harness ran the test many times, and the 5 ms difference is not random at all.
Anything between nothing and a single "branch if int/pointer > int/pointer" that is almost always correctly predicted to _not_ jump.
This kind of bounds check is normally never violated (in well-formed code), so branch prediction gets it right nearly always.
It also (normally) only jumps in the bad case, which means that with correct branch prediction it can be really cheap.
And CPU "magic" tends to be optimized for this kind of check, as it appears in a lot of languages (e.g. Java).
Then, in many cases, the compiler can partially eliminate the checks.
For example, in many kinds of for-each iteration the compiler can infer that the loop-continuation check implies the bounds check. Combine that with loop unrolling, which reduces the number of continuation checks, and you might end up with even less.
Also, bounds checks tend to be an emergency guard, so you sometimes do checks yourself before indexing, and the compiler can often use those to eliminate the bounds check.
And even if you ignore all optimizations, it's (assuming in bounds) "just" at most one int/pointer compare (cheap) followed by a conditional branch that doesn't branch (cheap).
Branches add control flow which can inhibit other optimizations, such as vectorization. Compare the codegen of these two functions to double the first 64 elements in a u8 slice: https://rust.godbolt.org/z/hccWGv889
The unchecked version is fully unrolled and vectorized using multiple registers. The checked version must use a loop.
Part of what's going on here is that panics are "recoverable." If the out-of-bounds write occurs at index 61, this will panic, but the writes to lower indexes must have gone through. This means the panic cannot be hoisted out of the loop.
One technique is to add asserts before a block of code to hoist the checks out. The compiler is usually smart enough to know which conditions have already been checked. Here's a simple example: https://rust.godbolt.org/z/GPMcYd371
This can make a big difference if you can hoist bounds checks out of an inner loop. You get the performance without adding any unsafe {}.
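In a toy form, the trick is just (a sketch; no unsafe anywhere):

fn double_first_64(v: &mut [u8]) {
    // One up-front assert lets the optimizer see that every index below is
    // in bounds, so the per-iteration checks in the loop can disappear and
    // the loop can unroll/vectorize.
    assert!(v.len() >= 64);
    for i in 0..64 {
        v[i] *= 2;
    }
}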
Yeah this is because the error message printed contains the location of the error as well as the attempted index. Thus, there are differences between the bounds failures and the optimizer can't hoist the check out (plus probably some concerns due to side effects of opaque functions).
I wonder if there could be a flag to tell the compiler that you don't care about getting the exact distinct panic message for each bounds check, please optimize it. I suppose the assert is a flag, in a way, but I mean something more global and automatic. Maybe the compiler could emit a single shared basic block per function that just says "out of bounds access in function foo".
We've learned to accept that when you turn on optimizations, you lose some lines and variables from your debug info. This is a pretty similar trade-off.
You can use a custom `#[panic_handler]` item that ignores its `PanicInfo` arg and just aborts. The optimizer should notice that it doesn't need to bother with unique messages. However currently this requires either being a no_std program or compiling libstd without its handler, since otherwise its handler will conflict with yours.
Although, if one is building their own libstd anyway, then I believe compiling with `--build-std-features panic_immediate_abort` should also have the same effect.
Actually that made me think... you might be right. I just saw this opinion in earlier threads and repeated it but upon second inspection I either remembered it wrongly, or it is a wrong theory.
I think the issue is more the side effects than the panic message. I tried making a side-effect-free loop like for i in 0..44 { v[i]; } but it compiled down to a single "if the array is too short, call panic_bounds_check" check. On the other hand, if you replace the v[i] with the soon-to-be-stable black_box(v[i]), you see that the loop remains. It doesn't know what black_box is doing, so it has to run the code. The optimizer is in this case very happy if you have a check before the loop.
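Roughly what I was playing with (a sketch, assuming black_box is available as std::hint::black_box):

use std::hint::black_box;

fn touch_first_44(v: &[u32]) {
    // Without the assert (and without black_box) the whole loop collapses
    // into a single length check; black_box forces the loads to happen,
    // and the assert lets the optimizer drop the per-access checks.
    assert!(v.len() >= 44);
    for i in 0..44 {
        black_box(v[i]);
    }
}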
But Rust doesn’t have a spec (https://doc.rust-lang.org/reference/ gets closest, but explicitly states “Rust compilers, including rustc, will perform optimizations. The reference does not specify what optimizations are allowed or disallowed” and “this book is not normative”), so it doesn’t promise what kind of error you’ll get or when.
I would think a Rust compiler could hoist the check outside of the loop at least sometimes (it might not want to do it if the loop made changes that are visible from outside the loop, such as changing a volatile variable or doing I/O)
Rust doesn't have "volatile variables"; that's a weird C thing which then ends up in C++ and related languages because nobody wants to touch this mess.
The purpose of "volatile" is to mark MMIO so that the memory reads and writes don't get optimised out because they actually perform I/O. Everywhere you see volatile abused to do something else (yes including Unix signal handlers) that's because C has a hammer and so now everything looks like a nail to C programmers. In a few cases this abuse is guaranteed to work (and results in a lot of heavy lifting elsewhere to achieve that) in most cases it's just luck.
Rust has generic intrinsics which perform memory read/write that won't get optimised out, which is the actual thing C needed when volatile was invented.
This is a pretty unsatisfying benchmark. Can we pin the thread to a core and re-run a few times to de-noise? And how about using an actual CPU bound program? Even a significant speedup in the code will be lost in an application like this where you spend so much time in I/O.
I imagine the reason bounds check are cheap is because of the branch predictor.
If you always predict the in bounds path, the check is almost free.
You also do not really care about flushing the pipe on an out of bounds index, since very likely normal operations can not go on and you move over to handling/reporting the error, which likely has no need for significant throughput.
Also I would just like to note that safe arrays aren't a unique rust feature. Even writing your own in C++ is not hard.
It's not hard, but when the idiomatically used containers aren't bounds-checked, most code out in the wild won't be, either. Worse yet if you are writing a library and have to interop with other code which will also use those idiomatic types.
These days, C++ really should be compiled with bounds-checked indexing and iterators by default. Unfortunately, this is still not a scenario that is well-supported by tooling.
In my experience with them, the performance hit is far more substantial than a few percent, except in situations where the compiler can elide the checks altogether. For example, simply iterating over std::vector with _ITERATOR_DEBUG_LEVEL=1 is twice as slow if you work with iterators explicitly instead of writing it as a range-for.
And I'm not sure if it can be substantially better, given that C++ as designed simply needs to do more checks to ensure validity - to catch cases like comparing iterators belonging to different containers, or iterators getting invalidated when containers get resized or when the corresponding element is deleted outright. This all can't be done with simple ranges or slices, which is why VC checked iterators maintain a reference to the parent container.
Yep, unless your code is wrong, the bounds check will always be predictable. Which makes it free in a sense. But sometimes it will block other optimizations, and it takes up space in the caches.
> If you always predict the in bounds path, the check is almost free.
Note that you often only branch in the "bad" case, which means even on systems without branch prediction it tends to be not very expensive, and compilers can also eliminate a lot of bounds checks.
The instructions generated make a big difference. Modern processor specifications commonly quote how many instructions of a type can be "retired" in a cycle. They can retire lots of conditional branches at once, or branches and other ops, when the branches are not taken.
So it matters whether the code generator produces dead branches that can be retired cheaply. Probably, optimizers take this into account for built-in operations, but they know less about the happy path in libraries.
This is a motivation for the "likely" annotations compilers support. The likely path can then be made the one where the branch is not taken. Code on the unhappy path can be stuck off in some other cache line, or even another MMU page, never fetched in normal operation.
The cost seen here is likely from something else, though. Keeping array size in a register costs register pressure, or comparing to a stack word uses up cache bandwidth. Doing the comparison burns an ALU unit, and propagating the result to a branch instruction via the status register constrains instruction order.
Even those might not be at fault, because they might not add any extra cycles. Modern processors spend most of their time waiting for words from memory: just a few cycles for L1 cache, many more for L2 or L3, an eternity for actual RAM. They can get a fair bit done when everything fits in registers and L1 cache, and loops fit in the micro-op cache. Blow any of those, and performance goes to hell. So depending how close your code is to such an edge, extra operations might have zero effect, or might tank you.
Results of measurements don't generalize. Change something that looks like it ought to make no difference, and your performance goes up or down by 25%. In that sense, the 10% seen here is noise just because it is hard to know what might earn or cost you 10%.
In Rust there is `#[cold]` for functions as well as (nightly only) `likely(cond)`/`unlikely(cond)`, and with some tricks you can get something similar in stable Rust.
Also, branch paths which are guaranteed to lead to a panic tend to be treated as "unlikely", but I'm not sure how far that is guaranteed.
The reason performance decreased when he removed bounds checking is because asserting bounds is very useful to a compiler. Essentially, the compiler emits code like this:
1. if (x >= 0) && (x < arr_len(arr))
2. get element from array index x
3. else
4. throw exception
5. do more stuff
The compiler deduces that at line 5 0 <= x < arr_len(arr). From that it can deduce that abs(x) is a no op, that 2*x won't overflow (cause arrays can only have 2^32 elements), etc. Without bounds checking the compiler emits:
1. get element from array index x
2. do more stuff
So the compiler doesn't know anything about x, which is bad. The solution which apparently is not implemented in Rust (or LLVM, idk) is to emit code like the following:
1. assert that 0 <= x < arr_len(arr)
2. get element from array index x
3. do more stuff
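You can hand-write that kind of assumption in today's Rust with unreachable_unchecked; a sketch (and of course undefined behaviour if the assumption is ever violated):

// The impossible branch is deleted, but the compiler still learns that
// x < arr.len() and can use that fact downstream, e.g. to drop the check
// on arr[x] and to reason about arithmetic on x.
unsafe fn get_assumed(arr: &[u32], x: usize) -> u32 {
    if x >= arr.len() {
        std::hint::unreachable_unchecked();
    }
    arr[x]
}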
If what you said is true, then this is not Rust specific, and we should observe a performance improvement in all languages, should they introduce bounds-checking during code emission. Is there a compiler flag to turn on bounds-checking in GCC for C++ programs? Is there research comparing performance before and after that flag?
> The solution which apparently is not implemented in Rust (or LLVM, idk) is to emit code like the following:
The solution to what? You appear to be suggesting that compilers should insert bounds checks even when the programmer doesn't ask for them, which, sure, I'm down for that, but the whole point of this discussion is that the bounds checking that currently gets done has a negligible cost in practice.
Maybe I wasn't clear. The compiler should compile code without bounds checking as if bounds checking occur. The likely reason for the slowdown reported in the blog post is that Rust doesn't do this, which may be a compiler bug.
Can someone smarter than me enlighten me when you would consider disabling bounds checking for performance? In ways the compiler is not already doing so? The article starts with a bug that would have been prevented by bounds checking. It's like talking about how much faster a car would go if it didn't have to carry the extra weight of brakes.
I think the point of the article is the other way around: when starting from a language like C that doesn't have bound checking, moving to Rust will involve adding bounds checks and then an argument will be made that this will regress performance. So to test that hypothesis you start with the safe Rust code, and then remove the bounds check to emulate what the C code might be like. If, as in the article, you find that performance is not really affected, then it makes a C-to-Rust migration argument more compelling.
Sometimes the extra speed can be relevant. Knowing what the upside _can_ be can help inform the choice whether it's worth it.
Secondly, even assuming you want runtime bounds checking everywhere, then this is still a useful analysis because if you learn that bounds-checking has no relevant overhead - great! No need to look at that if you need to optimize. But if you learn that it _does_ have an overhead, then you have the knowledge to guide your next choices - is it enough to be worth spending any attention on? If you want the safety, perhaps there's specific code paths you can restructure to make it easier for the compiler to elide the checks, or the branch predictor to make em smaller? Perhaps you can do fewer indexing operations altogether? Or perhaps there's some very specific small hot-path you feel you can make an exception for; use bounds-checking 99% of the time, but not in that spot? All of these avenues are only worth even exploring if there's anything to gain here in the first place.
And then there's the simple fact that having a good intuition for where machines spend their time makes it easier to write performant code right off the bat, and it makes it easier to guess where to look first when you're trying to eek out better perf.
Even if you like or even need a technique like bounds checking, knowing the typical overheads can be useful.
I've seen bounds checks being compiled to a single integer comparison followed by a jump (on x86 at least). This should have a negligible performance impact for most programs running on a modern, parallel CPU. However, for highly optimized programs that constantly saturate all processor instruction ports, bounds checks might of course become a bottleneck.
I think the most preferable solution (although not always possible) would be to use iterators as much as possible. This would allow rustc to "know" the entire range of possible indexes used at runtime, which makes runtime bounds checking redundant.
In Rust you can usually use iterators to avoid bounds checks. This is idiomatic and the fast way to do things, so usually when using Rust you don't worry about this at all.
But, occasionally, you have some loop that can't be done with an iterator, AND it's part of a tight loop where removing a single conditional jump matters to you. When that matters, it is a really easy thing to use an unsafe block to index into the array without the check. The good news is that then, at least in your 1 million line program, the unsafe parts are only a few lines that you are responsible for being sure are correct.
I don't really see this as a person who wants to turn off the bounds checking for any real reason, but as someone who just wants to have an idea of what the cost of that bounds checking is.
Sometimes the programmer can prove that bounds checks are unnecessary in a certain situation, but the compiler can't prove that itself, and the programmer can't communicate that proof to the compiler. Bounds checks can result in lost performance in some cases (very tight loops), so unsafe facilities exist as a workaround (like `get_unchecked`).
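A sketch of that workaround (the names here are made up; the in-bounds contract has to be carried by the caller):

/// Safety: every index in `idx` must be less than v.len(); the caller
/// promises this, because the compiler can't verify it.
unsafe fn gather_sum(v: &[u64], idx: &[usize]) -> u64 {
    let mut total = 0;
    for &i in idx {
        // Skip the bounds check that the compiler would otherwise keep.
        total += *v.get_unchecked(i);
    }
    total
}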
>Can someone smarter than me enlighten me when you would consider disabling bounds checking for performance?
Because it is faster. Worst case you are triggering a branch miss, which is quite expensive.
>It's like talking about how much faster a car would go if it didn't have to carry the extra weight of brakes.
So? Every wasted CPU cycle costs money and energy. Especially for very high performance applications these costs can be very high. Not every car needs brakes, if it doesn't need to stop by itself and crashing hurts nobody they are just waste.
In some very hot code (mostly loops with math stuff where it prevents some optimizations and/or the compiler fails to eliminate it) it can lead to relevant performance improvements.
Because of this you might find some Rust code which opts out of bounds checks for such hot spots using unsafe code.
But this code tends to be fairly limited and often encapsulated into libraries.
So I agree that for most code doing so is just plain stupid. In turn I believe doing it on a program or compilation unit level (instead of a case by case basis) is (nearly) always a bad idea.
I would expect an iterator into a slice to not incur any bounds checking, as the compiler can deduce the end pointer one time as start + size. So idiomatic looping should be maximally efficient you'd hope.
The compiler shouldn't have to deduce anything, an Iterator shouldn't have a bounds check to begin with. It ought to be using unsafe operations under the hood, because it can guarantee they will only be called with valid arguments.
In Rust at least, once you instantiate the iterator, the array it's iterating over can't be mutated until the iterator is dropped, and that can be statically guaranteed at compile time. So you don't need to bounds-check at every access; you can decide at the outset how many iterations there are going to be, and doing that number of iterations will be known not to walk past the end.
I don't think that's always possible in practice: consider Vec<T>, whose size is only known at runtime. A Vec<T>'s iterator can only do runtime bounds checking to avoid walking past the end.
The Rust compiler guarantees that the memory location and size of the iterated array do not change during the operation. So the iterator can be a pointer that iterates until it points to the end of the array. There is no need to do bounds checks: the pointer only goes over the valid range.
In C/C++, the array can change. It might be moved, de-allocated or resized in the current or a synchronous thread. So the pointer that iterates until it is equal to the end pointer, might point to invalid data if the size, location or existence of the vector changes.
I think we're suffering from some fuzziness about what bounds checks we're referring to. Even in your example, you only need to check the size of the Vec<T> when you instantiate the iterator, not each time the iterator accesses an element, because at the time the iterator over the Vec<T>'s contents is instantiated, the Vec<T>'s size is known, and it can't change over the life of the iterator (because mutation is disallowed when there's an outstanding borrow). With a regular for-loop:
for i in 0..v.len() {
    println!("{:?}", v[i]);
}
you check the length at the top (the `v.len()`) and also for each `v[i]`. The first is unavoidable, but the second can be skipped when using an iterator instead, because it can be statically guaranteed that, even if you don't know at compile time what concretely the length is, whatever it ends up being, the index will never exceed it. Rust specifically differs from C++ in this respect, because nothing in that language prevents the underlying vector's length from changing while the iterator exists, so without per-access bounds checks it's still possible for an iterator to walk past the end.
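For contrast, the iterator form of the same loop (a minimal sketch):

fn print_each(v: &[i32]) {
    // Iterator form: the length is captured once when the loop starts,
    // and there is no per-element index check in the body.
    for x in v {
        println!("{:?}", x);
    }
}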
When I read "iterator" I think of an object that points into the vector and can be advanced. For Rust's vector, that is std::slice::Iter (https://doc.rust-lang.org/std/slice/struct.Iter.html). When you advance an iterator, you must do a bounds check if the vector is dynamically sized; otherwise, you don't know when to stop. I.e., if I have
let mut it = vec.iter();
println!("{:?}", it.next());
println!("{:?}", it.next());
println!("{:?}", it.next());
You are right that theoretically a range-based for loop that uses iterators does not need to do bounds checking because a compiler can infer the invariant that the iterator is always valid. In practice I don't know enough about LLVM or rustc to know whether this optimization is actually happening.
That's not an index-out-of-bounds check tho, that's just the regular loop exit condition. In a C-style for(i=0; i<42; ++i) that would be the i<42 part.
The check that can (and is) elided by the iterator here is the one that would be in the body of the loop, when you try to use i to index into the array. The nice thing about it is that it doesn't require the compiler to be smart at all.
Relevant part:
> for-in-loops, or to be more precise, iterator loops, are a simple syntactic sugar over a common practice within Rust, which is to loop over anything that implements IntoIterator until the iterator returned by .into_iter() returns None (or the loop body uses break).
(The other uses of the 'for' keyword it refers to are unrelated to loops)
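Roughly, the desugaring looks like this (a simplified sketch):

fn print_all_desugared(xs: Vec<u32>) {
    // Roughly what `for x in xs { println!("{x}") }` expands to:
    let mut iter = IntoIterator::into_iter(xs);
    loop {
        match iter.next() {
            Some(x) => println!("{x}"),
            None => break,
        }
    }
}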
Most online debates are filled with illogical opinions on theoretical issues. You get people on this site complaining that they have to spend money on a catalytic converter because it's not required for the car to run and only prevents other people from getting cancer.
For proper bounds checking in C you first need to communicate the "bounds" to be "checked" to all the places where it matters, just a pointer isn't enough. Unfortunately many old-school C APIs (including the stdlib) often don't pass pointer-size pairs around, but just pointers (and IMHO the biggest problem in the C world is not so much the language, but outdated APIs like the C stdlib or POSIX which have mostly been designed in the K&R era and which basically "encourage" unsafe usage).
Other than that, I doubt that any reasonably pragmatic and experienced C programmer will ever argue against runtime bounds checking from a performance point of view. Even in hot loops one can usually move the bounds checking to a place outside the loop.
My experience in some gamedev and embedded development circles is that there are several that will religiously argue against it, without ever having run a profiler to validate their perception of the world.
FWIW, I'm also coming out of the game dev world, and we shipped all our games with (custom) asserts enabled (which includes bounds checking in containers). I did regular performance tests and the enabled asserts were never a big enough performance hit to justify removing them (around 1..3% across an entire frame), the ability to get 'clean' crash reports from out in the wild, triggered by asserts instead of random segfaults was more than worth it (we also had special 'hot path asserts' which were not included in the release builds though, but those were rarely used). The only downside is that the executable gets around 25% bigger (and easier to reverse engineer) because of all the embedded assert-condition strings - but I guess this could have been solved differently (since we were using our own assert macros anyway).
That is what I have been doing since forever, in C and C++ land.
In C++, I kind of always strived to use the higher level abstractions, with support for bounds checking. First the compiler provided frameworks from pre-C++98, then configuring builds so that STL types also bounds check in release.
In C it is harder, but a mix of asserts and using TUs as if they were modules exposing ADTs (Modula-2/Pascal units style) can also be a way to help mitigate issues. Kudos to the Code Complete book for some of the ideas[0].
However, trying to sell this experience always feels quixotic in those environments, so kudos for actually doing it.
[0] - "Code Complete. A Practical Handbook of Software Construction"
A lot of people don't care about security but they do care about performance. Some examples: game engines, various solvers, simulations. This stuff runs 24h/day to produce results. A few percent here and there is a huge cost and time penalty, especially that in this kind of code bounds checks and similar have bigger consequences than in your typical CRUD app. Another example is programming for small devices which are not upgraded for years. It just kills user experience if you accumulate tens of milliseconds of performance (and again on those devices the penalty is usually bigger because CPUs are not that smart about prediction).
If you care about the security of processing outside input, there are many other options (or you can use sanitizers or safe practices in C). For a significant part of the C programming world it's just not important, though.
This is mostly true. The one exception is when bounds checking prevents the compiler from vectorizing your code. Then you may pay up to a 50% penalty for the innocent looking `if`.
What if a compiler were to only allow an array access when it can prove that it's in bounds? Wherever it can't you'd have to wrap the array access in an if, or otherwise refactor your code to help the compiler. Then you'd have no panicking at least and more predictable performance.
>What if a compiler were to only allow an array access when it can prove that it's in bounds?
Even very good static analysis tools have a hard time doing this. In a language like C++ it would effectively mean that very few index operations could be done naively, and compile times would increase significantly.
Performance would likely be worse as well, compared to the trivial alternative of using a safe array.
With bounds checking by default, even if the compiler can't statically prove that an index is in bounds, the check will be effectively "free" as long as the index is in practice always in bounds, the check/branch is inlined into the calling function, and you're not starved of branch-prediction resources: the branch will always be predicted correctly.
The article mentions measurement noise several times without addressing the uncertainty. It would help to add statistical tests; otherwise the spread could lead us to conclude the opposite of what really happened, just because of bad luck.
Back in the 80's, I was programming in Fortran on a VAX 780. I had converted a complex eigenvalue-eigenvector routine from Algol to Fortran, and, after verifying that it worked, decided to see how much bounds checking added to the runtime. I figured since so much array referencing was done that this would be a worst case scenario. In that particular situation, it added about 30%. I decided that this was well worth it and kept array bounds checking on in all my code.
That is pretty silly. Your eigen library had in it everything it needed to ensure its own safety, so anything more just added slowness.
A check performed in a library, or a condition ensured in a library, is wholly as good as the same work done in the compiler. Compilers are not magic, they are just programs.
Very interesting. One thing that I'm curious about is adding the bounds-check assertion to `get_unchecked` and seeing if that has a significant effect.
Happens from time to time; I've seen folks going around libraries looking for "perf unsafe" and benchmarking whether removing the unsafe actually lowered performance.
One issue on that front is a question of reliability/consistency: on a small benchmark, chances are the optimizations will always kick in to their full potential because there's relatively little code; codegen could be dodgier in a context where the code is larger or more complicated.
Then again the impact of the bounds check would also likely be lower on the non-trivial code (on the other hand there are also threshold effects, like branch predictor slots, icache sizes, …).
For Virgil, there is a switch to turn off bounds checking, for the sole purpose of measuring their cost. (It's not expected that anyone would ever do this for production code.) Bounds checks do not appear to slow down any program that matters (TM) by more than 2%. That's partly because so many loops automatically have their bounds checks removed by analysis. But still. It's negligible.
And yet globally both of these companies spend 20% of CPU cycles on TLB misses. Should we turn off virtual memory protections and go back to raw physical memory (or something)?
I am a Rust fan, but a 10% degradation in performance (29ms to 33ms) is not "a pretty small change" nor "within noise threshold". If the accuracy of the tests is +/- 10%, then that needs to be proven and then fixed. I didn't see any evidence in the article that there is, in fact, a 10% error, and it looks like there is a genuine 10% drop in performance.
To be clear, removing the bounds checks led to the observed performance degradation. Your statement beginning with "I am a Rust fan, but..." suggests that you might have interpreted it as the other way around.
Unsafe accesses do not act that way; they compile to exactly the same code as array accesses in C.
The tests aren't bunk. There are a variety of reasons why the assertions generated by array index checks can be useful to LLVM, and there is also a fair amount of noise in any end-to-end test like this. The main point is that the checks clearly aren't a primary bottleneck (which should be pretty obvious in a test that takes 30 ms and performs under 2000 bounds checks).
I had written a piece of code that was trying to process things at disk line speed, and bounds checking was the first bottleneck I discovered in profiling. I think this is very dependent on the application. It's unlikely to pop up in many applications, but if you're really trying to push the machine, it can well be the first bottleneck.
This is one of the main reasons you should use indices instead of pointers to store references to other objects (regardless of whether you're using Rust or C++): memory safety. Indices can be bounds-checked; pointers can't.
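A small illustrative sketch (names invented): cross-references are stored as indices into a pool, and every dereference goes through one checked accessor.

```c
#include <assert.h>
#include <stddef.h>

typedef struct {
    int    value;
    size_t next;        /* index into the pool, not a pointer */
} node;

typedef struct {
    node  *nodes;
    size_t len;
} node_pool;

/* A stale or corrupted index is caught here; a dangling pointer would
 * simply be dereferenced. */
static node *pool_get(node_pool *p, size_t index) {
    assert(index < p->len);
    return &p->nodes[index];
}
```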
99-point-something percent of "real" applications are bottlenecked by I/O, be it disk or network or synchronization (as in, waiting for something else to happen).
If you're not in a HPC or heavily resource constrained context you can safely ignore the performance implications of choosing whatever programming language you like.
This isn't really true; some languages handle I/O much better than others. We migrated a Python application to Go that was about as simple as you can get and was mostly blocked by I/O (call DB, transform storage format to Thrift, respond to caller with Thrift), and we saw SUBSTANTIAL improvements in performance: approximately a 40% improvement in p99 latency and, more notably, a 15x improvement in throughput.
MySQL is a huge amount of code doing a variety of things in each query -- networking, parsing, IO, locking, etc. Each of those can easily have significant and hard-to-predict latencies.
Benchmarking that needs special care, and planning for whatever it is you want to measure. A million trivial queries and a dozen very heavy queries are going to do significantly different things, and have different tradeoffs and performance characteristics.
The benchmark was specifically testing the hot path of a cached query in their MySQL caching proxy. MySQL wasn’t involved at all.
I agree completely that benchmarks need care, hence my point that the article is disappointing.
The author missed the chance to investigate why removing bounds checks seemed to regress performance by 15%, and instead wrote it off as “close enough.”
It would have been really interesting to find out why, even if it did end up being measurement noise.
"just a cached query" isn't like it's just a hash lookup. You're still doing IO, network protocol decoding, multithreaded synchronization, etc etc. It's certainly not a CPU bound program.
"Around Easter 1961, a course on ALGOL 60 was offered … After the ALGOL course in Brighton, Roger Cook was driving me and my colleagues back to London when he suddenly asked, "Instead of designing a new language, why don't we just implement ALGOL60?" We all instantly agreed--in retrospect, a very lucky decision for me. But we knew we did not have the skill or experience at that time to implement the whole language, so I was commissioned to design a modest subset. In that design I adopted certain basic principles which I believe to be as valid today as they were then.
"(1) The first principle was security: The principle that every syntactically incorrect program should be rejected by the compiler and that every syntactically correct program should give a result or an error message that was predictable and comprehensible in terms of the source language program itself. Thus no core dumps should ever be necessary. It was logically impossible for any source language program to cause the computer to run wild, either at compile time or at run time. A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to -- they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980, language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."
-Tony Hoare, 1980 Turing Award Lecture (https://www.cs.fsu.edu/~engelen/courses/COP4610/hoare.pdf)