Thanks for the insightful post. I stand corrected on the TCO issue: I had indeed misread the documentation, and I've posted a correction to my original post accordingly. The bounds-checking issue is also resolved to my satisfaction.
> I'm not sure loop-based control flow is actually necessary, since (I believe) one can, say, parallelise over an enumerated iterator (e.g. slice.iter().enumerate()), which contains the indices. One can then .map() and read from the appropriate indices as required.
The loop-based control flow is, strictly speaking, never necessary, as the two formulations are mathematically equivalent. It's just that for many algorithms the loop-based approach is more intuitive (to many people, at least), more readable, and involves less boilerplate. Your simple_parallel library looks syntactically closer to what I'd like, though I'm not sure whether it's still maintained.
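To make the contrast concrete, here's a quick sketch of my own (not from the thread) of the same 1-D stencil written both ways. With a data-parallel library such as rayon, the iterator chain would simply start with par_iter() instead of iter():

```rust
// Loop-based: neighbour reads via explicit indices.
fn stencil_loop(xs: &[f64]) -> Vec<f64> {
    let mut out = vec![0.0; xs.len()];
    for i in 1..xs.len().saturating_sub(1) {
        out[i] = 0.5 * (xs[i - 1] + xs[i + 1]);
    }
    out
}

// Iterator-based: enumerate() supplies the index, so the closure can
// still read the neighbours it needs. A data-parallel library (e.g.
// rayon) would let this same chain run in parallel.
fn stencil_iter(xs: &[f64]) -> Vec<f64> {
    xs.iter()
        .enumerate()
        .map(|(i, _)| {
            if i == 0 || i + 1 == xs.len() {
                0.0 // boundaries left at zero, as in the loop version
            } else {
                0.5 * (xs[i - 1] + xs[i + 1])
            }
        })
        .collect()
}

fn main() {
    let xs = [1.0, 2.0, 3.0, 4.0];
    assert_eq!(stencil_loop(&xs), stencil_iter(&xs));
}
```

Both produce the same result; the difference is purely in which formulation reads more naturally, which was exactly my point.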
> To shortcut your search: https://doc.rust-lang.org/std/?search=fast . (I apologise that I didn't link it earlier, I was on a phone.)

Thanks. I'm glad the option is there, but the current implementation looks quite tedious to use in long numeric expressions and would greatly sacrifice readability. Ideally, I'd like something along the lines of fastmath { <expression> } blocks, or function- or loop-level annotations. Is something like that possible with Rust's metaprogramming, perhaps?
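For what it's worth, here's a toy sketch of the kind of thing I have in mind. The fastmath! macro is entirely hypothetical, and the fast_add/fast_mul helpers are stand-ins for the unstable fast-math intrinsics (std::intrinsics::fadd_fast and friends), so this compiles on stable Rust but doesn't actually change floating-point semantics:

```rust
// Placeholders: a real version would call the unstable fast-math
// intrinsics (e.g. std::intrinsics::fadd_fast) instead of plain ops.
#[inline(always)]
fn fast_add(a: f64, b: f64) -> f64 { a + b }
#[inline(always)]
fn fast_mul(a: f64, b: f64) -> f64 { a * b }

// Toy expression-rewriting macro. Caveat: matching `+` before `*`
// happens to give the right shape for `a + b * c`, but mis-parses
// `a * b + c` as `a * (b + c)`; a serious fastmath! would be a
// procedural macro walking the real syntax tree. Parenthesise to
// be explicit.
macro_rules! fastmath {
    (($($inner:tt)+)) => { fastmath!($($inner)+) };
    ($a:tt + $($rest:tt)+) => { fast_add(fastmath!($a), fastmath!($($rest)+)) };
    ($a:tt * $($rest:tt)+) => { fast_mul(fastmath!($a), fastmath!($($rest)+)) };
    ($x:expr) => { $x };
}

fn main() {
    let (x, y, z) = (1.0_f64, 2.0, 3.0);
    // expands to fast_add(x, fast_mul(y, z))
    assert_eq!(fastmath!(x + y * z), 7.0);
    assert_eq!(fastmath!((x + y) * z), 9.0);
}
```

So at least the block form seems expressible with macros today, modulo the parsing caveat; the function- or loop-level annotation form would presumably need attribute (procedural) macros.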
> - numerics/scientific computing/machine learning are slowly taking over the world. It is bad to have random/occasional heisenbugs in systems that influence decisions from the personal to the international.
That's theoretically true but, in my opinion, practically irrelevant. The thing about numerical kernels is that their input is constrained by the mathematics involved and their output is rigorously verifiable. In practice, I can't imagine a realistic case where a memory safety error would not be caught by the usual verification tests that any numeric code of any importance is routinely subjected to. That's why programmers in this domain don't normally think about memory safety as a separate issue at all; it's just a small and not particularly remarkable part of the normal correctness testing. A guarantee of memory safety still doesn't free you from having to do all those tests anyway. Obviously, the situation is very different for systems software that takes inherently unpredictable user input that's difficult to sanitize.
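To illustrate what I mean by the usual verification tests (a generic example of mine, not anything from the thread): the kernel's output is checked against an analytically known value, so an indexing bug that reads the wrong elements, or garbage, surfaces as a numeric mismatch:

```rust
// A dot-product kernel and the sort of output check numeric code is
// routinely subjected to: compare against a closed-form answer.
fn dot(a: &[f64], b: &[f64]) -> f64 {
    assert_eq!(a.len(), b.len());
    let mut s = 0.0;
    for i in 0..a.len() {
        s += a[i] * b[i];
    }
    s
}

fn main() {
    let n: usize = 1000;
    let xs: Vec<f64> = (1..=n).map(|k| k as f64).collect();
    let ones = vec![1.0; n];
    // Sum of 1..=n has the closed form n(n+1)/2; an off-by-one or
    // out-of-bounds read in dot() would break this equality.
    let expected = (n * (n + 1)) as f64 / 2.0;
    assert!((dot(&xs, &ones) - expected).abs() < 1e-9);
}
```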
> - games are very, very often touching the network these days, and thus are at risk of being exploited by a malicious attacker.
The argument here is, I think, much stronger than for numerical software. Nevertheless, even for online games (which are still only a subset of computer games), network data is comparatively easy to sanitize, and memory safety issues wouldn't typically lead to exploitable attacks. I've never heard of a game used as a vector for a serious attack in any context, though it's at least somewhat conceivable in theory.