A bit of clickbait. A brief explanation of why there is such a huge perf gap on a well-solved problem:
1. The `memcpy` idea is trivial. I know it's there to serve a different argument, but claiming a "professional" can't think of `memcpy` is clickbait.
2. Having a bad-character table makes it a totally different algorithm. A "professional" should at least finish reading their textbook instead of writing the naive algorithm of Version 3 (see the sketch below).
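To make point 2 concrete, here is a minimal Boyer-Moore-Horspool sketch (my own illustration, not code from the article): the bad-character table alone is what separates it from the naive scan, because it lets the search skip up to the full needle length on a mismatch instead of advancing one byte at a time.

```rust
// Boyer-Moore-Horspool: a bad-character table turns the naive O(n*m) scan
// into a search that can skip up to m bytes per mismatch.
fn horspool(haystack: &[u8], needle: &[u8]) -> Option<usize> {
    let (n, m) = (haystack.len(), needle.len());
    if m == 0 {
        return Some(0);
    }
    if m > n {
        return None;
    }
    // Bad-character table: how far the window may shift when a given byte
    // is aligned with the last position of the needle.
    let mut skip = [m; 256];
    for (i, &b) in needle[..m - 1].iter().enumerate() {
        skip[b as usize] = m - 1 - i;
    }
    let mut pos = 0;
    while pos + m <= n {
        if &haystack[pos..pos + m] == needle {
            return Some(pos);
        }
        pos += skip[haystack[pos + m - 1] as usize];
    }
    None
}

fn main() {
    assert_eq!(horspool(b"I can't believe it", b"believe"), Some(8));
    assert_eq!(horspool(b"abcdef", b"xyz"), None);
    println!("ok");
}
```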
This page has some interesting ideas about "safety" stated without any preconditions.
For starters, Java is safe, probably safer than most of the C-based candidates on this list. And that is comparing Java without tooling against C with tooling.
The open-source RISC-V toolchain accepts any kind of extension, basically out of the box, as long as it stays in the reserved encoding space. The current ratifying body holds no special toolchain or legal influence beyond its publicity, so it is irrelevant in this comparison.
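For what it's worth, here is a minimal sketch (my own, not from the linked discussion) of what "out of the box" looks like in practice: the stock GNU/LLVM RISC-V assembler will encode an instruction in the custom-0 opcode space through the `.insn` directive without any toolchain patches. The `custom_mac` name and the zeroed funct fields are made up for illustration, and the instruction will only do something useful on hardware that actually implements it.

```rust
// Hypothetical instruction in the RISC-V custom-0 opcode space (major opcode
// 0x0b). The stock assembler encodes it via `.insn`; no toolchain changes.
#[cfg(target_arch = "riscv64")]
fn custom_mac(a: u64, b: u64) -> u64 {
    let out: u64;
    unsafe {
        core::arch::asm!(
            // .insn r <opcode>, <funct3>, <funct7>, rd, rs1, rs2
            ".insn r 0x0b, 0x0, 0x0, {rd}, {rs1}, {rs2}",
            rd = out(reg) out,
            rs1 = in(reg) a,
            rs2 = in(reg) b,
        );
    }
    out
}

#[cfg(not(target_arch = "riscv64"))]
fn custom_mac(_a: u64, _b: u64) -> u64 {
    unimplemented!("only meaningful on a RISC-V target");
}

fn main() {
    // The point is only that the RISC-V build assembles out of the box;
    // executing it requires hardware that implements the custom instruction.
    let _ = custom_mac as fn(u64, u64) -> u64;
}
```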
My gosh, what has been happening with this project? At first they targeted shipping in Chrome 97, then 101, then 105, then 109, and now 113. Are we building fusion reactors that are perpetually four months away?
Well, this is bad. I just checked the code: `RandomState` invokes `wasi::random_get()` on the wasi target. That means virtually every Rust std program would require the random-number-generator privilege in the upcoming WASI, unless the author intentionally avoids the default `BuildHasher` and somehow tree-shakes the call out(?).
This seems to be a fundamental conflict of interest between server-side programming and client-side sandboxing. Now the preventive security measures cost us not only performance left on the table but also privilege spam.
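For reference, one workaround today (my own sketch, not an official recommendation) is to plug in a fixed-key hasher so that `RandomState` is never constructed; whether that alone is enough for the linker to drop the import is exactly the "(?)" above. The `DeterministicMap` alias is a name I made up, and this trades away the HashDoS resistance the random seed exists to provide.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::BuildHasherDefault;

// A HashMap that never builds RandomState: BuildHasherDefault constructs a
// DefaultHasher with fixed keys, so nothing here asks the OS for randomness.
// The trade-off is losing the HashDoS protection the random seed provides.
type DeterministicMap<K, V> = HashMap<K, V, BuildHasherDefault<DefaultHasher>>;

fn main() {
    let mut counts: DeterministicMap<&str, u32> = DeterministicMap::default();
    *counts.entry("wasi").or_insert(0) += 1;
    println!("{:?}", counts);
}
```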
This article puts forward a valuable proposition. I would still argue that open source software itself is as political as it gets (forked code has no value; the community of people behind the code is what matters), but I do agree that such things should go through some kind of process. It should not be the personal FM radio of the editor-of-the-day.
RISC no longer has the clear boundary it had 30 years ago. Nowadays RISC just means an ISA with most of the following properties:
1. Load/Store architecture
2. Fixed-length instructions or few length variations.
3. Highly uniform instruction encoding.
4. Mostly single-operation instructions.
All four points have direct benefits for hardware design. And compressed ISAs like RVC and Thumb check every one of them.
On the contrary, "fewer instruction types" and "orthogonal instructions" never had any real benefit beyond perceived aesthetics, and as a result they have long been abandoned.
Finally some updates after all these years. I'm curious about the relationship between the older "nanoprocess" idea and the current Component Model. The nanoprocess model promised fine-grained containerization; I wonder whether the Component Model still targets the same granularity.
On ISA design, it is just so wrong to endorse the opinion of a single person. ISA design has three significant parties of interest: IC designers, compiler authors, and software developers.
No one can master these three fields all at once. No one. Not even Linus (who would be a master software developer + a decent compiler specialist).
Chips and ISAs are almost unrelated things. No matter how much you emphasize end-product performance, you cannot deny there SHOULD be a metric for comparing ISA designs.