Yeah, the NULL pointer is a pretty weird part of the standard - it makes some sense, but it leads to weird situations. That said, I think your last point needs a bit of clarification. What you've described is actually already impossible per the standard - with a few exceptions (like "one past the end" of an array), it is undefined behavior to use pointer arithmetic to form an address outside the bounds of an allocated object (because those pointer values may not even be valid for the architecture), so it is technically impossible to use pointer arithmetic on a valid pointer and end up with the NULL pointer - doing so would require calculating an address outside of the current object.
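To make that concrete, here's a rough sketch in C (names are just for illustration): the only out-of-range pointer value you're allowed to form is "one past the end" of an object, and anything further out is undefined before you ever dereference it.

```c
#include <stdio.h>

int main(void) {
    int arr[4] = {0};
    int *first = arr;      /* valid: points at arr[0]                     */
    int *end   = arr + 4;  /* valid: "one past the end" may be formed and
                              compared, just not dereferenced             */

    /* These are undefined behavior the moment the pointer is formed, not
       when it is used -- so there is no sanctioned arithmetic that walks
       a valid pointer onto the NULL pointer: */
    /* int *before  = arr - 1;    */
    /* int *way_out = arr + 1000; */

    printf("%p %p\n", (void *)first, (void *)end);
    return 0;
}
```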
So the question of what happens when you actually do that is purely up to your compiler and architecture. In most cases, if you manage to produce the NULL pointer value through pointer arithmetic, it will still compare equal to the 'actual' NULL pointer and be treated as if it were a literal 0, so that doesn't let you get around NULL pointer checks. The only situation where it really matters is when the NULL is only known at runtime, since that may have implications for optimizations. Because dereferencing the NULL pointer is undefined behavior, the compiler can remove such dereferences (and code that depends on them), but it can't remove a dereference entirely if it can't prove the pointer is always NULL. There is nothing preventing the compiler from adding extra NULL checks that aren't in your code, however, which would foil the plan of generating a NULL pointer at runtime just to dereference it. So unless your compiler explicitly allows otherwise, you cannot reliably access the memory located at the value of the NULL pointer - as far as the standard is concerned, there is no such thing.
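As a rough sketch of why that matters for optimization (the function names here are made up for illustration): an unconditional dereference lets the compiler assume the pointer is non-NULL, and a null pointer smuggled in at runtime still compares equal to NULL on typical implementations.

```c
#include <stdint.h>
#include <stdio.h>

/* Because the unconditional dereference would be undefined behavior for a
   NULL argument, the compiler may assume p != NULL and delete the check. */
int read_flag(int *p) {
    int value = *p;
    if (p == NULL)      /* a typical optimizer is free to drop this branch */
        return -1;
    return value;
}

int main(void) {
    int x = 42;
    printf("%d\n", read_flag(&x));

    /* Converting a runtime integer 0 to a pointer is implementation-defined,
       but on typical implementations the result still compares equal to
       NULL, so it buys you nothing against NULL checks. */
    uintptr_t zero = 0;
    int *q = (int *)zero;
    printf("%d\n", q == NULL);
    return 0;
}
```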
Talking specifically about the ARM vector table, that largely works OK because only the CPU ever has to actually read that structure - normally your C code won't have to touch it (if you even define it in your C code; the example ARM programs define the vector table in assembly instead). If you ever did have a reason to read the first entry of that table from C, though, you could potentially run into issues (though I would consider it unlikely, since the location of the vector table isn't decided until link time, at which point your code is already compiled).
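For reference, a C-side vector table usually looks something like the rough sketch below. The `.isr_vector` section name, the `_estack` symbol, and the handler names are assumptions that have to match your particular linker script and startup code, and initializing `void *` entries from function pointers is a common compiler extension rather than strict ISO C.

```c
#include <stdint.h>

extern uint32_t _estack;      /* top-of-stack symbol from the linker script */
void Reset_Handler(void);     /* startup entry point                        */
void Default_Handler(void);   /* catch-all handler for unused vectors       */

/* Placed in a dedicated section so the linker script can pin it wherever
   the core expects the vector table (address 0, or wherever VTOR points). */
__attribute__((section(".isr_vector"), used))
const void *vector_table[] = {
    &_estack,         /* entry 0: initial stack pointer value */
    Reset_Handler,    /* entry 1: reset vector                */
    Default_Handler,  /* NMI                                  */
    Default_Handler,  /* HardFault                            */
    /* ... remaining exception and interrupt entries ... */
};
```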
On that note, it's worth adding that POSIX requires NULL to be represented by all zero bits, which is useful. Lots of (probably most) programs actually rely on this behavior, since it is pretty ubiquitous to use `memset` to clear structures, and that only writes zero bits.
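A small sketch of that idiom (the struct and field names are just for illustration): the pointer members only end up as null pointers because the all-zero-bits representation happens to be the null pointer on these platforms; the strictly portable spelling would be `struct node n = {0};` or member-wise assignment.

```c
#include <stddef.h>
#include <string.h>

struct node {
    struct node *next;
    size_t       len;
    char        *name;
};

int main(void) {
    struct node n;
    /* The ubiquitous idiom: zero the whole thing in one call. This only
       yields null pointer members where the null pointer is represented
       as all zero bits, which is what the POSIX guarantee buys you. */
    memset(&n, 0, sizeof n);
    return (n.next == NULL && n.name == NULL) ? 0 : 1;
}
```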
(Sorry for the long comment, I've just always found this particular part of the standard to be very interesting)
Again, I am unsure how this relates to "The Billion Dollar Mistake" I linked above and was referring to.
I am not sure I follow your point. The reason modern systems don't map memory at 0x0 is that NULL pointers exist. It is a reflection of a leaky abstraction equating pointers with references. That leaky abstraction has (or so the argument goes) caused >$1B in software bugs.
The other mindset would be "malloc always has to allocate memory or otherwise indicate failure; you cannot cast from an integer to a pointer; you cannot perform arithmetic on a pointer to get a new one; you must demonstrate there are no dangling references when freeing". This is essentially what Rust did for safe code.
The reason I express so much skepticism is that Rust is the first time I've seen the problem solved well in the same problem space as C. Ada has problems of its own. It's more about how small assumptions can have massive economic (and health, and safety, and ethical) consequences. Certainly comparable to a speculative execution bug leaking memory in an unprotected fashion--in both cases the bugs find their way in through human error when evaluating enormously complex systems for incorrect assumptions :)
Tbh it’s not the most meaningful of statements, but it’s food for thought.