So you have linked me to a list of security vulnerabilities - are you suggesting all security vulnerabilities need to be addressed by the language spec and the compiler? I doubt such a wide and diverse range of problems can be addressed at such a low level.
At a glance, most of those look like bounds checking problems. tcc (the Tiny C Compiler) lets you compile with bounds checking on all pointer dereferences.
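As a rough illustration (assuming I remember tcc's `-b` bounds-checking flag correctly - it instruments allocations and pointer accesses at runtime), a program like this should be caught at the out-of-bounds write instead of silently corrupting memory:

```c
#include <stdlib.h>

int main(void) {
    int *p = malloc(4 * sizeof *p);   /* a block of 4 ints */
    if (!p)
        return 1;
    p[4] = 42;   /* one element past the end of the block */
    free(p);
    return 0;
}
```

Something along the lines of `tcc -b example.c -o example && ./example`, if that flag is still what it was last time I used it.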
C++ STL implementations generally have bounds checking all over the place - this requires compiling in debug mode IIRC, but it is evidence of the possibilities of safety without language or compiler support.
Lisp-style meta-programming can get you basically anything you want from a language without language or compiler support.
> So you have linked me to a list of security vulnerabilities - are you suggesting all security vulnerabilities need to be addressed by the language spec and the compiler?
These vulnerabilities are typically in code implemented in C or C++.
I am suggesting that if you want to have a secure system, they have to be addressed by the language; if you are happy with systems that have exploitable memory management bugs, there are lots of existing UNIX variants to choose from.
> I doubt such a wide and diverse range of problems be addressed at such a low level.
May I suggest reading about some low-level languages that have been used in production and that address these problems with varying degrees of success, for example ESPOL (from the 1960s), Mesa/Cedar, Modula-2, Ada, or Rust.
> tcc (Tiny C Compiler) allows you to compile in bounds-checking with all pointer dereferences.
This sounds impossible; there's not enough information in the C type system to know, in the general case, what the bounds are.
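A trivial sketch of what I mean: once an array decays to a pointer at a call boundary, the callee's type carries no length at all, so there is nothing for a checker to consult.

```c
#include <stdio.h>

/* The parameter type is just `int *`; the caller's array length is not part of it. */
void print_third(int *p) {
    printf("%d\n", p[2]);   /* nothing in the type says whether index 2 is valid */
}

int main(void) {
    int eight[8] = {0};
    int two[2]   = {0};
    print_third(eight);   /* fine */
    print_third(two);     /* out of bounds, but it type-checks identically */
    return 0;
}
```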
> C++ STL implementations generally have bounds checking all over the place
This wasn't true last I checked for libc++ (the most "modern" implementation), and isn't applicable for any non-STL container in a program anyway; nothing prevents using a plain C array.
> this requires compiling in debug mode IIRC
I'm not aware of anybody who enables this in production because the culture of C++ is all about performance.
> but it is evidence of the possibilities of safety without language or compiler support
Language or compiler support would reduce the performance overhead, because it makes it easier for the compiler to elide checks that can be statically proven to always succeed.
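A hypothetical sketch of the kind of elision I mean (the `bounds_check` helper is made up for illustration): a bolted-on runtime scheme has to check every access, whereas a compiler that understands the bound can prove the index stays in range and drop the per-access check entirely.

```c
#include <stdlib.h>

/* Hypothetical per-access check a bolt-on scheme would have to insert. */
static void bounds_check(size_t index, size_t length) {
    if (index >= length)
        abort();
}

long sum_checked(const int *a, size_t n) {
    long total = 0;
    for (size_t i = 0; i < n; i++) {
        bounds_check(i, n);   /* redundant: i < n is already guaranteed by the loop condition */
        total += a[i];
    }
    return total;
}

/* With language-level knowledge of the bound, the compiler can see that
 * i < n always holds and emit the unchecked loop directly. */
long sum_unchecked(const int *a, size_t n) {
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += a[i];
    return total;
}
```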
> Lisp-style meta-programming can get you basically anything you want from a language without language or compiler support.
Meta-programming cannot remove features from a language, which is typically the problem here.
> This sounds impossible, there's not enough information in
> the C type system to know in the general case what the
> bounds are.
Good point. My thought was that the runtime could keep track of all the mallocs and then check, on every dereference, whether the memory accessed falls in a valid region of the stack or the heap. But this doesn't sound workable: you'd have to resolve every single dereference against a set of ranges in memory (`O(log n)` for every `foo[bar]`, I would imagine).
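A rough sketch of that idea (all names hypothetical): intercept allocations into a table of `[base, base + size)` ranges kept sorted by base address, and have every instrumented dereference binary-search the table, which is where the `O(log n)` per access comes from.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical allocation table: ranges sorted by base address. */
typedef struct { uintptr_t base; size_t size; } range_t;

static range_t g_ranges[1024];
static size_t  g_nranges;

/* Would be called from wrappers around malloc/free to keep the table current. */
void track_alloc(void *p, size_t size);   /* sorted insertion, omitted */
void track_free(void *p);                 /* removal, omitted */

/* The check every instrumented `foo[bar]` would have to perform: O(log n). */
int is_valid_access(const void *p, size_t access_size) {
    uintptr_t addr = (uintptr_t)p;
    size_t lo = 0, hi = g_nranges;
    while (lo < hi) {                     /* binary search for the containing range */
        size_t mid = lo + (hi - lo) / 2;
        if (addr < g_ranges[mid].base)
            hi = mid;
        else if (addr >= g_ranges[mid].base + g_ranges[mid].size)
            lo = mid + 1;
        else
            return addr + access_size <= g_ranges[mid].base + g_ranges[mid].size;
    }
    return 0;   /* not inside any tracked allocation */
}
```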
Perhaps every pointer could be implemented not as a bare raw pointer but as a raw pointer with a "valid" range attached, indicating how many bytes before and after the pointer belong to the original block the pointer was derived from. Any dereference would check that it falls within that range, which is only O(1) for every `foo[bar]`.
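In other words a "fat pointer". A minimal sketch of the layout and check I have in mind (names are illustrative, not from any real implementation):

```c
#include <stddef.h>
#include <stdlib.h>

/* A pointer carrying the bounds of the block it was derived from. */
typedef struct {
    char *ptr;    /* the raw pointer */
    char *base;   /* start of the original block */
    char *limit;  /* one past the end of the original block */
} fat_ptr;

fat_ptr fat_malloc(size_t n) {
    char *p = malloc(n);
    fat_ptr f = { p, p, p ? p + n : p };
    return f;
}

/* Pointer arithmetic only moves .ptr; the bounds travel along unchanged. */
fat_ptr fat_add(fat_ptr f, ptrdiff_t off) {
    f.ptr += off;
    return f;
}

/* The O(1) check performed at every dereference of n bytes. */
int fat_deref_ok(fat_ptr f, size_t n) {
    return f.ptr >= f.base && f.ptr + n <= f.limit;
}
```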
> This wasn't true last I checked for libc++ (the most "modern" implementation)
> ...
> I'm not aware of anybody who enables this in production
> because the culture of C++ is all about performance.
This conversation is happening in the context of what is possible without language (and maybe without even compiler) support. The original comment I replied to was saying neut had fatal flaws because it didn't address borrowing and lacked some features that Rust has. The fact that some people refuse to use C++ in certain ways is interesting but doesn't detract from my original point.
> Meta-programming cannot remove features from a language, which is
> typically the problem here.
The simplest type of Lisp macros only add features, but it is possible to create a new kind of "top-level context" (for want of a better term). Your macro system does have to be aware of all the primitives in your Lisp dialect, though, for this to work. There is a certain term for this that I can't recall at the moment.
> Perhaps every pointer could be implemented not only as a raw pointer but one with a "valid" range attached
The main problem with this is that it's incompatible with every existing system C ABI.
There's also the problem of real-world C code converting pointers to integers and back again, but the compiler could define `uintptr_t` and `intptr_t` accordingly, and code that uses other integer types is broken anyway.
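Concretely, continuing the hypothetical `fat_ptr` sketch from above: the struct no longer has the size or calling convention of a plain `void *`, so existing ABI boundaries break, and a one-word integer round-trip can only preserve the raw address, not the bounds, unless `uintptr_t` is widened to carry the whole thing.

```c
#include <stdint.h>

/* The hypothetical fat_ptr from the earlier sketch. */
typedef struct { char *ptr; char *base; char *limit; } fat_ptr;

/* ABI problem: existing interfaces take a one-word pointer in a register;
 * a three-word struct cannot be passed there without a translation layer. */
void some_existing_api(void *p);

/* Integer round-trip problem: if uintptr_t stays one word wide, the
 * conversion keeps only the address and silently drops the bounds. */
uintptr_t fat_to_uint_lossy(fat_ptr f) {
    return (uintptr_t)f.ptr;   /* f.base and f.limit are lost here */
}

/* Hence the suggestion: make uintptr_t/intptr_t wide enough to carry the
 * whole fat pointer, and accept that code round-tripping pointers through
 * other integer types was never portable in the first place. */
typedef struct { uintptr_t addr, base, limit; } wide_uintptr;
```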
> it is possible to create a new kind of "top-level context"
I'm not familiar with that; it sounds like quite a lot of effort, but I'll grant that it can likely achieve what you claim.