It's kind of hard to decode the explanation given that it spends a lot of text on useless formalism instead of substance, but it seems like this language has three fatal problems:
1. Borrowed pointers are not a first-class concept, but just syntax sugar over returning a parameter from a function, i.e. in Rust parlance, &T or &mut T are not actual types
2. There is no mention of how to achieve safe shared mutable data, or even just shared read-only data, i.e. the equivalent of Rust's Arc<Mutex<T>> or Arc<T> (see the Rust sketch below), which probably means the language has no support
3. It seems there is no way for a struct/record/tuple to contain another non-primitive data type without the latter being allocated on the heap
So, as far as I can tell, aside from the dependent types this language is much less powerful than Rust and cannot fully utilize the CPU, and hence is far from the goal of having a perfect Rust+dependent-types language.
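(For concreteness, here is a minimal Rust sketch of the idiom point 2 refers to: shared mutable state behind Arc<Mutex<T>>, with plain Arc<T> covering the read-only case.)

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Arc gives shared ownership across threads; Mutex serializes mutation.
    let counter = Arc::new(Mutex::new(0u32));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
    // For shared *read-only* data, Arc<T> alone suffices (no Mutex needed).
}
```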
I don't think it aims to be "as powerful as Rust", and I think that's okay. It's decidedly a narrow, pure, opinionated language, while Rust has practicality as one of its main priorities. This language is more like Haskell, or even more Haskell than Haskell, in that while some people might want to do real projects in it, its main purpose (from what I can tell) is to explore a concept. Nothing wrong with that.
Just because you're not capable of interpreting the formalism doesn't make it useless or not substantive. This readme wasn't written for you; it was written for a type theorist.
Eh, I'm not sure about that. A type theorist would expect to see a clear description of what translations are performed as part of compiling this language, and a rigorous argument that this helps solve a real issue, e.g. wrt. memory management. It's hard to see either in the linked readme - it reads like a description of some promising, rough experiment, but not quite fully worked out in a way that would make it clearly understandable to uninvolved folks. I'm not saying that there's anything wrong with that, and it's definitely on par with many Show HN's. Just trying to call for some perspective.
The parent comment comes off as arrogant but so does this one. Oftentimes concepts like these are presented in language that's more impenetrable than necessary. I don't know if that's true here but it's a genuine problem.
> Practically, this means that you can write your program in the ordinary lambda-calculus without any extra restrictions or annotations, and at the same time are allowed to control how resources are used in the program.
You can rewrite any Haskell program into the ordinary lambda-calculus, and this can be automated. The only possible problem might be that Haskell uses lazy evaluation, and I don't expect this to interfere with the control of resources but I could be wrong.
Haskell also allows you to share heap-allocated immutable data structures between threads, right? I think someone said this language doesn't support reference sharing across threads, but that could've been wrong
But as long as you're not storing closures in those shared data-structures, these structures will always be acyclic, and could be collected through simple reference counting. Perhaps that's an acceptable compromise.
cyclic, but without a closure? unless "closure" is so broad as to cover essentially any haskell value. it's not going to be obvious when there is a cyclic reference (e.g. if you use `let x = repeat 1` instead of the definition above)
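(To make the refcounting caveat concrete, here is a hypothetical Rust analogue, since Rc behaves the same way: a one-node cycle that plain reference counting never frees.)

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A node whose tail can be rewired after construction.
struct Node {
    value: i32,
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { value: 1, next: RefCell::new(None) });
    // Tie the knot: a one-element cycle, analogous to `let xs = 1 : xs`.
    *a.next.borrow_mut() = Some(Rc::clone(&a));

    // `a` plus the self-pointer give a strong count of 2; dropping `a`
    // only brings it down to 1, so the node is never freed.
    assert_eq!(Rc::strong_count(&a), 2);
    drop(a); // leaked: reference counting alone cannot reclaim cycles
}
```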
Why should any of what you talk about be the responsibility of the language, as opposed to a linter, an annotation library and static analyzer, a library, or clean design? Remember the Unix philosophy.
So you have linked me to a list of security vulnerabilities - are you suggesting all security vulnerabilities need to be addressed by the language spec and the compiler? I doubt such a wide and diverse range of problems can be addressed at such a low level.
At a glance, most of those look like bounds checking problems. tcc (Tiny C Compiler) allows you to compile in bounds-checking with all pointer dereferences.
C++ STL implementations generally have bounds checking all over the place - this requires compiling in debug mode IIRC, but it is evidence that safety is possible without language or compiler support.
Lisp-style meta-programming can get you basically anything you want from a language without language or compiler support.
> So you have linked me to a list of security vulnerabilities - are you suggesting all security vulnerabilities need to be addressed by the language spec and the compiler?
These vulnerabilities are typically in code implemented in C or C++.
I am suggesting that if you want to have a secure system, they have to be addressed by the language; if you are happy with systems that have exploitable memory management bugs, there are lots of existing UNIX variants to choose from.
> I doubt such a wide and diverse range of problems can be addressed at such a low level.
May I suggest reading about some low-level languages that have been used in production and address these problems with varying degrees of success, for example ESPOL (from the 1960s), Mesa/Cedar, Modula-2, Ada, or Rust.
> tcc (Tiny C Compiler) allows you to compile in bounds-checking with all pointer dereferences.
This sounds impossible: there's not enough information in the C type system to know in the general case what the bounds are.
> C++ STL implementations generally have bounds checking all over the place
This wasn't true last I checked for libc++ (the most "modern" implementation), and isn't applicable for any non-STL container in a program anyway; nothing prevents using a plain C array.
> this requires compiling in debug mode IIRC
I'm not aware of anybody who enables this in production because the culture of C++ is all about performance.
> but it is evidence of the possibilities of safety without language or compiler support
Language or compiler support would reduce the performance overhead, because it makes it easier for the compiler to elide checks that can be statically proven to always succeed.
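(As a sketch of that point, assuming typical optimizer behavior in Rust: slice iteration is in-bounds by construction, while plain indexing relies on the compiler proving each check away.)

```rust
// Indexed loop: each xs[i] is bounds-checked unless the optimizer can
// prove i < xs.len() for every iteration (here it usually can).
fn sum_indexed(xs: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..xs.len() {
        total += xs[i];
    }
    total
}

// Iterator loop: accesses are in-bounds by construction, so there is
// no per-element check to elide in the first place.
fn sum_iter(xs: &[u64]) -> u64 {
    xs.iter().sum()
}
```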
> Lisp-style meta-programming can get you basically anything you want from a language without language or compiler support.
Meta-programming cannot remove features from a language, which is typically the problem here.
> This sounds impossible: there's not enough information in the C type system to know in the general case what the bounds are.
Good point. My thought was that the runtime could keep track of all the mallocs and then have each dereference check whether the memory accessed falls in a valid region of the stack or the heap. But this doesn't sound workable: you'd have to resolve every single dereference against a list of ranges in memory (`O(log n)` for every `foo[bar]`, I would imagine).
Perhaps every pointer could be implemented not just as a raw pointer but as one with a "valid" range attached, indicating how many bytes before and after the pointer are part of the original block the pointer was calculated as an offset from. Every dereference would check that it falls within the range, which is only O(1) per `foo[bar]`.
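(A rough sketch of that fat-pointer idea, written as Rust for readability even though the proposal is about a C implementation; the names are made up for illustration.)

```rust
// Hypothetical checked pointer: the raw address plus the bounds of the
// allocation it was derived from.
struct CheckedPtr {
    addr: usize, // current pointer value
    base: usize, // start of the original block
    len: usize,  // size of the original block in bytes
}

impl CheckedPtr {
    // Pointer arithmetic moves `addr`; the bounds travel with it.
    fn offset(&self, bytes: isize) -> CheckedPtr {
        CheckedPtr {
            addr: self.addr.wrapping_add_signed(bytes),
            base: self.base,
            len: self.len,
        }
    }

    // The O(1) check performed on every dereference: does the access
    // stay inside the original block?
    fn in_bounds(&self, access_size: usize) -> bool {
        self.addr >= self.base && self.addr + access_size <= self.base + self.len
    }
}
```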
> This wasn't true last I checked for libc++ (the most "modern" implementation)
> ...
> I'm not aware of anybody who enables this in production because the culture of C++ is all about performance.
This conversation is happening in the context of what is possible without language (and maybe without even compiler) support. The original comment I replied to was saying neut had fatal flaws because it didn't address borrowing and lacked some features that Rust has. The fact that some people refuse to use C++ in certain ways is interesting but doesn't detract from my original point.
> Meta-programming cannot remove features from a language, which is typically the problem here.
The simplest kinds of Lisp macros only add features, but it is possible to create a new kind of "top-level context" (for want of a better term). For this to work, though, your macro system has to be aware of all the primitives in your Lisp dialect. There is a term for this that I can't recall at the moment.
> Perhaps every pointer could be implemented not only as a raw pointer but one with a "valid" range attached
The main problem with this is that it's incompatible with every existing system C ABI.
There's also the problem of real-world C code converting pointers to integers and back again, but the compiler could define uintptr_t and intptr_t accordingly and code that uses other integer types is broken anyway.
> it is possible to create a new kind of "top-level context"
I'm not familiar with that, it sounds like quite some effort but I'll grant you that likely it can achieve what you claim.
I'm pretty sure you're right on all counts; I'm not even sure that shared immutable borrows would work under the very simple transform given in the README.
Edit: Also, the user-facing language doesn't seem to have linearity at all, i.e. all values may be cloned. But linearity is something that people use to guarantee certain invariants all the time in Rust (and ATS, I've heard), so this seems like a misstep. It also means that memory allocation is implicit, which makes things less predictable for users.
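(In Rust terms, the invariant being described looks roughly like this: a non-Copy value moves, so duplicating it, and the allocation behind the duplicate, must be written explicitly.)

```rust
fn consume(s: String) -> usize {
    s.len()
}

fn main() {
    let s = String::from("hello");
    let a = consume(s);
    // let b = consume(s); // error[E0382]: `s` was moved above

    // Duplication must be requested explicitly, so the extra heap
    // allocation is visible at the call site.
    let t = String::from("world");
    let b = consume(t.clone()); // explicit copy, explicit allocation
    let c = consume(t);         // the original is consumed here
    assert_eq!(a + b + c, 15);
}
```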
yes, i found the first example quite surprising. there doesn't seem to be any request to copy the string; it is simply, unintuitively, copied/produced three times. i would prefer the language have some kind of `dup` or something, which gives you an additional reference to the object.
as it stands, it has transferred allocation from the runtime to the compiler. but the objective is to put it into the hands of the developer, since only then is it visible when reading and writing the code.
i begin to suspect that a typed joy may be more practical (or rather: match my desired improvements) than clarifying allocations in haskell.