The attribute language seems a little clunky in its current form (although apparently it's an upcoming C2X standard). The idea of separation logic applied to these problems is promising though.
If you want something you can use with C code today, that has had many applications and integrates a little better IMO, then see Frama-C, the open source platform for extensible C verification: https://frama-c.com/
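For a flavor of how it integrates: Frama-C's ACSL contracts live in structured comments, so annotated code still compiles with any ordinary C compiler. A toy sketch (the function and names are invented here, and actually discharging the proof would still require running Frama-C's WP plugin):

    #include <stddef.h>

    /*@ requires n > 0;
        requires \valid_read(a + (0 .. n-1));
        assigns \nothing;
        ensures \forall integer k; 0 <= k < n ==> a[k] <= \result;
    */
    int max_of(const int *a, size_t n) {
        int m = a[0];
        /*@ loop invariant 1 <= i <= n;
            loop invariant \forall integer k; 0 <= k < i ==> a[k] <= m;
            loop assigns i, m;
            loop variant n - i;
        */
        for (size_t i = 1; i < n; i++)
            if (a[i] > m)
                m = a[i];
        return m;
    }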
> Given the central role that C continues to play in systems software, and the difficulty of writing safe and correct C code, it remains a grand challenge to develop effective formal methods for verifying C programs.
One of the reasons C likely continues to play a large role in systems programming, despite the defects regularly highlighted by researchers, is that new language designers have largely given up on the concept of header files and forward declarations.
When a standards organization wants to work with hardware vendors to define a cross-platform interface in terms of human-readable symbols rather than explicit values, with the possibility of many competing private translations, C is a natural choice because header files provide a necessary bridge or abstraction.
It would be interesting if a 'Better C' language were also a 'Better H' language and provided a literate interface-declaration format without macros or inline functions. Eliminating support for macros and inline functions from headers, and making inlinability a private implementation detail, should not in theory prevent future compilers from inlining implementation code from multiple sources into a single unit as a performance optimization.
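To make that concrete, an interface-only header in the spirit described above would contain nothing but constants, an opaque type, and prototypes. All names below are invented for illustration:

    /* device_ctl.h -- hypothetical interface-only header: human-readable
       symbols for a vendor-defined interface, no macros, no inline functions */
    #ifndef DEVICE_CTL_H
    #define DEVICE_CTL_H

    #include <stdint.h>

    enum dc_status { DC_OK = 0, DC_BUSY = 1, DC_FAULT = 2 };

    struct dc_device;  /* opaque: each vendor supplies its own definition */

    struct dc_device *dc_open(uint32_t port);
    enum dc_status    dc_reset(struct dc_device *dev);
    void              dc_close(struct dc_device *dev);

    #endif /* DEVICE_CTL_H */

Whether the code behind dc_open and friends gets inlined across translation units could then be left to link-time optimization rather than exposed in the header.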
This is a bizarre take on the popularity of C. It completely ignores the notions of existing ecosystems (compilers for every platform, kernel code, immense number of APIs) and programmer population (C is taught in every uni).
Instead it attributes C's popularity to the language's ability to separate interface from implementation via header files, and claims no other language can achieve that.
It also overlooks the downsides of header files (like unnecessary recompilation).
Upon some reflection, I could add to the list of reasons for the popularity of C:
* Interop with every language.
* Existing codebases.
* Years of virtual platform exclusivity in embedded systems and OSes.
* Momentum (people like to get into things that are popular).
I think the fundamental reason is that there aren't that many alternatives. Of course, if I were to write application software of any kind I wouldn't choose C, but for things like FFI/interop or embedded, anything that isn't C, C++ or Rust is a complete non-starter, and even C++ and Rust are often a stretch.
> How is C++ a stretch? I don’t think there is any domain left where C would be applicable but C++ would not.
Reusable code. Your library written in C++ can't be called as easily from Python, PHP, Perl, C, Pascal, Rust, Go or any of the other languages people like to use.
Just do a search for uses of libffi and you'll get a pretty good idea why C is still more popular than C++.
That's not true. C++ code can be called from foreign languages just as easily as C; you just have to provide a C-like interface and mark it as extern "C".
Sure, but that means I have to provide the interface myself. One of the main advantages of C++ over C is that you don't have to pass around pointers to structs, so you're stuck between using C++ as basically C and duplicating every object the API needs to expose.
Obviously you can make it work, and it probably makes a lot of sense for larger libraries where the advantages of working with C++ really pull their weight, but regardless, FFI is still one of the (admittedly fewer and fewer) niches where C shines.
> you're stuck between using C++ as basically C and duplicating every object the API needs to expose
Yes, you need to provide the C-like interface at the API boundary, but internally you can just use normal C++ for everything else. Sure, that C-like interface will not be as nice and comfortable as a C++ interface would have been, but then again, neither would it be if you used C instead of C++.
> FFI is still one of the (admittedly fewer and fewer) niches where C shines.
No, it’s C linkage that shines, and C++ can use C linkage just fine.
> That's not true. C++ code can be called from foreign languages just as easily as C; you just have to provide a C-like interface and mark it as extern "C".
And ensure that you catch every exception and turn it into an error code?
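For what it's worth, the usual shape of such a boundary looks something like the sketch below: an opaque handle, integer status codes, and a catch-all so no exception unwinds into a C caller. (Parser and all function names here are invented for illustration.)

    // C++ side: the class being exposed (toy stand-in).
    #include <stdexcept>

    class Parser {
    public:
        explicit Parser(const char *path) {
            if (!path) throw std::invalid_argument("null path");
        }
        int item_count() const { return count_; }
    private:
        int count_ = 0;
    };

    // C-callable boundary: opaque handle, error codes, no exceptions escape.
    extern "C" {

    typedef struct parser_t parser_t;  // opaque to C callers

    parser_t *parser_open(const char *path) {
        try {
            return reinterpret_cast<parser_t *>(new Parser(path));
        } catch (...) {                // includes std::bad_alloc
            return nullptr;
        }
    }

    int parser_item_count(parser_t *p, int *out) {
        try {
            *out = reinterpret_cast<Parser *>(p)->item_count();
            return 0;                  // success
        } catch (...) {
            return -1;                 // exception translated to error code
        }
    }

    void parser_close(parser_t *p) {
        delete reinterpret_cast<Parser *>(p);
    }

    }  // extern "C"

Every method the API exposes ends up duplicated this way, which is exactly the overhead the parent comments are describing.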
But even C doesn't really have header files from a language viewpoint. The compiler doesn't make a distinction between code coming from a .h file and code coming from a .c file. Heck, I can't name a single language off-hand that has an explicit header-file-like mechanism.
Edit: I suppose JavaScript + WebIDL would fit? Although nobody uses them, except browsers themselves.
Ada is one. It splits the interface (specification) and implementation (body) into separate files (.ads and .adb). This is also unlike C, which just does a "copy-and-paste" wherever the #include occurs (see the sketch below).
This also makes its formal verification (SPARK dialect) nice, because you can add pre- and post-conditions to the interface without cluttering up the code body.
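A quick way to see the "copy-and-paste" point about C: #include is purely textual, so a header can legally land anywhere, even inside an initializer. A contrived sketch with invented file names:

    /* primes.h -- not declarations, just a comma-separated list */
    2, 3, 5, 7, 11

    /* main.c -- the preprocessor pastes the list in verbatim */
    static const int primes[] = {
    #include "primes.h"
    };

An Ada compiler, by contrast, reads the .ads specification as a real language construct, not as pasted text.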
The annotations look pretty complex. It’s a whole other type language grafted on to C, and at least to me, even with the explanation in the introduction, very difficult to get an intuition of.
It reminds me of Google’s Closure Compiler-annotated JavaScript, where the ES4 type system was encoded in comments around the code. That was pretty frustrating to use, and switching to TypeScript, where the types are actually part of the language, was a significant improvement.
I suppose the benefit here is that you can annotate existing C code without rewriting it in a new language. Although I didn’t read enough to find out how well the refinement language can describe existing C coding patterns, which may not be provably safe. Similar to how type-safe JavaScript has to eschew certain control flows and metaprogramming, or null-safe Java has to conform to a certain style of programming to satisfy the nullness checker.
> The annotations look pretty complex. It’s a whole other type language grafted on to C, and at least to me, even with the explanation in the introduction, very difficult to get an intuition of.
I salute the idea of fixing C's shortcomings instead of starting over from scratch (Rust, Zig), but clearly this mix of large functional annotations on top of procedural code adds some burden and inelegance.
The automated formal code proving will appeal to security researchers, but I doubt most developers will want to go through that.
As a developer, I would rather use checked-C, since it's only a thin overlay adding a little syntactic sugar on top of C99. https://github.com/microsoft/checkedc
However, unlike RefinedC, it requires a new compiler and headers...
What I find even more perplexing is that they decided to add new syntax for these annotations, but then stuffed them into strings and filled them with non-ASCII symbols:
[[rc::constraints("{s = {[n]} ⊎ tail}", "{∀ k, k ∈ tail → n ≤ k}")]]
APL meets GCC inline assembly syntax. A dream come true.
At this point I wonder if it would be much more trouble to just port the code to Rust.
I didn't realize that this was Coq; that makes more sense.
Still, if this is meant to be used by C coders, it might be worth stooping to our level: many of us won't be comfortable with this mathematical notation. I know I'm not, and I'm not the most mathematically illiterate in general.
When you say "may not be provably safe", do you mean 1) it is safe but hard to prove, or 2) it is not safe? RefinedC is sound, so it can't handle 2. For 1, while it focuses on automation, it has an escape hatch with which you can use the full prover, so it should be able to handle anything that is actually provable.
I meant 1). Drawing by example from TypeScript and Java nullness checkers, certain programming idioms are fine in practice and programmers can reason about them, but they're difficult to express in a provable way. Sometimes it's limitations in the tools; other times it's actually unsafe but safe in practice due to an invariant maintained elsewhere but not actually expressed in the code under consideration.
OK. That wasn't clear at all from the description of "foundational" in the article, which instead discussed the "trusted computing base" of the provers.
It's a small thing, but I wish people and companies would give downloads (like PDFs) sensible names. Paper.pdf (or statement.pdf from a bank) is so generic and won't help me find it later.
This looks interesting, and offloading the actual proving to Coq is a great idea. That way they don't have to spend the time writing their own prover, verifying it, and convincing others that it is correct.
A sibling commented that the annotations look complex, but what's being described is complex. Even if this were made a core part of a language, much of that complexity probably couldn't be elided.