The real benefit of static analysis is non-local. It automatically catches that A, which has some entry condition, is being called correctly by B, way over in another file maintained by someone else. This is why the borrow checker in Rust, which is a form of static analysis, is so valuable. It saves you from global ambiguity over who owns what.
Compare linters and style checkers, which merely enforce local no-nos. The value in the more elaborate tools is automatically catching errors where each piece of code looks fine in isolation. Proof-of-correctness work tends to focus on local issues, because that's its academic history. But that's not where it's really valuable.
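A hedged sketch in Python (the file names and the asserts-as-contracts are invented for illustration): each function looks fine in isolation, and only a cross-module analysis connects B's call site to A's entry condition.

    # account.py -- this is "A", with an entry condition
    def withdraw(balance: int, amount: int) -> int:
        assert 0 < amount <= balance  # the entry condition
        return balance - amount

    # cleanup.py -- this is "B", another file, another maintainer
    def close_account(balance: int) -> int:
        # looks fine locally, but violates withdraw's entry
        # condition whenever balance == 0 -- the kind of cross-file
        # mistake a non-local checker can flag without running anything
        return withdraw(balance, balance)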
That's not what static analysis (which includes things like type-checking) means, and that's not something you can do with general-purpose code, for reasons related to Rice's theorem (every non-trivial semantic property of programs is undecidable in general).
Type-checking is one of the classical examples of abstract interpretation — it's mentioned in the original Cousot & Cousot paper (1977). Maybe you're thinking of something else?
Static analysis is more than just abstract interpretation, and the OP doesn't discuss abstract interpretation at all, but talks about a design by contract approach inspired by Ada/SPARK.
Z3 is a constraint solver (SMT/SAT), which makes total sense to use in a design by contract system, but AFAIK there is zero link to abstract interpretation (which, from what I understand of it, is way harder to use).
How else are you going to check the contracts at compile time?
I honestly don't know if that's how DrNim works or if there are other ways to achieve that.
But a similar tool for Python does it exactly this way:
https://github.com/pschanely/CrossHair
"CrossHair works by repeatedly calling your functions with symbolic inputs. It uses an SMT solver (a kind of theorem prover) to explore viable execution paths and find counterexamples for you."
...so that's where I got the idea that DrNim is probably doing the same.
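For reference, CrossHair's PEP 316 docstring dialect looks roughly like this (adapted from its docs; run via its `crosshair check` command, which then hunts for counterexamples with the SMT solver):

    from typing import List

    def average(numbers: List[float]) -> float:
        """
        pre: len(numbers) > 0
        post: min(numbers) <= __return__ <= max(numbers)
        """
        return sum(numbers) / len(numbers)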
I have no good reason to know about "full formal chains," and yet attempting to read the article still managed to grind my programmer ego into the dirt. :) Major respect to people who operate in that world.
Related to Nim in general: I'm excited to use Nim more. Prior to finding Nim, I had this list of languages to explore in more depth: Python, Ruby, Lisp, Pascal (though I still plan to explore those). Oh, and find a good scripting language, in addition to one from the compiled language group. Oh, and avoid anything too "enterprise". So if you know anything about Nim, you know why this feels pretty amazing.
> Python, Ruby, Lisp, Pascal (though I still plan to explore those). Oh, and find a good scripting language, in addition to one from the compiled language group
It's interesting that you want to explore Python and Ruby more but seem to consider finding a good scripting language as a separate goal. They're both definitely scripting languages, so if you don't expect either to be good, why do you want to explore them more?
Sorry for the confusion; the part of the comment I was responding to was this:
> I had this list of languages to explore in more depth: Python, Ruby, Lisp, Pascal (though I still plan to explore those). Oh, and find a good scripting language, in addition to one from the compiled language group.
The phrasing of "oh, and find a good scripting language" makes it sound like that's a completely separate goal from exploring Python, Ruby, etc. in depth. It reads to me like you plan to explore those four languages, then afterwards go find a good scripting language. That wouldn't be necessary if you find either Python or Ruby to be a good scripting language when you explore them, which implies you don't expect to have found either of them to be good. Given that, I was curious about your motivation to explore those languages if you didn't expect them to be good.
Based on your confusion, it seems like you weren't intending these to be read as separate goals. I'm still not sure why you'd explicitly call out "finding a good scripting language" after mentioning you wanted to explore Python and Ruby, but my initial question appears not to be relevant.
I used to deploy and support enterprise software, which I chose from a contingency point of view: More users, more important users, and thus a lesser chance of being "wrong" about the choice of software. I had some good experiences, some bad.
Then I tried non-enterprise software. I picked out some lightweight tools and platforms and swore I'd drive them into the ground and get to the same goals, just to see what I could do without the enterprise overhead components. This experience practically changed my life--lightweight has been _the_ way to go for me ever since, whether by hook or by crook.
You might consider Elixir as a scripting/systems language as well. LiveView is lightweight but allows amazingly quick web dev. Nim incorporates some Pascal concepts too. There's also a nice package for writing native functions for Elixir in Nim [1].
Not yet released, but the DrNim documentation is available online [1]. DrNim is the tool that will provide static analysis for Nim (based on Z3). It will probably be presented at the upcoming Nim Online conference on June 20 [2].
I can't understand how people use languages without static analysis. The only edge case I can think of is using a scripting language to write a script of less than 1k lines. Are there any pros to languages without static analysis other than that they allow "rapid development/prototyping"?
So you don't know any successful projects in JavaScript, Clojure, Erlang, Elixir, Groovy, Julia, R, etc. etc. etc.? ;)
There are well-known advantages and repeating them here will incite another flame-war.
I can only speak from my 35 years of coding experience, spending most of that time in the confines of static typing:
Most problems I solve today are not good fits for static types. They cause major headaches down the road as my customers' needs change and as I get a better understanding of the problem domain.
> Most problems I solve today are not good fits for static types.
This would seem to involve a widespread misconception about what static types are really about. Languages that "don't have static types" are not inherently more flexible than languages that do expose them; one can always translate a "dynamic" program structure to a statically-typed language in a way that preserves arbitrary flexibility in refactoring, prototyping and the like. Many languages even provide a standard `Dynamic` type for this very purpose.
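A minimal sketch of that in gradually-typed Python, where `typing.Any` plays the role of such a `Dynamic` type (the names here are made up): the flexible part opts out of checking while the rest of the program stays statically checked.

    from typing import Any

    def handle(event: Any) -> None:
        # `Any` opts this value out of static checking, so the body
        # keeps the full flexibility of a "dynamic" language
        event.dispatch(event.payload)

    def notify(user_id: int, event: Any) -> None:
        # ...while the rest of the signature is still checked:
        # notify("42", e) is rejected at analysis time
        handle(event)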
The problems solved by static analysis are not the big problems, and when it's useful to solve some of them, solving them in a less-than-complete way is good enough. (Even languages like Java implicitly align with this by allowing runtime reflection with strings.*)
You might just have to try it in order to get it. Write 10k lines, then reflect on what your pain points were. I recommend giving Common Lisp a spin: its popular free implementation SBCL even has rudimentary, incomplete "static" analysis ('compile is a function available at runtime), useful for catching typos, telling you your types are off, or telling you that with a bit of type hinting in places it could compile your code to much faster assembly instructions (which you can verify immediately by calling the built-in function 'disassemble). After that you might understand a bit, even if you still don't like it. Meanwhile, multi-million line projects exist regardless.
* And to continue a bit more in the article's vein of static analysis going beyond static types: there are many cases where we'd like to specify a function precondition that is too complex to prove holds in the real program. We don't lament, though. And if it's important, we check it at runtime. It's risky business, after all, running code without checks just because some external source approved it -- the external source could have made a mistake, or unexpected runtime conditions (many stemming from programs having to run on physical hardware) could happen.
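For instance (a sketch with plain Python asserts; `binary_search` is just an illustrative function): sortedness is a precondition few tools will prove across a real program, but it's cheap to state and check at runtime.

    def binary_search(xs: list, target: int) -> int:
        # precondition too complex to prove statically in most real
        # programs, so check it at runtime (O(n); gate it behind a
        # debug flag if that's too costly)
        assert all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1)), \
            "input must be sorted"
        lo, hi = 0, len(xs)
        while lo < hi:
            mid = (lo + hi) // 2
            if xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid
        return lo if lo < len(xs) and xs[lo] == target else -1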
Sometimes you're still required to solve them as far as you can. The problems addressed by systems like TLA+ or ACL2 can also be worth solving, and those are both free tools (ignoring engineer time). The absolute cost of solving a problem isn't the best measure of how "big" it is.
It's a good reminder, though, in reference to the GP, that there's a richness of methods beyond what one might expect a programming language by itself to handle -- can you imagine someone proclaiming "I can't understand how people use languages without [input from my multi-million dollar] static analysis [tool]"?
There's some argument in favor of clarity I think.
Statically typed languages often require a lot of additional type annotation and more awkward logical structures that interfere with readability. Obviously there's a counterweight: not having explicit types creates its own set of problems. If type inference were perfect and languages supported things like union types seamlessly, I think it would be possible to get close. But I've used languages with quite good inference (Kotlin), and it's still regularly a battle with the compiler to get it to believe something is "safe" that can only ever be safe, yet I have to add extraneous syntax all over my code just to "convince" it.
I quite like languages with incremental typing for this reason: you can create a statically defined interface and data structures, but if you already know the constraints are satisfied, you don't need to keep reassuring the compiler inside the function. These would fall into your definition of languages "with static analysis", I imagine.
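A rough sketch of that incremental style in Python with a checker like mypy (the functions are invented): declare the interface, then assert once rather than restructuring the body to convince the checker.

    from typing import Optional

    def find_user(name: str) -> Optional[dict]:
        # hypothetical lookup, stubbed for the sketch
        return {"name": name} if name else None

    def greeting(name: str) -> str:
        user = find_user(name)
        # the call site guarantees a hit; one assert narrows
        # Optional[dict] to dict for the checker, with no further
        # reassurance needed inside the body
        assert user is not None
        return "hello, " + user["name"]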
assert() is pretty much the industry standard for invariants, right? They're usually evaluated at runtime, but it should be perfectly possible for a static analysis tool to "evaluate" them statically.
assert() by itself is typically not the same as "defining an invariant". An invariant is an expression which should be true for the whole life-time of the block/function/program/anything, but assert() only defines an assertion at a specific point.
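To make the distinction concrete (plain Python; `CAPACITY` and `push` are invented names): each assert() covers one program point, while a declared invariant would have to hold everywhere in between.

    CAPACITY = 100

    def push(stack: list, item: object) -> None:
        assert len(stack) < CAPACITY    # checked here, on entry...
        stack.append(item)
        assert len(stack) <= CAPACITY   # ...and here, on exit
        # an invariant declaration (len(stack) <= CAPACITY) would
        # cover the stack's whole lifetime, not just the points
        # where someone remembered to write an assert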