I feel like this is the direction Julia is going in. It's in this interesting space between statically and dynamically typed languages - technically it's dynamically typed, but you get a lot of the safety and performance guarantees afforded by static type systems through Julia's aggressive type inference [1]. As the type system gets richer, it should only get better on this front.
If you haven't checked out Julia yet, it's a beautiful language. There's a reason why Graydon (the guy who made the very early versions of Rust) likens it to a Goldilocks language [2].
The difference between Julia and Big Bang is that Big Bang will check your types at compile time. You'll know before any of your code executes whether or not it will yield any type errors.
I don't know about Julia or Big Bang, but in Haskell your values conforming to types for serialization / deserialization would typically be checked once when parsing, throwing an error (or using a Maybe type) if anything is off.
Yes, that's the standard approach, but I was wondering how that meshes with not having to declare types. Maybe the answer is just that that's the one place where you have to - or that it's disallowed - but I was curious.
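For what it's worth, here's a minimal Nim sketch of that parse-once idea (`Config` and its fields are invented names): the deserialization boundary is the one place a type gets written down, and a mismatch raises at parse time.

  import std/json

  type Config = object
    host: string
    port: int

  # the single explicit type annotation lives at the parse boundary;
  # `to` raises at parse time if the JSON doesn't match the declared shape
  let cfg = parseJson("""{"host": "example.com", "port": 8080}""").to(Config)
  echo cfg.host, ":", cfg.port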
Chapel Programming Language? By which I mean, Chapel seems like a great project to throw support behind to add these feature sets. It is already very capable, has a good amount of resources and a roadmap, and seems like it would lend itself to this.
Not quite - polymorphism is a bit iffy in Go (there are interfaces, and it's doable but not great), and type inference is extremely limited, partly due to the weak type system (compare ML-family languages, for example). Global type inference is out of the picture without generic/polymorphic typing.
I don't think Go achieves the look and feel of working with Python and the like.
Yes, one of the design goals was to blur the gap between dynamically and statically typed languages by creating a language that is safe to run and fast to compile, read and write.
When you use the `auto` type, the compiler will infer the type automatically from the context of the proc invocation or from the proc body. The inferred types can be different for the different parameters. So you can have a proc declaration that looks like:
proc inferTypes(a, b: auto) # `a` & `b` can accept different argument types
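For example, with a minimal made-up body, each parameter's type is inferred independently at every call site:

  proc inferTypes(a, b: auto) =
    echo a, " ", b

  inferTypes(1, "one")   # a: int, b: string
  inferTypes(2.0, true)  # a: float, b: bool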
Within a proc, you can also create bindings & variables without needing to specify a type:
let iBinding = 1     # inferred as int
var fVariable = 1.2  # inferred as float
The following typeless code will compile & run without any complaints or problems:
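  # illustrative snippet - every type below is inferred, none declared
  proc combine(a, b: auto): auto = $a & " & " & $b

  let n = 42             # int
  var x = 3.14           # float
  let s = combine(n, x)  # string
  echo s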
How does Nim hold up if you put "auto" everywhere you possibly can in a non-trivial program? (Tone note: Straight question. I have no idea and am honestly interested.)
Context: Type inference is hard. Hindley-Milner is famous for making it possible, but the farther you stray from it, the harder it gets to do without human-added annotations. I hope the authors of "Big Bang" are intimately familiar with the issues involved, or they're going to be in trouble. If they aren't, they ought to correct that.
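To make the distinction concrete with a Nim-flavoured sketch (`double` is just a made-up name): Hindley-Milner infers a whole signature from the body alone, while Nim's inference is local, so parameters always need at least an `auto` annotation.

  # proc double(x) = x * 2            # won't compile: Nim requires a parameter type
  proc double(x: auto): auto = x * 2  # ok: `auto` makes the proc generic
  echo double(21)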
I'd also suggest that the Haskell community would be happy to share their experiences on that front if asked, and unless the authors are already experts, they really, really should ask. Only good things can come from asking.
(The majority of my professional & personal programming has been along the Shell-Python-C-C++ axis. My preference for static types increases approximately logarithmically with the size of the program; historically, I preferred Python for most quick scripting needs, but for larger programs, I was glad of static types, so I would switch to C++.)
Now, Nim has replaced C++ for me completely (and also expanded downwards into the upper end of Python's territory). The work I'm currently doing in Nim is well into the "I prefer static types" area of the spectrum.
Maybe some of the Nim core devs would have more experience with this situation.
IIRC, several of the Nim core devs are familiar with Haskell, and consider it to be one of the reference languages guiding aspects of the Nim language design.
You wouldn't really do that, but Nim can do basic type inference, so to initialize string and numeric variables you don't need to explicitly declare the type. https://nim-by-example.github.io/variables/
I believe the type inference extends to a few other situations as well. Essentially it's a typed language, but it doesn't make you spell things out when they're obvious.
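A few lines in the spirit of that page, with the inferred types noted in comments:

  let language = "Nim"  # string
  var count = 0         # int
  count += 1
  echo language, " ", count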
[1] https://stackoverflow.com/questions/28078089/is-julia-dynami...
[2] https://graydon2.dreamwidth.org/189377.html