> While not perfect, given its copy-paste compatibility with C, you can opt in to bounds checking by using the STL data types, or similar.
I've heard of abuses of templates to add something that resembles proper ranged types, but it is not a path I'd want to go down.
Partly because C++ templates are no substitute for a proper type system, even if they are a tool that can be used to force a type system into the language.
A proper type system lets you do things like add centimeters and meters together, divide by seconds, and pop out a proper unit and ensure no overflow happens.
A proper type system lets you easily, without the visual noise or compile time hit that templates have, add bounds checking wherever it is appropriate.
type Age is range 0 .. 130;
type Day_Of_Week is (Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday);
subtype Working_Day is Day_Of_Week range Monday .. Friday;
Take a look at https://en.wikibooks.org/wiki/Ada_Programming/Type_System#De... For all the flak it gets, Ada did this right long ago. And a huge part of all this is compile time only, and you can opt in to or out of the runtime checks as performance dictates.
I'd kill for a proper "milliseconds" type everywhere, one that can only have other milliseconds added to it. And then double the fun with a "seconds" type, and a language that keeps track of which is which for me. Why the heck am I writing timeInMs or timeInSeconds still?
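As a sketch of what this could look like in a language without Ada's ranged types, here is the newtype pattern in Rust; the `Millis` and `Seconds` names are hypothetical, not from any library:

```rust
// Sketch only: hypothetical Millis/Seconds newtypes, not a real library.
#[derive(Debug, PartialEq, Clone, Copy)]
struct Millis(u64);

#[derive(Debug, PartialEq, Clone, Copy)]
struct Seconds(u64);

// Only Millis + Millis is defined; mixing units is a compile error.
impl std::ops::Add for Millis {
    type Output = Millis;
    fn add(self, other: Millis) -> Millis {
        Millis(self.0 + other.0)
    }
}

// Conversion is explicit, so the compiler keeps track of which unit is which.
impl From<Seconds> for Millis {
    fn from(s: Seconds) -> Millis {
        Millis(s.0 * 1000)
    }
}

fn main() {
    let t = Millis(500) + Millis::from(Seconds(2));
    println!("{:?}", t); // Millis(2500)
    // `Millis(500) + Seconds(2)` would not compile: the units differ.
}
```

The downside compared to Ada is exactly the boilerplate above: every operator has to be spelled out per type.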
Subtyping integers should be a basic and primitive operation in any modern language.
I know most Wirth-based languages relatively well.
Your examples are actually possible even in Pascal.
type
  Age = 0 .. 130;
  Day_Of_Week = (Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday);
  Working_Day = Monday .. Friday;
However, in 1992, C++ was a viable path forward on the PC, whereas Ada was too expensive and Modula-2 compilers weren't getting the love they needed from OS vendors (https://freepages.modula2.org/compi.html), so C++ was the best alternative for moving from Object Pascal into cross-platform development.
So while C++ isn't perfect, its copy-paste compatibility with C was key to its adoption, but it is also the reason why C++ will never be Ada with regard to security, just as TypeScript or the JVM guest languages have to deal with the semantics of their underlying platform.
In any case, as far as opting in is concerned, C++ provides the options, and still using plain C in 1992 was a matter of being religiously against C++. RAII, proper string and array data structures with bounds checking, and reference parameters instead of pointers were already available.
Among the five surviving Ada vendors there is a free option, so anyone who cares about the security issues that C brings into C++ can also have a look at it.
In fact, GenodeOS, seL4 and the NVIDIA Autonomous Vehicle project are all using Ada/SPARK to some extent, as they considered investing in Ada a better option than adopting Rust, given its history in production.
> Why the heck am I writing timeInMs or timeInSeconds still?
Because "keeping track of types" is much more complicated than one would think at first. How many languages are there that can do something like (3A * 4V == 12VA), (12VA / 3A == 4V)? What if you want to use a generic routine that is just about integers? It will lose track of all the types. I claim that there is still no nice way to do this stuff except in scripting languages (dynamic types).
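For concreteness, the amps/volts example can be written ad hoc with newtypes; a Rust sketch (the `Amps`/`Volts`/`VoltAmps` names are hypothetical) also illustrates the complaint above, since every unit combination needs its own operator impl and a generic integer routine would indeed drop the tags:

```rust
// Sketch: ad-hoc unit newtypes, one operator impl per unit combination.
#[derive(Debug, PartialEq, Clone, Copy)]
struct Amps(f64);
#[derive(Debug, PartialEq, Clone, Copy)]
struct Volts(f64);
#[derive(Debug, PartialEq, Clone, Copy)]
struct VoltAmps(f64);

// A * V = VA
impl std::ops::Mul<Volts> for Amps {
    type Output = VoltAmps;
    fn mul(self, v: Volts) -> VoltAmps {
        VoltAmps(self.0 * v.0)
    }
}

// VA / A = V
impl std::ops::Div<Amps> for VoltAmps {
    type Output = Volts;
    fn div(self, a: Amps) -> Volts {
        Volts(self.0 / a.0)
    }
}

fn main() {
    let p = Amps(3.0) * Volts(4.0); // VoltAmps(12.0)
    let v = p / Amps(3.0);          // Volts(4.0)
    println!("{:?} {:?}", p, v);
}
```

A general solution, where arbitrary unit products and quotients fall out automatically, needs type-level arithmetic rather than this hand-written enumeration.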
One of the languages that goes a little further into types than mainstream languages while still staying somewhat close to the mainstream is Haskell. Have you ever tried to use it? You quickly get sucked down into a sea of language and type extensions that you have to use, and that are often mutually incompatible, most of them probably the result of a PhD thesis. And you never quite get what you want (if you want something slightly different, the house of cards tumbles and you have to pick a different set of extensions). That is unless you're very, very smart and have so much investment in Haskell that you know the current limits, so you're able to choose a model that is still within the bounds of what the language can represent.
For all of us mere mortals and practitioners, we use simple integers and get our job done.
For everyone still clinging to the idea of types, I strongly believe the type-system-versus-runtime-values dichotomy is a dead end. Sometimes you need dynamic computations with types - sometimes the types of expressions are known statically, but sometimes not. That's why most mainstream type systems keep it simple and provide escape hooks (like void pointers or RTTI).
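One concrete example of such an escape hook, sketched in Rust: `std::any::Any` erases the static type and recovers it with a run-time check, much like RTTI in C++.

```rust
use std::any::Any;

fn main() {
    // A heterogeneous list: the static types are erased behind `dyn Any`.
    let xs: Vec<Box<dyn Any>> = vec![Box::new(1i32), Box::new("hello")];

    // Getting the static type back is a run-time check, not a compile-time one.
    match xs[0].downcast_ref::<i32>() {
        Some(n) => println!("first element is an i32: {}", n),
        None => println!("first element is something else"),
    }
}
```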
> You get quickly sucked down into a sea of language and type extensions that you have to use, and that are often mutually incompatible
"Often mutually incompatible" suggests that you should be able to name, say, ten such pairs of extensions. Can you? (I've used Haskell professionally for nearly eight years and I can't name one such pair. I could probably find a few marginal cases if I spent 30 mins, but they don't come up in practice.)
> most of them probably the result of a PhD thesis
Can you name any extensions that are the result of "just" a PhD thesis (as opposed to a peer reviewed academic paper published at a top conference)?
> if you want something slightly different, the house of cards tumbles and you have to pick a different set of extensions
Can you name any examples where a card house tumbled and someone had to pick a different set of extensions?
It's been a long time since I tried to do anything in Haskell, but I had to use extensions such as GADTs, type families, existential quantification, kind signatures, multi-parameter type classes...
I can't name any that are mutually incompatible because I simply don't remember what all of those were about anymore, but I remember there was something (I'll take back the "often"), and I'll bet that some of those I named above are now deprecated or superseded by others.
> Can you name any extensions that are the result of "just" a PhD thesis (as opposed to a peer reviewed academic paper published at a top conference)?
Well, no, I can't, since I don't follow it anymore, and if I understand correctly you are making my point by stating that a PhD thesis is not yet the level at which one makes language extensions. I know someone who's made significant contributions to GHC, also as part of their PhD work, though.
My actual point was simply that modelling more than basic ideas (like the physical layout of data) with static types is a bad idea. You quickly get sucked into places where you need to express ever more complicated typing deductions, and you end up resorting to unergonomic stuff like templating, dynamic types and various RTTI approaches, and all kinds of language extensions that are really hard to grasp or that are still in the making. As a guy of modest intelligence, I feel like a dog chasing its tail (and I assume that this is how the super-smart guys out there feel, too, a lot of the time).
> Can you name any examples where a card house tumbled and someone had to pick a different set of extensions?
Really anything where you're trying to represent a concept with compile-time language primitives (which is what Haskell folks are about a lot of the time - and take pride in). I'm convinced you know this situation yourself: you start off modelling something statically (in the type language), and then an additional requirement comes in, and you don't know how to represent it with compile-time language features anymore, so you end up scrapping everything and rebuilding it with a more modest mindset, with most of the state held in run-time values - maybe a few more run-time case switches or asserts that invalid situations can't happen, but overall simpler and easier-to-understand code.
> I'll bet that some of those that I named above are now deprecated or superseded by others.
All of them are in common use.
> if I understand correctly you are making my point by stating that a PhD thesis is not the level where you already make language extensions
I'm not entirely sure what you're trying to say. I thought you were trying to say that many things that people depend on in Haskell are merely the result of a PhD thesis, i.e. perhaps a messy body of work designed by an inexperienced person. On the contrary, language extensions are well designed and thoroughly thought through.
On the other hand, sometimes people get the idea that Haskell can encode everything in types. It can't. They get stuck and get into messes. Nonetheless the ability to encode some things into types makes Haskell by far the most practical language I've used.
> On the other hand, sometimes people get the idea that Haskell can encode everything in types. It can't. They get stuck and get into messes. Nonetheless the ability to encode some things into types makes Haskell by far the most practical language I've used.
It's not even that; I'll assume that the majority of Haskell novices are aware that there are real limits to the expressiveness of most type systems (including Haskell's).
The real problem IMHO is the dichotomy that this approach entails. You have to decide which language to use to model a concept (type language or run time language), and if the idea of what you want to do changes a little, chances are you can't continue on your current path.
In other words, I think the type system approach prevents a lot of the fluidity that I want from a programming language as a tool to get things done.
Personally my experience doesn't match that description. I find it's fairly reasonable to convert existing code between "run time type checking" and static types as necessary. I realise that it doesn't click for everyone immediately. Sorry you had a bad experience with Haskell. If you ever try again I'm happy to help. Feel free to drop me a line with any questions whatsoever. My contact details are in my HN profile.
> I'm not entirely sure what you're trying to say.
My understanding is that "because it took a whole PhD thesis to come up with, it must take a lot of effort to understand".
But I don't think that holds up. Much like the "if I had more time I'd have written a shorter letter", it can often take effort and skill to make a thing smaller and easier to understand.