And how do you know you've constricted your data enough? You don't, and you get bugs. You don't have a perfect specification of what you are building.
In the micro scale, in terms of variables, this is not the biggest problem that exists with programs. Enums don't somehow become garbage just because it's technically possible to overwrite them with garbage. The macro state of the entire system adds much more complexity and many more bugs. There are too many things you have to keep in mind, and systems are too big for a single person to understand it all. Pure functional code still has this macro state; it's just stored in the program counter and in the current state that gets threaded along with it.
If you break any programming task into fine enough steps it will feel like Lego. It's nothing special. If I need to print out the numbers 1 through 10, there is a single canonical way my whole team could come up with, independently of one another.
>I will sometimes go days or weeks without even running the code I'm writing because I just have confidence in it.
On a small scale this is possible, but once you start working on a larger team, or if you have to integrate your code with constantly changing systems, you will want to be testing your code.
>[1] It's still possible to write logic errors, but they're the only major class of bugs you get really.
You can still write security bugs and performance bugs. And logic bugs are a giant category, the one most bugs belong to.
> And how do you know you've constricted your data enough?
Types are composable. Sum-types and product-types allow the composition of smaller types into larger ones. They come in the form of discriminated-unions and records in FP languages. So when it comes to the question of how do I know when I've constricted my data enough, it's when I know all the components of all types have been constricted enough.
You don't have to design some massive schema for your entire application. You can absolutely do it as you go. Just create a type that represents the smallest set of values needed for whatever concept it represents, then compose those.
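A minimal sketch of what that composition can look like in Haskell (the type names here are hypothetical, just for illustration):

    -- ContactMethod is a sum type (a discriminated union); Customer is a
    -- product type (a record) composed from it and another small type.
    newtype Name = Name String deriving Show

    data ContactMethod
      = ByEmail String
      | ByPhone String
      deriving Show

    data Customer = Customer
      { customerName    :: Name
      , customerContact :: ContactMethod
      } deriving Show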
> Logic bugs is a giant category of bugs and are what most bugs belong to.
I think logic bugs are relatively rare once you constrain the possible states your code can be in - and they usually come about because of some cyclomatic complexity issue (i.e. putting the if/else branches the wrong way around). Logic just happens to be the bit of your codebase that can't be captured in types (well, not with trivial types anyway). It's the bit of what we write that we interpret from the human world. That's our job, to translate the requirements into code, and the logic is that bit and therefore more prone to human error. I still find I have fewer of them when I have good constraining types that limit the scope for fuckups.
The current state of the world can absolutely be captured in types; then the logic is simply a fold over a sequence of events that take the old state of the world and transform it into a new one. This is also the best way of modeling time, which is a major source of bugs in imperative code. Mutating data structures kills any effective modeling of time, so cheats like locks are needed. Even with locks there's no effective history and no understanding of 'now', which makes code difficult to reason about.
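A minimal sketch of that "state as a fold over events" idea in Haskell; the Event and Balance types are made up here, just to show the shape of it:

    -- Each event describes one change; the current state is the fold of all of them.
    data Event
      = Deposited Int
      | Withdrew  Int

    newtype Balance = Balance Int
      deriving Show

    apply :: Balance -> Event -> Balance
    apply (Balance b) (Deposited n) = Balance (b + n)
    apply (Balance b) (Withdrew n)  = Balance (b - n)

    -- "Now" is just the result of replaying history; any earlier state is still
    -- available by folding over a prefix of the events.
    replay :: [Event] -> Balance
    replay = foldl apply (Balance 0)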
That is the idealised end goal of pure functional programming. The reality is most people will still use an `int` when they should use a constrained value (say [1..10] rather than [int.min..int.max]); and so their program can still get into a bad-state™. The more you constrain though, the fewer bugs you will have.
This is the lot of most devs, working out how much risk you're willing to take.
* Those using dynamic languages take the most risk. They do it because they believe they can write code quicker. Their trade off is lots of runtime errors, or runtime checks, or building lots of unit tests.
* Those using languages like Java or C# take fewer risks, as the compiler can help them out, but they don't tend to create types like `Month` that can only hold the constrained set of ints [1..12]. So they run the risk of accepting 13. They also have extensible type-hierarchies (inheritance), which means it's impossible to know the full scope of any 'abstracted' interface reference passed around. They must cross their fingers and hope nobody screwed up the implementation. These hierarchies are also not composable, which leads to additional logic problems as they're glued together manually.
* Those using languages like Haskell don't have the inheritance problem. Everything is sum-types, product-types, or exponential-types (functions) and is therefore composable. This creates an opportunity to create the idealised program (as I mention above). Logic bugs still exist and usually people don't constrain their types completely, but the situation is often much less risky and turns into more robust code.
> On a small scale this is possible, but once you start working on a larger team, or if you have to integrate your code with constantly changing systems, you will want to be testing your code.
I run two development teams. But I also write a lot of research prototypes; this tends to be where I have some open-ended project that I'm playing around with - often programming with types to see what 'level ups' I can get from doing so. But even in a day-to-day project I could easily spend a few days writing code before I need to check it. For example, I recently wrote a whole language virtual machine: AST, type-system, type-inference, type-checking, and evaluation, without needing to run it - all the types lined up and each subsystem was clear from a logic point of view. Currently, I'm building a C# monad-transformer system for language-ext [1] based around transducers; this is pure abstract type-level programming - I haven't run it yet. If I can run it in my head and it type-checks, that's usually good enough.
Of course at some point I do actually run the thing ;)
>I know when I've constricted my data enough, it's when I know all the components of all types have been constricted enough.
Do you not see that this is a tautology?
>That's our job, to translate the requirements into code, and the logic is that bit and therefore more prone to human error.
The requirements we're given are typically ambiguous, and it is our job to come up with the complete specification ourselves. That specification may have bugs even ignoring the implementation. I am not denying that with enough diligence you can mostly avoid errors in implementing the spec; I am saying that finding the correct specification for what your code should do up front is too hard compared to implementing a wrong spec and then fixing it as bugs come up.
>Those using languages like Haskell don't have the inheritance problem. Everything is sum-types, product-types, or exponential-types (functions) and is therefore composable.
Inheritance is just a different way to create sum types.
>I know when I've constricted my data enough, it's when I know all the components of all types have been constricted enough.
> Do you not see that this is a tautology?
It might be if you hadn't butchered what I wrote. If I have a type called Month, and it can only accept values 1-12, then I know I have constrained that type enough. If I then create types called Day and Year and constrain those, I know they're constrained enough.
If I then compose Day, Month, Year into a type called Date and check the rules of the number of Days in a month so that an invalid date can't be instantiated then I have a more complex type leveraging the simpler ones. I could then put Date into a record type called Appointment, etc. etc. For each type I create I know what data I am trying to represent, so I constrain the type at that point. There's no tautology here, it's just composition of types. Making larger types from smaller ones and making sure they can't ever hold bad state.
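A rough Haskell sketch of that composition, assuming smart constructors are the only way to build the types (in a real module you'd hide the raw constructors behind the export list); leap years are ignored to keep it short:

    newtype Year  = Year  Int deriving (Eq, Show)
    newtype Month = Month Int deriving (Eq, Show)
    newtype Day   = Day   Int deriving (Eq, Show)

    mkMonth :: Int -> Maybe Month
    mkMonth m | m >= 1 && m <= 12 = Just (Month m)
              | otherwise         = Nothing

    mkDay :: Int -> Maybe Day
    mkDay d | d >= 1 && d <= 31 = Just (Day d)
            | otherwise         = Nothing

    data Date = Date Year Month Day deriving (Eq, Show)

    -- The larger type re-checks the rule that spans its components, so a
    -- February 31st can never be constructed through this function.
    mkDate :: Year -> Month -> Day -> Maybe Date
    mkDate y m@(Month mm) d@(Day dd)
      | dd <= daysIn mm = Just (Date y m d)
      | otherwise       = Nothing
      where
        daysIn 2 = 28                           -- leap years ignored for brevity
        daysIn n | n `elem` [4, 6, 9, 11] = 30
                 | otherwise              = 31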
> Inheritance is just a different way to create sum types.
Not really, they're open. Sum-types are closed, i.e. they represent a concrete set of states. The openness of an inheritance hierarchy is the problem. We nearly never need completely open type hierarchies; it's pretty rare that something needs to be extensible in that way outside of library development.
That doesn't mean inheritance is always bad, but the 'always inheritance' approach that is often championed in the OO world certainly is.
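To illustrate the closed/open distinction with a toy example (the Shape type here is just for demonstration):

    -- A sum type is closed: the full set of cases is known at the definition
    -- site, so the compiler can check that every case is handled.
    data Shape
      = Circle Double
      | Rect   Double Double

    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Rect w h) = w * h
    -- Adding a third constructor later makes this match non-exhaustive, and the
    -- compiler can warn about it; an open interface hierarchy gives no such guarantee.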
The goal is to write total functions [1] where every value in the domain can be mapped to a value in the co-domain. If that can't happen then your code is less predictable (throws exceptions or has undeclared side-effects). Having complete control of the states of any data-type is critical to making the approach easy to implement.
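For example, a small sketch of the total-vs-partial distinction: integer division is partial (it blows up on a zero divisor, an outcome its type never declares), but enlarging the co-domain makes it total:

    -- Total: every pair of Ints maps to a value of Maybe Int, so the caller
    -- sees the "no result" case in the type rather than as an exception.
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)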
This breaks down in the "etc." What if an appointment is not valid on the weekend, or on a holiday, or when someone's child has soccer practice, or when that appointment conflicts with another one? There are all sorts of restrictions that can be added, and yes, each one technically covers some potential bug that someone could introduce. But it may be hard to predict all of these restrictions ahead of time.
>Not really, they're open
At compile time, 99% of the time they are closed. Languages also add features that let you make them closed.
>If that can't happen then your code is less predictable (throws exceptions or has undeclared side-effects).
This is impossible in the real world, where hardware issues are real. Exceptions and crashes are okay and are just a fact of life when working with large software systems. We know these systems are not going to be perfect and we embrace that, rather than trying to handle every single thing perfectly every time. It's accepting that these bugs exist, and that processes should be put in place to identify and monitor them so they can be fixed.
Total functions also don't allow for infinite loops, which are commonplace in real systems that are expected to potentially run forever instead of only being able to serve 1000 requests before exiting.
> This breaks down in the "etc." What if an appointment is not valid on the weekend, or on a holiday, or when someone's child has soccer practice, or when that appointment conflicts with another one?
Either the rules are an intrinsic part of an appointment or they're not. If you need something that represents exceptions to the state of a basic appointment, then that's the job of the containing type (say, a type called WeeklySchedule or something like that). Just like `Day` can be [1..31], but when used in a `Date` the constructor of `Date` wouldn't allow a `Day` of 31 with a `Month` of 2.
> At compile time 99% of the time they are closed. Languages also add features to let you make it closed.
You can't close an interface.
> This is impossible in the real world where hardware issues are real.
Exceptions in imperative-land tend to be used for all errors, not just exceptional ones but expected ones too. The problem is that nobody can see those side-effects from the surface (i.e. a function prototype doesn't expose them).
Truly exceptional events should probably shut down the application (things like out-of-memory), or, as with Erlang, escalate to a supervision node that would restart the service.
Exceptions exist in functional languages too. However, expected errors, like 'file not found' and other application level 'could happen' errors should be caught at the point they are raised and represented in the type-system.
For example, a common pure pattern would be `Either l r = Left l | Right r` where the result is either a success (Right) or an alternative value (Left - usually used to mean 'failed').
When your function has this in its declaration, you are then aware of possible alternative paths your code could go down. This isn't known with exceptions.
For example, take `parseInt` in C# and Haskell (below): one declares its side-effect up front, the other doesn't. It may be semi-obvious that the C# one must throw an exception, but that isn't the only outcome - it could also return `0`.
    int parseInt(string x);

    parseInt :: String -> Either Error Int
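To make that concrete, here's a hypothetical implementation of the Haskell signature above (using readMaybe), showing how the caller is forced to acknowledge both outcomes before it can use the Int:

    import Text.Read (readMaybe)

    data Error = NotAnInt String deriving Show

    -- A made-up implementation of the declared signature, for illustration only.
    parseInt :: String -> Either Error Int
    parseInt s = case readMaybe s of
                   Just n  -> Right n
                   Nothing -> Left (NotAnInt s)

    -- The caller can't get at the Int without handling the Left case.
    describe :: String -> String
    describe s = case parseInt s of
                   Left err -> "could not parse: " ++ show err
                   Right n  -> "parsed: " ++ show n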
This confusion of what happens inside large blocks of code is exactly the issue I'm trying to highlight.
> Total functions also don't allow for infinite loops, which are commonplace in real systems that are expected to potentially run forever instead of only being able to serve 1000 requests before exiting.
In pure maths, sure, in reality yeah they do. Haskell has ⊥ [1] in its type-system for exactly this reason.
Anyway, you seem hellbent on shooting down this approach rather than engaging in inquisitive discussion about something you don't know or understand. So, this will be my last reply in this thread. Feel free to have the last word, as I'm not wasting more time explaining something that is literally about soundness in code and why it's valuable. If you don't get it now, you never will.