This reminds me a lot of Gary Bernhardt's Boundaries talk and the associated idea of "functional core, imperative shell". For anyone who found this article interesting, you might also like this: https://www.destroyallsoftware.com/talks/boundaries
Thanks for sharing that. That's a great link, and I agree, they are talking about the same thing and reaching pretty much the same conclusion. The talk is well worth checking out.
Very good article. If you think about it, you will see this pattern in many places, simply because it is the natural outcome of emphasizing pure functions.
For example, it's the heart of the virtual DOM in React. Pure functions create an entirely new (no mutation) virtual DOM, and then something at the "impure boundary" applies this to the actual, messy, mutable DOM of the browser.
With a little thought you can probably find several other well-known examples.
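The pattern generalizes beyond React. Here is a minimal sketch in Python (all names invented for illustration): the pure core computes an entirely new view of the next state, and a thin impure shell is the only code that mutates the real target.

```python
# Functional core: pure, no mutation -- returns a brand-new "virtual" view.
def render(state):
    return {"title": state["title"], "items": list(state["items"])}

# Imperative shell: the only place that touches the real, mutable target.
def apply_view(real_dom, view):
    for key, value in view.items():
        real_dom[key] = value  # mutation is confined to this boundary

state = {"title": "Cart", "items": ["apple"]}
real_dom = {}
apply_view(real_dom, render(state))
```

The pure `render` is trivially testable with plain assertions; only `apply_view` needs the messy environment.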
It's interesting -- this is not what I generally end up using DI for. The point of DI, a lot of the time, is to put seams into the program where you can test units of a limited size. If you don't have some way of injecting behavior, you end up having some serious trouble when you have a module that brings together the behavior of many sub-modules (how do you test it?). That's something these short code example style articles never seem to capture for me.
Rather than simply condemning you, since I have had similar thoughts, I think the answer is "it depends".
Testing every teeny tiny piece of a program is just stupid, and leads to more test code than product code, to no real benefit. (Java bean [blech!] getter/setter pairs, I'm looking at you!) ... Particularly very low level details that may end up being thrown out tomorrow morning after you rethink the problem.
OTOH, it's not a bad idea to make sure you test [most] every path, somehow.
At some point, though, sub-assemblies of larger apps are complicated enough to make testing them prudent. I'm not too proud to simply read an environment variable for things such as a server address/socket, though, rather than insisting on D/I. Other times, you gotta do what you gotta do, with complicated mocks and some kind of D/I.
Make sure requirements get tested, but not all [trivial] infrastructure really warrants the make-work.
1. Speed of automated tests.
2. Often it is good to verify that the semantics of a subsystem are solid before integrating it, so that edge cases are rooted out and, at the very least, it's easy to find where a break occurred.
3. You might have external dependencies that are difficult or impossible to duplicate in your testing environment. Yes, stuff like this exists -- e.g. a medium-size retail bank will have at least a dozen, typically multiple-dozens of external contractors, each with their own APIs and testing environments.
This is the only answer that speaks to me. Although I would think most often you would have such dependencies be API calls over a network. In such a case it would be more prudent to create a mock server rather than use DI. But yeah, in general an API from a third party might call for DI. Thanks.
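For what it's worth, the call site looks almost the same either way. A minimal Python sketch (all names invented) of the DI variant, where the third-party call is injected as a plain function so tests substitute a stub instead of standing up a mock server:

```python
# The third-party API call is passed in as a plain function parameter.
def account_balance(fetch_account, account_id):
    account = fetch_account(account_id)  # impure in production, pure in tests
    return account["balance"] - account["holds"]

# Production would inject a real HTTP client; tests inject a stub.
def fake_fetch(account_id):
    return {"balance": 100, "holds": 30}

print(account_balance(fake_fetch, "acct-1"))  # -> 70
```

The mock-server approach would instead keep the real client and point its base URL at a local fake; which is more prudent depends on how much of the client you want exercised in tests.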
So what if a dev home-brews a parser, for instance, and you use it to read in files from a legacy system via FTP. You wouldn't test the parser code on its own? You'd test it all end to end?
Best case, you should build both integration tests (the entire system tested together) and unit tests (tests for individual pieces of the system isolated from others). The two types of tests will reveal different flaws in your code. Also, since it's difficult to write unit tests for tightly-coupled code, having a battery of unit tests can also help guide you towards a better overall architecture.
If you're going to write just one kind of test, you should write integration tests. But if you have the time to write both, they will pay dividends.
1. This is overly simplistic. What happens if your IO and logic are by necessity interleaved? Grab X out of DB, grab Y or Z out of DB depending on X's value, etc.? The whole thing just reeks of "ideal case".
2. This is overly complex. All that really needs to be said here is "pull out your pure code when possible". There's nothing special about F# to enable that. The logic in "tryAcceptComposition" is just a function calling other functions; you can do that in C# or even C. The only advantage F# adds here is the piping syntax, which to me only serves to make the code more obtuse. But I guess you couldn't write a three-part series about a single "extract pure function" op.
(This brings up an interesting thought: ReSharper should come up with a way to let you highlight a function and extract the "obviously pure" tidbits automatically).
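That said, the interleaved case can often still be factored: keep each decision pure, and let a thin impure driver alternate between reads and decisions. A sketch in Python (all names invented) of the "grab X, then grab Y or Z depending on X" shape:

```python
# Pure decision: given X, decide which key to fetch next.
def choose_next_key(x):
    return "y" if x > 0 else "z"

# Pure logic over whatever the second read returned.
def combine(x, second):
    return x + second

# Impure driver: the only function that touches the "database".
def run(db):
    x = db["x"]                       # IO step 1
    second = db[choose_next_key(x)]   # IO step 2, routed by pure logic
    return combine(x, second)

db = {"x": 1, "y": 10, "z": 100}
```

The driver stays impure, but `choose_next_key` and `combine` carry the logic and are testable without any database.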
I'll have to learn more about free monads. In general I love F# for doing domain modeling and logic, but I still find OO-style DI better for organizing "services". I've followed ploeh and scott wlaschin for some time and all my attempts to use their DI concepts in my own real-world code have led to code that's less intelligible than IoC with no tangible benefit. It's not for lack of trying, and I think not for lack of intelligence. It just never worked for me.
If free monads could provide something better than standard DI, and (and this is a big caveat) still retain decent editor integration (autocomplete, go-to-declaration/implementation), then I'd check it out. But my gut feeling says that it'll end up being a leaky abstraction that will need undue patching up just to maintain it.
F# uses .NET classes and objects for a module system, so your use of Objects for "services" is not surprising. An OCaml programmer is much less likely to miss Objects and DI frameworks, as OCaml has a powerful module system (i.e. module functors).
Free Monads can reify an effectful computation, giving flexibility on how it is interpreted. But they are not really a substitute for a good module system.
Okay I think I get free monads now. They seem pretty awesome, essentially letting you plug in an interpreter for the function you're going to run. I can see high-level how this would appear to be a great generic DI option--you set up an "interpreter" to handle the statements in your function however you want: for reals, for test, for reals with logging, etc. And automagically everything gets executed exactly how you want with no additional cruft.
What makes me cautious about the concept though is that e.g. `do_x_and_y()` would be interpreted differently than `do_x(); do_y()`, even if they were fundamentally the same. While "so what?" is a perfectly valid response, that little tidbit just makes me feel like, while FM's are a very cool abstraction for something, it's not really ideal for DI. It's just something meant for a different level. The article "The Wrong Abstraction" comes to mind.
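To make the "plug in an interpreter" idea concrete, here is a toy sketch in Python (not F#, and every name here is invented): the program is plain data describing effects, and the interpreter decides what the effects mean.

```python
# A tiny free-monad-style program: effects are data, interpreted later.
class Pure:
    def __init__(self, value):
        self.value = value

class GetLine:
    def __init__(self, next_prog):  # next_prog: str -> program
        self.next_prog = next_prog

class PutLine:
    def __init__(self, line, next_prog):  # next_prog: () -> program
        self.line = line
        self.next_prog = next_prog

# The same program can be run "for reals", "for test", or with logging.
def greet():
    return GetLine(lambda name: PutLine("hello " + name, lambda: Pure(name)))

# A "for test" interpreter: supplies canned input, captures output.
def run_test(program, fake_input, output):
    while True:
        if isinstance(program, Pure):
            return program.value
        if isinstance(program, GetLine):
            program = program.next_prog(fake_input)
        elif isinstance(program, PutLine):
            output.append(program.line)  # capture instead of printing
            program = program.next_prog()

output = []
result = run_test(greet(), "world", output)
```

A production interpreter would pattern-match the same instructions but call real `input`/`print`; the program itself never changes.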
> ... you set up an "interpreter" to handle the statements in your function however you want: for reals, for test, for reals with logging, etc.
That is the tip of the iceberg of free monads. Their full power lies in being able to combine different types of effects into more powerful, composed effects. E.g. you want to do IO while also processing probability distributions using a probability monad. But they can get pretty hairy. See https://youtu.be/qaAKRxO21fU for the gory details.
The monad laws guarantee that there's no difference between do_x(); do_y(); and do_x_and_y(); where the latter is defined as { do_x(); do_y() }. In fact, monads would be pretty useless if that were not the case.
Out of curiosity, OCaml also has traditional classes/interfaces/objects, right? If so, then how do you decide when to use those versus module functors?
Modules and functors are able to contain type definitions, while classes/objects are not. This makes modules practically much more useful for abstraction.
AFAIK no one uses Objects in OCaml as the module system is sufficiently powerful. The Mirage project is a good example of using OCaml module functors to specialise components.
It's not any lack on your part. F# doesn't really support the really helpful abstraction techniques like parameterised modules or typeclasses. You can roll your own typeclasses using just simple records containing functions. It's easy, idiomatic, and it works statically. E.g., imagine you have a users 'service', with operations 'get by ID', 'add', and 'rename':
(** Type-safe IDs using a phantom type. *)
module Id =
    type 'a t = private T of uint64
    let of_uint64 u = T u
    let to_uint64 (T u) = u

(** A domain type. *)
module User =
    type t = private { id : t Id.t; name : string; age : int }
    let make uid name age : t = ...
    ...

(** Users service typeclass. *)
module User_service =
    type t =
        { get_by_id : User.t Id.t -> User.t Async
          add : User.t -> unit Async
          rename : string -> User.t Id.t -> unit Async }

    let db : t =
        { get_by_id = fun uid -> ...
          add = fun u -> ...
          rename = fun name uid -> ... }

    let test : t =
        { get_by_id = fun uid -> ...
          add = fun u -> ...
          rename = fun name uid -> ... }
Now, injecting a user service dependency into any function is equivalent to passing in a parameter of type `User_service.t`.
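For readers who don't speak F#, the same "record of functions" idea translates to almost any language. A Python sketch (all names invented): the dependency is just a value you pass in, and swapping `db` for `test` means swapping the value.

```python
from dataclasses import dataclass
from typing import Callable

# A "typeclass" as a plain record of functions.
@dataclass
class UserService:
    get_by_id: Callable[[int], dict]
    add: Callable[[dict], None]
    rename: Callable[[str, int], None]

# An in-memory "test" instance; a "db" instance would have the same shape.
_store = {}

test_service = UserService(
    get_by_id=lambda uid: _store[uid],
    add=lambda u: _store.__setitem__(u["id"], u),
    rename=lambda name, uid: _store[uid].__setitem__("name", name),
)

# Injecting the dependency is just passing the record as a parameter.
def greet_user(service, uid):
    return "hi " + service.get_by_id(uid)["name"]

test_service.add({"id": 1, "name": "ada"})
```

No container, no framework: the record is the seam, and it's checked statically in languages like F# that can type the fields.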
As for free monads, I don't think F# will make them easy. If you notice, one commenter in that GitHub discussion mentioned they were doing a lot of copy-pasting to implement FMs. Imho that's a bad sign.
I like the idea of partial application, but in practice I often find it more trouble than it's worth. I find myself spending too much time worrying about which order to declare function parameters in for the greatest use of partial application across different contexts, which is ultimately orthogonal to "write this function, call this function, check in my code", and it often makes the code's intention less apparent rather than more so.
I like it in certain purely functional data crunching routines, where it is idiomatic and can often make the code more generic and more intentional, but I've never had much luck doing DI this way. YMMV.
Wrong language, but here is an example of a mechanism to partially apply the trailing arguments, rather than the more customary leading arguments: http://ramdajs.com/docs/#partialRight
And finally, if something in the middle is the thing that needs to be nailed down, you can simply write a one liner function to provide the fixed thing and pass in the rest. (not really PFA any more at that point, but it will get the job done)
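All three variants are one-liners in most languages. A Python sketch (the `send` function is invented) of fixing the leading, trailing, and middle arguments:

```python
from functools import partial

def send(logger, message, recipient):
    return f"{logger}: {message} -> {recipient}"

# Leading argument: the customary partial application.
send_logged = partial(send, "audit")

# Trailing argument, in the spirit of Ramda's partialRight.
send_to_ops = lambda logger, message: send(logger, message, "ops")

# Fixing a middle argument: the "one-liner function" described above.
send_ping = lambda logger, recipient: send(logger, "ping", recipient)
```

As noted, the last two aren't really partial application any more, just small wrapper functions, but they get the job done.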
The key part about why dependency injection with partial application is not functional:
> When you inject impure operations into an F# function, that function becomes impure as well. Dependency injection makes everything impure, which explains why it isn't functional.
It seems that in the end the initial function Post(ReservationRequestDto dto) has to call tryAcceptComposition, which has its dependencies hard-coded there. So how exactly does this solve the issue?
If you pass the dependencies as parameters to the tryAcceptComposition function, Post would have to know its dependencies, and we would be back to the initial state.
I would like to see the whole example he showed before using this model, to see how this scales to more than one function.