Dependency Injection, Inversion of Control and the Dependency Inversion Principle (my-junk.info)
37 points by kasey_junk on April 12, 2015 | 23 comments



All I can say is that when I write code in Clojure (with higher-order functions), I reorganize code fairly often in these sorts of ways without thinking about the "name" for it (e.g. dependency injection or inversion of control).

I'm less interested in what it is called in other languages (where it seems like you have to really 'work' at it). Instead, I just think about functions, namespaces, and good design, which to me comes down mostly to answering the question "What code is responsible for what functionality, and why?"

This is not to fault an article like this, but I find it striking. It reminds me of a point you often hear (e.g. in Russ Olsen's book Design Patterns in Ruby): some languages (e.g. Ruby) make certain patterns (e.g. from Java) so easy that you no longer have to think about the pattern at all.


I have a lot of sympathy for this way of thinking, especially given that so many of the Gang of Four patterns seem to be complicated ways to do higher-order functions.

That said, all languages have patterns and a shared vocabulary around concepts is essential to a profession. So I'd encourage you to become familiar with the nomenclature of the patterns in the idiom you are programming in.

Finally, the Dependency Inversion Principle is a principle, not a pattern, and it is as applicable in Clojure/Lisp/et al. as in Java. That is, instead of organizing the problem from higher to lower levels of abstraction, the responsibilities should be organized around functionality, and the abstraction of the dependency should live with the code that does the depending, not with the code that is depended upon.
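
To make that concrete, here's a rough Scala sketch (all the names are made up): the high-level module declares and owns the abstraction it needs, and the low-level detail implements it.

    // The high-level module owns the abstraction it depends on.
    object Billing {
      trait PaymentGateway {
        def charge(accountId: String, cents: Long): Boolean
      }

      class InvoiceProcessor(gateway: PaymentGateway) {
        def settle(accountId: String, cents: Long): Boolean =
          gateway.charge(accountId, cents)
      }
    }

    // The low-level detail depends "upward" on that abstraction.
    object StripeAdapter {
      class StripeGateway extends Billing.PaymentGateway {
        def charge(accountId: String, cents: Long): Boolean =
          true // a real implementation would call the payment API here
      }
    }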


I can't figure out the point of this article. Why does adding one more seemingly arbitrary layer of abstraction help anything?


TL;DR: your complex logic depends on data objects declared in your own module. Instead of Logic => Dependency, refactor to Logic => Data + Data => Dependency. The code is organized as:

    // What kind of Data Logic processes.
    trait Data { ... }

    // How Data reads from a concrete Dependency
    class DataFromDependency extends Data { ... }

    // Logic, which depends only on the Data abstraction
    class Logic(data: Data) { ... }

Whys:

* Trivial testing of the Logic, by instantiating Data objects with fake data (see the sketch below). If you have ever had to stand up a DB and populate it with the 132 right objects just to test that date conversions work correctly, you know what this means.

* The Data adaptors reduce the semantic surface of the Dependency to what Logic needs. This makes reading and reasoning about Logic easier, especially if Dependency is a "fully featured" library with tens of methods Logic couldn't care less about.
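
For example, a rough sketch filling in hypothetical bodies for the classes above (the Dependency API is made up):

    // Hypothetical heavyweight dependency with a wide API surface.
    trait Dependency { def queryAll(table: String): Seq[String] }

    // What Logic actually needs from it.
    trait Data { def records: Seq[String] }

    // Adaptor: reads Data from the concrete Dependency.
    class DataFromDependency(dep: Dependency) extends Data {
      def records: Seq[String] = dep.queryAll("records")
    }

    class Logic(data: Data) {
      def count: Int = data.records.size
    }

    // In a test, hand Logic fake Data -- no database to stand up or populate.
    object LogicTest {
      def main(args: Array[String]): Unit = {
        val fake = new Data { def records = Seq("2015-04-12", "2015-04-13") }
        assert(new Logic(fake).count == 2)
      }
    }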


In Ruby this sort of thing comes naturally, because its metaprogramming features are much more flexible than those of Java-like languages. Testing frameworks like RSpec, and the various gems that go with it for building mock objects, make this very easy and common, without having to restructure the original code or keep track of a bunch of non-obvious names for a bunch of patterns.

I have nothing against Java and still take work in Java, but I think every Java developer would learn a lot from Ruby-like languages.


If I read that correctly--and I most likely did not--it seems that he's making the argument that a misunderstanding of semantics and the lack of clear definitions lead to poor implementations of DI, IoC, and DIP. The final case seems mostly to address Java's flagrant mis/overuse of interfaces.

Like you, though, I struggle to find a clear message. Is this a complaint about things being done wrong? The solution presented around the Dependency Inversion Principle was really tough to understand; it looks like he just made everything harder. It's been so long since I've done anything but front-end web work, though, that this might just be me.


Thanks for the feedback. This was an article I wrote after having a few discussions with developers where people were confusing DI, IoC, and DIP (and IoC containers were thrown in as well). I posted it today as there were several other articles that were also mixing the topics up.

My hope was to make the differences more clear. Not accomplishing that was the more probable outcome ;)


Why should one not use setter methods?


http://en.wikipedia.org/wiki/Class_invariant

(above is a special case of below)

http://en.wikipedia.org/wiki/Design_by_contract

Alas, Eiffel looked like Modula (Pascal) rather than like C/C++, so it never caught on. The important part, though, was the idea of class invariants + method pre-conditions + method post-conditions. Meyer's book, from years before Java and "Java beans" infected the group-think, talked about classes that were explicitly constructed in a valid state (which also meshes well with the practice of immutability), and about methods with explicit preconditions stating their requirements and postconditions stating what they guaranteed to accomplish.

"Java beans" pretty much took these otherwise sound engineering principles and pissed all over them.

Rather than having a constructor that builds an object that you can then start using, you have to guess (unless documentation is very good, which it won't be) which setters must be called before "bean" is non-crap.
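
For contrast, a small sketch (the Mailer example is hypothetical) of an object constructed in a valid state versus a bean where you have to guess which setter is mandatory:

    // Constructed valid: the invariant (a non-empty host) is checked once, up front.
    final class Mailer(smtpHost: String) {
      require(smtpHost.nonEmpty, "smtpHost must be set")
      def send(to: String, body: String): Unit = println(s"sending via $smtpHost to $to")
    }

    // Bean style: callers must guess that setHost is mandatory before send works.
    class MailerBean {
      private var host: String = null
      def setHost(h: String): Unit = { host = h }
      def send(to: String, body: String): Unit = {
        if (host == null) throw new IllegalStateException("host was never set") // fails at run time
        println(s"sending via $host to $to")
      }
    }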

Yes, I'm bitter that such an obviously flawed practice became standard operating procedure, to the point that doing things right is viewed as suspect.


Setters basically send a signal to the developer that the dependency is "optional", since someone can create the object and simply not call the setter. Most of the time the dependencies are assumed to have been set, and the object blows up at run time when they are accessed.

By using constructor injection you prevent this from happening. Also, consider the case of refactoring: you have a class that is created in a few places, and you add a new dependency to it via a setter. The code compiles and looks fine, but it actually isn't - you need to find every place you create that object and provide the extra dependency. With constructor injection you'd get a compile error (in a statically typed language) at every place that needs to be fixed.
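
Roughly, in Scala (names are made up):

    trait OrderRepo { def save(id: String): Unit }
    class AuditLog { def record(msg: String): Unit = println(msg) }

    // The new dependency goes into the constructor rather than a setter.
    class OrderService(repo: OrderRepo, audit: AuditLog) {
      def place(id: String): Unit = { repo.save(id); audit.record(s"placed $id") }
    }

    object Wiring {
      val repo: OrderRepo = new OrderRepo { def save(id: String): Unit = println(s"saved $id") }
      // Every construction site still written as `new OrderService(repo)` now fails to
      // compile, so the compiler points at each place that must supply the AuditLog.
      val service = new OrderService(repo, new AuditLog)
    }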


If your objects require dependencies to do their jobs, it should be impossible to create objects without those dependencies. The objects' constructors should require them. Having setters is, arguably, fine (but probably unnecessary) as long as invalid objects can't be created.


Using dependency injection adds complexity. You have to reason about how the injected functionality can change.

If you use setter-based injection (as opposed to constructor or parametric injection), then you also add the complexity of reasoning about when it can change.


Thank you, loved it. I think the DIP part could use a little more explanation, but the DI/IoC parts very much hit home with me, underscoring some points I have also been trying to get across to people.


Nice article, liked it. One more thing that should at least be mentioned, so a newcomer could investigate further, is the Builder pattern. People often start using setters for required dependencies (instead of constructors) when they have a lot of dependencies, or when several different combinations of dependencies can be injected. This can be fatal when anyone besides the class's author has to use it. The Builder pattern can really clear things up a bit, or at least hide the messy parts (if not make them clearer).
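
Something like this, as a rough Scala sketch (all names are hypothetical):

    trait Clock { def now(): Long }
    trait Storage { def put(key: String, bytes: Array[Byte]): Unit }

    // The real class still takes everything through its constructor.
    final class ReportService(clock: Clock, storage: Storage, maxPages: Int)

    // The builder gives callers readable, step-by-step wiring and can
    // validate that the required pieces were supplied before building.
    class ReportServiceBuilder {
      private var clock: Option[Clock] = None
      private var storage: Option[Storage] = None
      private var maxPages: Int = 100 // optional, with a default

      def withClock(c: Clock): ReportServiceBuilder = { clock = Some(c); this }
      def withStorage(s: Storage): ReportServiceBuilder = { storage = Some(s); this }
      def withMaxPages(n: Int): ReportServiceBuilder = { maxPages = n; this }

      def build(): ReportService = new ReportService(
        clock.getOrElse(throw new IllegalStateException("clock is required")),
        storage.getOrElse(throw new IllegalStateException("storage is required")),
        maxPages)
    }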


The author doesn't seem to understand where to use DI and where to use a Container.

DI is used in the context of library code, e.g. your main business-logic library. This is what lets you unit test your business logic.

A container is used at the application level, i.e. in the application that uses the business-logic library. Maybe you're using a web framework; it's then the web framework that owns the container. I.e. framework + business logic = application.
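
As a rough sketch (hypothetical names), the split looks like this:

    // --- business-logic library: plain constructor DI, no container in sight ---
    trait CustomerRepo { def find(id: String): Option[String] }
    class GreetingService(repo: CustomerRepo) {
      def greet(id: String): String = repo.find(id).fold("who?")(n => s"hello, $n")
    }

    // --- application: the composition root (or the framework's container) wires it up ---
    object App {
      def main(args: Array[String]): Unit = {
        val repo = new CustomerRepo { def find(id: String) = Some("Ada") }
        val service = new GreetingService(repo) // in a real app the container would do this
        println(service.greet("42"))
      }
    }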


>> There is a second form of Dependency Injection that uses setters instead of constructor injection. Do not use this form.

Why? Personal preference I guess? Constructor injection works fine for low level objects that only take 1 or 2 dependencies, but for higher level business objects that might have a wider dependency reach, constructor injection can become a nightmare.


That's a section of the article I probably could have fleshed out more broadly, but the original audience was largely made up of developers who rejected the entirety of DI and IoC, mostly due to their past negative experiences with setter-based IoC containers.

I've answered the problem with setter-based DI elsewhere in these comments, but another way to put it is that you've now made the behavior of the system, not just its data, into mutable state. That is harder to reason about, harder to test, and harder to get right in concurrent contexts.
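
A small illustration of what "behavior as mutable state" means (hypothetical names):

    trait Pricing { def price(sku: String): BigDecimal }

    // Constructor injection: the strategy is fixed for the object's lifetime.
    class Checkout(pricing: Pricing) {
      def total(sku: String): BigDecimal = pricing.price(sku)
    }

    // Setter injection: the behavior itself can be swapped mid-flight,
    // so any thread (or any earlier line of code) may have changed it.
    class CheckoutBean {
      private var pricing: Pricing = null
      def setPricing(p: Pricing): Unit = { pricing = p }
      def total(sku: String): BigDecimal = pricing.price(sku) // which Pricing? depends when you ask
    }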

> for higher level business objects that might have a wider dependency reach, constructor injection can become a nightmare.

Again, I view this as a symptom of bad design. If you have a wide dependency reach, then using a DI container only lets the problem grow. Using setter DI, which normally compromises the design even more, to supposedly make the design better is suspect.

I think you would be better served by fixing your high level business objects to not have a wider dependency reach, and the dependency inversion principle is one tool with which you can do that.


It's normal for higher-level business objects to have a wider dependency reach; these are the elements that facilitate communication between lower-level objects. The last thing you want to do is pass around a container as a dependency, because you don't want your core business logic to depend on a specific container (the container is a framework thing). Injecting a container would be a sign of bad design.


I didn't intend to imply you should inject an IoC container. Offhand I cannot think of an example of when that would be appropriate.

What I did intend to say is that having a wide dependency reach is a negative outcome that is usually the result of bad design choices. That high-level components have many low-level dependencies is precisely evidence that the dependency inversion principle has not been followed.

The entire point of the principle is to decouple high level components from low level ones and to flatten that hierarchy so that the components become peers that each own their own abstractions around dependencies.


>> There is a second form of Dependency Injection that uses setters instead of constructor injection. Do not use this form.

> Why?

Not using setter injection eliminates a huge class of bugs that occur when people try to use objects that haven't had all of their dependencies injected yet:

    $user = new RegisteredUser();
    // any code that touches $user here sees a RegisteredUser with no email set
    $user->setEmail($email);

> constructor injection can become a nightmare.

Which is why people use a library to do it for them - one example for PHP is https://github.com/rdlowrey/auryn.


Please excuse my ignorance, but what programming language did the author use in the examples?


Scala


Thank you very much!



