All high-level classes must depend only on Interfaces
-
Ravi Bhavnani wrote:
That's one of the tenets of dependency injection because it makes for a testable and extensible design
Yep, the devs who know this concept understand that's exactly how it works. It's a foundational idea of DI. Thanks for commenting. I'm curious about:
1. how many developers really know that concept
2. how many developers / shops actually use it
3. how devs who work at shops where it is used like or dislike it
The comments so far have been very interesting. Have you, by chance, read that MS Unity PDF that I referenced in my original post? It has some great info on DI, but it's quite old, and further along the examples just jump into extreme detail on using the Unity container. Oy! They should've made a smaller set of examples. That's actually what I was attempting with my latest article. Thanks again for the conversation.
Our latest “fresh” developers think OOP is microservices… They cannot even refactor simple code: the original code did X 5 times, so why does the new code do X 4 times? Even some of the more experienced group are too reliant on full solutions from StackOverflow, etc. They cannot take two topics and synthesize a unique solution.
-
It only makes sense to me when it can be more than one "is a". Other than that, it's another "ritual". And all you keep doing is going back and forth between (one) "implementation" and interface; until it's obvious one needs (or can benefit from) an interface. Then you also have to deal with the school that says "no inheritance"; which in essence means no "base methods"; virtual or otherwise. Another pointless ritual that only becomes "real" because someone "ordered" it; or can't decide when it is appropriate. See: "abstract" TextBoxBase.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
-
raddevus wrote:
how many developers really know that concept
I would have assumed devs with some experience would be aware of this. In our shop it's a given because you can't write a unit test with a mocked dependency without using this paradigm. :) It's also one of our pre-interview phone screen questions. There's another subtle aspect to this, though: when using MEF, you can encounter a run-time failure (error constructing a service class) when any dependency in the chain fails to construct because of a missing [Export] attribute on a class in the dependency hierarchy. I didn't want our devs to have to manually check for this so I wrote a tool that reflects the codebase and identifies these broken classes. /ravi
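Not the actual tool (which isn't public), just a rough guess at the shape of such a check: a sketch that scans assemblies for exported MEF parts whose importing-constructor parameters have no matching [Export], ignoring contract names, property exports/imports and [ImportMany]:

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.Linq;
using System.Reflection;

static class MefExportChecker
{
    // Reports exported parts whose importing-constructor parameters have no
    // matching [Export] anywhere in the scanned assemblies. Simplified: real MEF
    // matches by contract name/type, and also supports property imports/exports.
    public static IEnumerable<string> FindBrokenParts(params Assembly[] assemblies)
    {
        var allTypes = assemblies.SelectMany(a => a.GetTypes()).ToList();

        // Contract types that something exports. [Export] with no contract type
        // defaults to the decorated class itself.
        var exported = new HashSet<Type>(
            from t in allTypes
            from e in t.GetCustomAttributes<ExportAttribute>()
            select e.ContractType ?? t);

        foreach (var part in allTypes.Where(t => t.GetCustomAttributes<ExportAttribute>().Any()))
        {
            var ctor = part.GetConstructors().FirstOrDefault(
                c => c.GetCustomAttributes<ImportingConstructorAttribute>().Any());
            if (ctor == null) continue;

            foreach (var p in ctor.GetParameters())
            {
                // If nothing exports this contract, composition fails at run time
                // when this part (or anything depending on it) is constructed.
                if (!exported.Contains(p.ParameterType))
                    yield return $"{part.FullName}: no [Export] found for {p.ParameterType.FullName}";
            }
        }
    }
}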
-
I'm very interested in feedback on this. Yes, it's somewhat related to my latest article[^], but I'm going through what I explain below right now. What if you were going to design some new service or app and you were told:
Development Manager:
"All high-level classes must depend only on Interfaces."
What if I told you that and was entirely serious? Would you balk? Or think, "Yes, that is the way it is and should be." After that your manager says,
Development Manager:
"Something else will decide how to build the implementation which will fulfill the Interfaces."
Would that sound normal to you, or completely crazy? Or somewhere in between?

The Implications
Do you honestly understand the implications?

No Implementation Code
One of the implications is that the Service or App you are creating basically has no implementation code in it. (Or very little.) Why? Because your high-level app only depends on easily-replaceable Interfaces. That means if you want to see the implementation, you'll need to go to the Library (probably a separate project) which contains the implementation that is used to fulfill the Interface. How do you feel about that? Do you know how crazy it is to look at a project that has been designed this way? Have you ever experienced a project that is carried out like this?

Why I'm Thinking About This Even More
I have just completed 50 pages (of a total of 241) of the very old book (2013) Dependency Injection with Unity (free PDF or EPUB at link)[^].
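To make the directive concrete, here's a minimal sketch with hypothetical names (IGreetingService, ConsoleGreetingService, WelcomeController), using Microsoft.Extensions.DependencyInjection purely as a stand-in for whichever container (Unity, MEF, anything else) plays the role of the "something else":

// The high-level class sees only an interface; the composition root (the
// "something else") decides which implementation fulfills it.
using Microsoft.Extensions.DependencyInjection;

public interface IGreetingService              // abstraction the app depends on
{
    string Greet(string name);
}

// Lives in a separate implementation project/library.
public sealed class ConsoleGreetingService : IGreetingService
{
    public string Greet(string name) => $"Hello, {name}!";
}

// High-level class: no `new`, no concrete types, only the interface.
public sealed class WelcomeController
{
    private readonly IGreetingService _greetings;
    public WelcomeController(IGreetingService greetings) => _greetings = greetings;
    public string Welcome(string user) => _greetings.Greet(user);
}

public static class Program
{
    public static void Main()
    {
        // Composition root: the only place that knows the concrete type.
        var services = new ServiceCollection()
            .AddSingleton<IGreetingService, ConsoleGreetingService>()
            .AddTransient<WelcomeController>()
            .BuildServiceProvider();

        System.Console.WriteLine(services.GetRequiredService<WelcomeController>().Welcome("world"));
    }
}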
Whoever gave that directive is a man after my own heart. It's extreme, to be sure. Realistic to literally follow 100% of the time? Probably not. But as an aspiration, a philosophy - absolutely. If you do this, you will be able to grow and scale your products effortlessly for decades - basically for as long as the programming language you use is supported - without a rewrite. NuGet packages, even entire application frameworks will come and go, yet your core code will be snug as a bug in a rug, wrapped in layers of abstraction that shield it from the chaos. When your favorite library is deprecated, revealed to have a critical vulnerability, or the vendor jacks up the price on you, you scoff at how simple it is to assign someone to find a replacement and write the wrapper layer - *completely independently of everyone else*. Your customer tells you the application you designed for Azure now needs to run on AWS? "No problem", you say, "give me a week." Microsoft decides to make 100 new breaking changes to ASP.NET Core? Bah! The upgrade takes an hour. You will never be stuck relying on proprietary technology outside of your control ever again. The term "technical debt" won't even be part of your vocabulary. So yes. Those who know, do this.
-
raddevus wrote:
Would you balk? or think, "Yes, that is the way it is and should be."
Depends. If one complex layer (A) is dependent on another complex layer (B), then unit testing A becomes quite a bit more difficult if B does not provide an interface.
raddevus wrote:
Do you know how crazy it is to look at a project that has been designed this way?
Designing general solutions based on one implementation will fail. The general solution will encapsulate all of the assumptions about the single implementation, so it achieves nothing in terms of generalization. Even when multiple implementations are known, it requires rigorous oversight to ensure that someone doesn't attempt to generalize from a subsection of the implementations. They end up doing the same thing - implementing based on just the subsection.
-
We try to follow this principle. Even if you do something like: IProvider thing = new RealProvider(); it makes you plan in terms of the consumer. If you end up with multiple providers later, switching to IoC is really easy. Or substituting a mock for testing, or writing tests based on IProvider, etc. And of course, all the high-level code that is calling the interfaces is “real” code. If your IDE can’t show you all of the implementors of the interface in milliseconds, then try a better IDE.
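A minimal sketch of that pattern, keeping the hypothetical IProvider/RealProvider names from above:

// The consumer is written against the interface even while there's only one
// implementation and it's constructed directly.
public interface IProvider
{
    string Fetch(string key);
}

public sealed class RealProvider : IProvider
{
    public string Fetch(string key) => /* call the real backing store */ $"value-for-{key}";
}

public sealed class ReportBuilder
{
    private readonly IProvider _provider;

    // Step 1: no container yet - the caller just does
    //   var builder = new ReportBuilder(new RealProvider());
    // Step 2 (later): an IoC container supplies IProvider instead; ReportBuilder doesn't change.
    public ReportBuilder(IProvider provider) => _provider = provider;

    public string Build(string key) => $"Report: {_provider.Fetch(key)}";
}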
englebart wrote:
If you end up with multiple providers later,
That however is the problem. First, of course, it assumes that that is realistically even possible. Second, it assumes that the generalized interface will actually be abstract, so a different provider can be put in. Databases are an excellent example of this. They do in fact change on occasion. But excluding very simple usages, I have never seen this go smoothly or quickly. I saw one company that specifically claimed (marketing) that their product was database agnostic, and yet I saw the following:
1. The product could not meet performance goals, probably by at least an order of magnitude.
2. Their 'schema' was obviously generated in such a way that it would make any DBA tasked with supporting it not happy at all (as the DBA I talked to reported).
3. They spent months on site trying to get it to work correctly and fast enough to be even possible for the company to use it. They were still working on it when I left the company.
-
raddevus wrote:
All high-level classes must depend only on Interfaces
That's one of the tenets of dependency injection because it makes for a testable and extensible design. We follow that guideline at my shop. (Apologies if I misunderstood your comment.) /ravi
Ravi Bhavnani wrote:
That's one of the tenets of dependency injection because it makes for a testable and extensible design.
Those are buzz words however. It is like saying that the code should be 'readable'. Has anyone measured, with objective measurements, how successful that is? How do you create a design that is 'extensible' when you do not know what business will be like in 5 years? Or 20? What are you testing exactly? How do you measure it? Are bugs in production compared to those in QA and those in development? Does your testing cover not only simple unit testing but complex scenarios? What about failover testing? What about production (not QA) testing? Do you have actual injection scenarios that test different configurations? This is possible in certain situations, such as in performance testing specific code. But it must be planned for and then actually used in an ongoing way.
-
I do like the rule of no concrete super/base classes. One concrete type extending another concrete type always causes grief down the road when someone adds a third concrete type into the mix.
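One hedged illustration of that rule (hypothetical report types): share behavior through an interface plus composition instead of extending a concrete base, so a third type slots in without touching the first two.

using System.Collections.Generic;
using System.Linq;

public interface IReport
{
    string Render(IReadOnlyList<string[]> rows);
}

// Shared behavior lives in a small helper that is composed, not inherited.
public sealed class RowFormatter
{
    public string Join(string[] row, string separator) => string.Join(separator, row);
}

public sealed class CsvReport : IReport
{
    private readonly RowFormatter _formatter = new RowFormatter();
    public string Render(IReadOnlyList<string[]> rows) =>
        string.Join("\n", rows.Select(r => _formatter.Join(r, ",")));
}

public sealed class TabReport : IReport   // the "third type" added later, CsvReport untouched
{
    private readonly RowFormatter _formatter = new RowFormatter();
    public string Render(IReadOnlyList<string[]> rows) =>
        string.Join("\n", rows.Select(r => _formatter.Join(r, "\t")));
}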
-
I stopped reading at “(marketing)”. 😊 Marketing still received their bonus? Database compatibility layers are a whole different ball of yarn. Rarely do I ever have a second implementation, but I still like designing to the interface. (and keeping all dependency graphs one way)
-
Another corollary that a comment triggered,
Quote:
Create a facade around any API that you are using to protect your code from changes in the API.
Quote:
Create a facade around any API that you are using to protect your code from changes in the API.
A great idea that no one ever does. OK, not no one, but it is done more rarely than it should be. Also, there are practical limitations to it. We use a 3rd party component that has 100s of methods. We should wrap the component, but it's gonna take a while. :)
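For what it's worth, the facade doesn't have to cover all of the 100s of methods up front. A sketch, with an entirely hypothetical "PdfToolkit" standing in for the real component, wrapping only the operations the codebase actually calls:

// Stand-in for the real third-party component (hypothetical names throughout).
namespace PdfToolkit
{
    public enum PageSize { A4, Letter }

    public sealed class Client
    {
        public byte[] RenderHtml(string html, PageSize size) =>
            System.Text.Encoding.UTF8.GetBytes(html);   // placeholder behavior
    }
}

// The facade: the rest of the codebase depends on this interface, never on PdfToolkit types.
public interface IPdfExporter
{
    byte[] Export(string html);
}

public sealed class PdfToolkitExporter : IPdfExporter
{
    private readonly PdfToolkit.Client _client;   // vendor type appears only here

    public PdfToolkitExporter(PdfToolkit.Client client) => _client = client;

    // If the vendor's API changes (or the vendor changes), only this class is touched.
    public byte[] Export(string html) => _client.RenderHtml(html, PdfToolkit.PageSize.A4);
}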
-
jschell wrote:
Those are buzz words however. It is like saying that the code should be 'readable'.
Requiring dependencies be defined as interfaces simply means their implementations can be changed at any time, as long as they adhere to the contract of the interface. That makes it possible to inject mocks (for testing) and improve/extend the functionality of a dependency without having to rewrite the consumer. It's basic software engineering, not rocket science or a buzz word. /ravi
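A minimal sketch of that testing angle, with hypothetical IClock/OrderService names and a hand-rolled fake instead of a mocking library:

// Because OrderService depends on an interface, a test can swap in a fake
// implementation without touching OrderService itself.
using System;

public interface IClock
{
    DateTime UtcNow { get; }
}

public sealed class SystemClock : IClock        // production implementation
{
    public DateTime UtcNow => DateTime.UtcNow;
}

public sealed class OrderService
{
    private readonly IClock _clock;
    public OrderService(IClock clock) => _clock = clock;

    public bool IsWeekendOrder() =>
        _clock.UtcNow.DayOfWeek is DayOfWeek.Saturday or DayOfWeek.Sunday;
}

// In a unit test: a hand-rolled fake stands in for the real clock.
public sealed class FixedClock : IClock
{
    private readonly DateTime _now;
    public FixedClock(DateTime now) => _now = now;
    public DateTime UtcNow => _now;
}

// e.g. new OrderService(new FixedClock(new DateTime(2024, 6, 1))).IsWeekendOrder()  // true (a Saturday)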
-
Sorry, the code is owned by my company so can't be shared. :( /ravi
-
Peter Moore - Chicago wrote:
If you do this, you will be able to grow and scale your products effortlessly for decades - basically for as long as the programming language you use is supported
And have you actually done that? I have worked on multiple legacy products and never seen anything like that. At a minimum, I can't see it happening in any moderate to large business unless the following were true:
- A dedicated high-level architect (at least director level) whose job is technical, not marketing. That person enforces the design.
- The same architect for a very long time, with perhaps a couple of other architects trained solely by that individual.
- Very strict controls on bringing in new idioms/frameworks.
- Very likely, extensive business requirements to support multiple different configurations, from the beginning. That would ensure the initial design actually supports that.
What I have seen is that even in a company started with a known requirement to support multiple different implementations in rapid order (about a year), new hires decided to implement their own generalized interface on top of the original design without accounting for all the known (not hypothetical) variants, making the addition of the newer variants into a kludge of code to fit on top of what the new hires did.
-
I appreciate the feedback. Your summary is a good one and related to one of the key points made by the MS Unity Application Block PDF that I read right after my original post.
Quote:
When You Shouldn’t Use Dependency Injection
Dependency injection is not a silver bullet. There are reasons for not using it in your application, some of which are summarized in this section.
• Dependency injection can be overkill in a small application, introducing additional complexity and requirements that are not appropriate or useful.
• In a large application, it can make it harder to understand the code and what is going on because things happen in other places that you can’t immediately see, and yet they can fundamentally affect the bit of code you are trying to read. There are also the practical difficulties of browsing code like trying to find out what a typical implementation of the ITenantStore interface actually does. This is particularly relevant to junior developers and developers who are new to the code base or new to dependency injection.
It's interesting because "theoretically" I absolutely love the idea of DI, IoC and writing everything to an Interface. But if you like to look at code, it is quite terrible. There's a small(ish) project where this has been carried out; it has about 8 dependencies (all are Interfaces & the implementations are in separate DLL projects). To look at the code or debug-step the code, you need to create a VStudio solution with the 8 projects included, and then you can step into the code of one or the other. It's a lot of overhead. And, yes, I agree with what you said about team size too. If you have nine different people working on the items (1 on the main service and 1 on each of the 8 dependencies), then breaking it up is good.
-
If you don't use DI on a large project, how are you doing unit/integration tests? Sure, small project, whatever, but...
-
Ravi Bhavnani wrote:
It's basic software engineering, not rocket science or a buzz word.
Sigh... yes, I understand how it is supposed to work. I also understand in detail how Agile, Waterfall, project planning, work-life balance and even designs are supposed to work. The question is not about what should happen but whether it is actually happening. Does it provide actual value that offsets the complexity?