All high-level classes must depend only on Interfaces
-
I have been programming for more than 40 years and developing software for more than 30. I first heard about DI containers 15 years ago, and for the past 10 years I have used IODA as a principle to avoid them. Only integration classes are allowed to call other operations. If you put the whole logic into operation classes, with no logic in data classes or integration classes (only something like "rail switches" in integrations), and then separate input and output from logical operations, then you no longer need to discuss things like DI. I use derivation very rarely; composition of classes is my favorite. But I also use actions and closures. You don't have to hide every operation class behind an interface, but it can be helpful. Consider: by using interfaces you can no longer jump to the executed code just by pressing F12 in Visual Studio. There is no universal answer to this question; it just depends. What is IODA? See here: https://www.infoq.com/news/2015/05/ioda-architecture/
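To make the split concrete, here is a minimal sketch of how I mean it (the names and the tax rate are invented for illustration, nothing more):

```csharp
using System;

// Data: just values, no logic.
public sealed class Order
{
    public decimal Net { get; set; }
    public decimal Tax { get; set; }
}

// Operations: all the logic lives here, and operations never call each other.
public static class TaxCalculation
{
    public static void AddTax(Order order) => order.Tax = order.Net * 0.19m;
}

public static class OrderOutput
{
    public static void Print(Order order) =>
        Console.WriteLine($"Net {order.Net}, Tax {order.Tax}");
}

// Integration: no logic of its own - it only wires the operations together.
public static class ProcessOrder
{
    public static void Run(Order order)
    {
        TaxCalculation.AddTax(order);
        OrderOutput.Print(order);
    }
}
```

Because the integration contains no logic, there is nothing in it worth mocking, and the operations can be tested directly - which is why the DI question mostly disappears.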
Fantastic and interesting reply. Thanks so much for sharing your experience here. Interesting that you specifically avoid DI.
Ralf Peine 2023 wrote:
By using interfaces you can no longer jump to the executed code just by pressing F12 in Visual Studio.
That is definitely one of the problems I encounter with DI. It is a pain that the implementation is always somewhere else which you have to track down. I must say, though, that I downloaded Visual Studio 2022 to help with this, and now it can (as you would hope) navigate to the exact implementation -- even if the code is in another DLL it will decompile it to show you the code. Very good.
-
That's maybe the most important reason for DI. And your question is the perfect question to get to the bottom of the mystery of whether or not people are really using DI. :thumbsup:
I've had a big hand in building about 30-40 microservices which all tend to more or less follow this convention. I do think "favor composition over inheritance" is a very strong win. I do not appreciate needing 3 levels of abstraction to do anything. But you get into CQRS and suddenly that's your world. We only have one service that went real heavy on that, and it's the one I probably hate the most. It was NOT appreciated when I likened it to the spaghetti code of old, just served on a newer plate. It doesn't help one bit that the typical way this is done rules out any simpler way of doing things, by design. Scope keywords are wielded as cudgels to keep you in line. The "simple" ctors are all internal/private - so do the abstractions or do nothing. If I wanted all of that, I would use XML comments to note that *maybe* something should be done in the more complex way, but otherwise not design things so as to make it very difficult to do them simply (i.e., ditch all this scoping).
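For instance, the pattern I mean looks roughly like this (a hypothetical sketch, not any specific library's API):

```csharp
// The handler's "simple" constructor is internal, so from outside the
// assembly you cannot just new it up - you must go through the
// abstraction and whatever container registers it.
public sealed record CreateOrder(string Sku, int Quantity);

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

public sealed class CreateOrderHandler : ICommandHandler<CreateOrder>
{
    internal CreateOrderHandler() { }   // do the abstractions or do nothing

    public void Handle(CreateOrder command) { /* the actual logic */ }
}
```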
-
jschell wrote:
Does it provide actual value that offsets the complexity?
Yes, I believe it provides several benefits (at the cost of slightly increasing the size of the project and lines of code):
- Contractual obligation: Interfaces define a contract that classes must adhere to. By requiring a class to implement an interface, you ensure that it provides specific functionalities or behaviors as defined by that interface. This promotes consistency and predictability in your codebase.
- Polymorphism: Interfaces enable polymorphic behavior in C#. When a class implements an interface, instances of that class can be treated as instances of the interface. This allows for greater flexibility in designing systems where different objects can be used interchangeably based on their common interface.
- Code reusability: By implementing interfaces, classes can share common functionality without being directly related in terms of inheritance. This promotes code reuse and modular design, as multiple classes can implement the same interface to provide similar behavior.
- Decoupling and DI: Interfaces facilitate loose coupling between components. Code that depends on interfaces is not tied to specific implementations, making it easier to change or extend functionality without affecting other parts of the codebase. This also enables dependency injection, where objects are passed into a class via interfaces, allowing for easier testing and maintenance (see the sketch after this list).
- Design patterns: Interfaces are integral to many design patterns such as Strategy, Observer and Factory. Requiring classes to implement interfaces enables the use of these patterns, leading to more maintainable and scalable code.
- Documentation and readability: Interfaces serve as documentation for the expected behavior of classes. When a class implements an interface, it's clear what functionality it provides without needing to inspect the implementation details ("what vs. how"). This improves code readability and makes it easier for devs to understand and work with the codebase.
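A compact illustration of the first four points (a sketch with invented names, not code from any real codebase):

```csharp
using System;

// The contract: any sink must provide Send.
public interface IMessageSink
{
    void Send(string text);
}

// Two unrelated classes satisfy the same contract...
public sealed class EmailSink : IMessageSink
{
    public void Send(string text) => Console.WriteLine($"email: {text}");
}

public sealed class SmsSink : IMessageSink
{
    public void Send(string text) => Console.WriteLine($"sms: {text}");
}

// ...so the consumer depends only on the interface and gets either one
// injected - the essence of DI-friendly decoupling.
public sealed class Notifier
{
    private readonly IMessageSink _sink;
    public Notifier(IMessageSink sink) => _sink = sink;
    public void Notify(string text) => _sink.Send(text);
}

// new Notifier(new EmailSink()).Notify("hi");   // interchangeable at the
// new Notifier(new SmsSink()).Notify("hi");     // composition root
```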
/ravi
-
jschell wrote:
Far as I can tell you are still telling me about what it is supposed to do.
Sorry, I don't understand. Software engineering best practices don't magically do anything by themselves. Developers have to use them correctly in order to benefit from them. Your statement is a bit like saying "Object oriented design has no benefits because it doesn't do what it's supposed to do." If you don't use object oriented programming principles correctly, you're not going to enjoy any of its benefits. It's the same with agile development practices (which IMHO very few organizations follow correctly). /ravi
-
Whoever gave that directive is a man after my own heart. It's extreme, to be sure. Realistic to literally follow 100% of the time? Probably not. But as an aspiration, a philosophy - absolutely. If you do this, you will be able to grow and scale your products effortlessly for decades - basically for as long as the programming language you use is supported - without a rewrite. Nuget packages, even entire application frameworks will come and go, yet your core code will be snug as a bug in a rug, wrapped in layers of abstraction that shield it from the chaos. When your favorite library is deprecated, revealed to have a critical vulnerability, or the vendor jacks up the price on you, you scoff at how simple it is to assign someone to find a replacement and write the wrapper layer - *completely independently of everyone else*. Your customer tells you the application you designed for Azure now needs to run on AWS? "No problem", you say, "give me a week." Microsoft decides to make 100 new breaking changes to ASP.NET Core? Bah! The upgrade takes an hour. You will never be stuck relying on proprietary technology outside of your control ever again. The term "technical debt" won't even be part of your vocabulary. So yes. Those who know, do this.
Spot on, Peter. /ravi
-
Peter Moore - Chicago wrote:
If you do this, you will be able to grow and scale your products effortlessly for decades - basically for as long as the programming language you use is supported
And have you actually done that? I have worked on multiple legacy products and never seen anything like that. At a minimum I can't see it happening in any moderate to large business unless the following was true:
- A dedicated high-level architect (at least director level) whose job is technical, not marketing, and who enforces the design.
- The same architect for a very long time, with perhaps a couple of other architects trained solely by that individual.
- Very strict controls on bringing in new idioms/frameworks.
- Very likely, extensive business requirements to support multiple different configurations, from the beginning. That would ensure the initial design actually supports them.
What I have seen is that even in a company which started with a known requirement to support multiple different implementations in rapid order (about a year), new hires decided to implement their own generalized interface on top of the original design without accounting for all the known (not hypothetical) variants, making the addition of the newer variants a kludge of code sitting on top of what the new hires did.
jschell wrote:
At a minimum I can't see it happening in any moderate to large business unless the following was true: a dedicated high-level architect (at least director level) whose job is technical, not marketing, and who enforces the design.
You make a good point. It takes an experienced technical team to lay down guidelines like these. Over the past 20 years I've worked mostly at early stage companies with very experienced small teams, each of which was tasked with implementing portions of a larger complex product. Because requirements are almost always less known early in a product's evolution, using the technique of enforcing interface definitions allows the code to naturally evolve as the requirements change and become more solidified. Coupled with a strict regimen of writing automated unit and integration tests, defensive programming designs like these increase the chances of developing a complex app with fewer bugs. /ravi
-
If you don't do DI on a large project, how are you doing unit/integration tests? Sure, small project, whatever, but...
-
raddevus wrote:
how many developers really know that concept
I would have assumed devs with some experience would be aware of this. In our shop it's a given, because you can't write a unit test with a mocked dependency without using this paradigm. :) It's also one of our pre-interview phone screen questions. There's another subtle aspect to this, though: when using MEF, you can encounter a run-time failure (an error constructing a service class) when any dependency in the chain fails to construct because of a missing [Export] attribute on a class in the dependency hierarchy. I didn't want our devs to have to check for this manually, so I wrote a tool that reflects over the codebase and identifies these broken classes.
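The core idea looks roughly like this - a simplified sketch, not the actual tool (it assumes attribute-based MEF from System.ComponentModel.Composition, scans a single assembly, and ignores string contract names and open generics):

```csharp
using System;
using System.ComponentModel.Composition;
using System.Linq;
using System.Reflection;

public static class ExportChecker
{
    // Flag contract types that are imported somewhere in the assembly
    // but never exported - the classes MEF will fail to construct.
    public static void Check(Assembly assembly)
    {
        var types = assembly.GetTypes();

        // Every exported contract: the explicit contract type if given,
        // otherwise the decorated type itself.
        var exported = types
            .Where(t => t.IsDefined(typeof(ExportAttribute), inherit: false))
            .SelectMany(t => t.GetCustomAttributes<ExportAttribute>()
                              .Select(a => a.ContractType ?? t))
            .ToHashSet();

        // Every imported contract: [ImportingConstructor] parameters
        // plus [Import] properties.
        var imported = types.SelectMany(t =>
            t.GetConstructors()
             .Where(c => c.IsDefined(typeof(ImportingConstructorAttribute)))
             .SelectMany(c => c.GetParameters().Select(p => p.ParameterType))
             .Concat(t.GetProperties(BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic)
                      .Where(p => p.IsDefined(typeof(ImportAttribute)))
                      .Select(p => p.PropertyType)));

        foreach (var contract in imported.Distinct().Where(c => !exported.Contains(c)))
            Console.WriteLine($"No [Export] found for imported contract: {contract.FullName}");
    }
}
```

/ravi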
"because you can't write a unit test with a mocked dependency without using this paradigm." An alternative is to use generic programming aka "static polymorphism", and inject dependencies via template parameters. No need for interfaces. Not saying this is a good choice, but it is certainly a choice.
-
"because you can't write a unit test with a mocked dependency without using this paradigm." An alternative is to use generic programming aka "static polymorphism", and inject dependencies via template parameters. No need for interfaces. Not saying this is a good choice, but it is certainly a choice.
That can lead to run-time errors, because you have to ensure you call the correct overload with the correctly mocked dependency for every method you want to test. It's safer to inject the required mocks (once) into a non-overloaded constructor of the system being tested, because those mocks are then guaranteed to be used for all the methods being tested.
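In other words, something like this (a sketch assuming xUnit and Moq; the names are invented):

```csharp
using Moq;
using Xunit;

public interface IRateProvider { decimal GetRate(string currency); }

public sealed class PriceService
{
    private readonly IRateProvider _rates;
    public PriceService(IRateProvider rates) => _rates = rates;   // the only ctor

    public decimal ToEur(decimal amount, string currency) =>
        amount * _rates.GetRate(currency);
}

public class PriceServiceTests
{
    [Fact]
    public void ToEur_uses_the_injected_rate()
    {
        var rates = new Mock<IRateProvider>();
        rates.Setup(r => r.GetRate("USD")).Returns(0.9m);

        var service = new PriceService(rates.Object);   // mock injected once

        Assert.Equal(90m, service.ToEur(100m, "USD"));  // every method under test
    }                                                   // uses that same mock
}
```

/ravi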
-
Quote:
Create a facade around any API that you are using to protect your code from changes in the API.
A great idea that no one ever does. OK, not no one, but it is done more rarely than it should be. Also, there are practical limitations to it: we use a 3rd-party component that has hundreds of methods. We should wrap the component, but it's gonna take a while. :)
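The idea itself is simple enough - wrap only the slice you actually call, not all of the hundreds of methods. A sketch (assuming System.Text.Json happens to be the current backing library; the names are made up):

```csharp
// Callers depend on this narrow facade, never on the vendor types.
public interface IJsonSerializer
{
    string Serialize<T>(T value);
    T? Deserialize<T>(string json);
}

// The only file that mentions the underlying library; swapping vendors
// means rewriting just this class.
public sealed class SystemTextJsonSerializer : IJsonSerializer
{
    public string Serialize<T>(T value) =>
        System.Text.Json.JsonSerializer.Serialize(value);

    public T? Deserialize<T>(string json) =>
        System.Text.Json.JsonSerializer.Deserialize<T>(json);
}
```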
-
Why? How difficult is it to adopt the old semantics of a given dependency and shim in a new replacement library when it becomes necessary to jump ship? I have done this a few times, though not often. Seems like deferring the pain ('YAGNI') until it becomes necessary is optimal overall.
-
hpcoder2 wrote:
Seems like deferring the pain ('YAGNI') until it becomes necessary is optimal overall
Yeah, exactly. You can either have:
1) Pain now (all interfaces)
2) Pain later (which may never occur)
I figure take the pain later - because a lot of software rots for other reasons and is completely rewritten anyway, so you may never reach the "pain later" stage at all. As a matter of fact, I've rarely seen it in 35 years of software development. And when another manager comes in, they think something totally different and wipe away the "old" code anyway, even if it is extensible thanks to all those interfaces.
-
Ravi Bhavnani wrote:
That can lead to run-time errors, because you have to ensure you call the correct overload with the correctly mocked dependency for every method you want to test.
The compiler takes care of calling the correct overload. I really don't understand the problem.
Pros of the generic solution:
- no virtual function overhead
Con:
- the interface contract is more implicit
Other than that, both approaches are about equally as complex and difficult to debug. Better if mocking is not used unless necessary.
-
hpcoder2 wrote:
Better if mocking is not used unless necessary.
How would you unit test a service without mocking its dependencies? /ravi
-
Ravi Bhavnani wrote:
Software engineering best practices don't magically do anything by themselves. Developers have to use them correctly in order to benefit from them.
Which again is just stating how it is supposed to work. As I asked you in the very first post that I made ... has anyone measured - objective measurements - how successful that is? You are claiming that it is successful. Not that it could be, but rather that it is. So how did you measure that?
-
Ravi Bhavnani wrote:
How would you unit test a service without mocking its dependencies?
Quite easily. Options include:
1. Black box testing - test the assembled class with its dependencies, based on whatever attributes are publicly visible. 90% of the time this is all that is needed.
2. White box testing - test the assembled class with its dependencies, but also declare internal state as protected, and have the test fixture inherit from the class being tested.
3. White box testing - instead of declaring the internal attributes protected, declare an internal class Test and make it a friend of the class being tested. The actual implementation of the test class can be deferred to the unit test code.
All of the above I have used in a unit testing environment, and they are way simpler to understand, debug and otherwise maintain than dependency-injected/mocked code. The only time mocking is really needed is when it is impractical to instantiate the dependency in the CI environment. Examples might include a full database, or something that depends on network resources.
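Option 2 in C# terms, as a sketch (assuming xUnit; the names are invented) - the fixture inherits the class under test, so protected state is visible without any interface or mock:

```csharp
using Xunit;

public class PriceCalculator
{
    protected decimal LastDiscount { get; private set; }   // internal state

    public decimal Apply(decimal price, decimal percent)
    {
        LastDiscount = price * percent / 100m;
        return price - LastDiscount;
    }
}

// White box: the fixture IS-A PriceCalculator, so it can assert on the
// protected member directly.
public class PriceCalculatorTests : PriceCalculator
{
    [Fact]
    public void Apply_subtracts_and_records_the_discount()
    {
        var result = Apply(200m, 10m);

        Assert.Equal(180m, result);
        Assert.Equal(20m, LastDiscount);
    }
}
```

The closest C# analog to option 3 (C++ friend classes) would be keeping the state internal and granting the test assembly access via [assembly: InternalsVisibleTo("Tests")].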
-
jschell wrote:
You are claiming that it is successful. Not that it could be but rather that it is. So how did you measure that?
By measuring our sprint velocity and bug counts. /ravi
-
hpcoder2 wrote:
White box testing - instead of declaring the internal attributes protected, declare an internal class Test and make it a friend of the class being tested.
IMHO, a member friendship violates Liskov.
hpcoder2 wrote:
The only time mocking is really needed is when it is impractical to instantiate the dependency in the CI environment. Examples might include a full database, or something that depends on network resources.
And that's often the case when testing enterprise systems that include several cooperating independent subsystems. That's the case at my shop. /ravi
-
I have no problem with the independent subsystems being mocked. There are relatively few of these. In examples I've seen, every single class implements an interface, and every interacting class is mocked, leading to triple the number of classes and a nightmare to read and/or debug the code. Way too much! Re friendship violating Liskov: then so much the worse for Liskov. Friendship has its place and uses, but shouldn't be overused - just like global variables, mutable members and dependency injection.
-
hpcoder2 wrote:
I have no problem with the independent subsystems being mocked. There are relatively few of these.
Right. In our codebase, services tend to have at most about 3-4 dependencies (independent services).
hpcoder2 wrote:
In examples I've seen, every single class implements an interface
Ouch. I agree that's overkill. /ravi
-
Ravi Bhavnani wrote:
and bug counts.
You originally responded (quoted) to the following: "All high-level classes must depend only on Interfaces". Are you claiming that interfaces, and nothing else, reduced bug counts? As opposed to - and not the same as - a large number of other code and process (not just coding) methods? And what was the sum total of those reduced bug counts? And what was your time period and measurement? So, for example: you started with no processes in place in January 2021, having measured your production bug rate for the year before that (back to January 2020). Then you implemented the new processes, and now your production bug rate is 50% less? Or 90% less? Specifically, what are those numbers? (I might note that I spent 15 years doing significant/principal work in process control procedures, so I am in fact rather knowledgeable in the theory, the practice, and the reality of doing this.)