OO Software design epiphany - it might not matter
-
My version of a survey - I'm interested in actual experience. I'd post this on SO, but there are so many anal-retentive ivory tower types there... I digress. FWIW, this is a little bit of soul searching, so take it as an honest question/statement/search - and hell, suggest a book for me to read. I live mostly in the embedded world where things are tightly bound to hardware. That might be the problem. It might be that I'm trying to apply OOD to something that just doesn't warrant it. That said, I've been developing software (of all types) for 40 years, and at the application level I have *yet* to see any significant code re-use other than copy/paste.

I'm a (was?) big believer in OO design. I believe in Abstraction and Encapsulation; Inheritance, not so much; and Polymorphism has (had) hope. But I believe OO suffers badly from being too general for what we do as developers. In no particular order:

Abstraction - I like it. Hide the details. So far so good. The problem is that most people make analogies to objects that don't have an elephanting thing to do with real-world software development. It sounds good; it just doesn't work.

Encapsulation - I love it. Hide the details, avoid spaghetti code, methods work with a blob of data.

Inheritance - a plague, a virus, useless. Most examples are trivial. Give me one complex application example and it all falls down.

Polymorphism - meh. It's cute. Sure, I can create multiple methods to work with different parameters, but at the application level it is not that groundbreaking.

So, in my project I've just spent 3 days (and some nights) trying to make some code generic and OOD and whatnot, and it's not going to happen. The more I try to make the class behave in a couple of different situations, the more of a boondoggle it becomes - which triggered me at 4 am: just copy the code to another function and hard-code everything. Hence the question.

FWIW, the code I am modifying has not changed in 10 years. So why bother making it general? I spend a lot of time trying to make code flexible (thinking long-term support, etc.), and I think I'm wasting my time. Sort of rambling here - I'd like some practical, pragmatic feedback.
Charlie Gilley
Stuck in a dysfunctional matrix from which I must escape... "Where liberty dwells, there is my country." B. Franklin, 1783 “They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.”
fwiw, the code I am modifying has not changed in 10 years. So, why bother making it general?
Well, if it has not changed in 10 years I would say it is general enough :-D I am in sort of the same place as you. Mostly embedded development, and whenever I tried using OO I mostly failed. Usually because I decide to make a class for something that will only have one object instance.
-
NelsonGoncalves wrote:
Usually because I decide to make a class for something that will only have one object instance.
When I see that a class is instantiated only once, like some sort of singleton, I always think it might be wiser to drop the class and put everything in a namespace.
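A minimal sketch of that trade-off in C++, with made-up names - the same one-instance "logger" written first as a singleton class, then as a namespace of free functions:

```cpp
#include <cstdio>

// Singleton-style class: one instance, ceremony everywhere it is used.
class Logger {
public:
    static Logger& instance() { static Logger log; return log; }
    void write(const char* msg) { std::printf("%s\n", msg); }
private:
    Logger() = default;
};

// Namespace alternative: same encapsulation, no object to haul around.
namespace logger {
    // Any state can live as a static in the .cpp file; callers never see it.
    void write(const char* msg) { std::printf("%s\n", msg); }
}

int main() {
    Logger::instance().write("via singleton class");
    logger::write("via namespace");
}
```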
-
Yes, I agree. But I start with the "everything is an object" mentality, and only later do I recognize that, for my applications, that is most of the time the wrong way to think. I could spend an hour thinking about the design, but that would just prevent me from later spending two days fixing bad design decisions ;P
-
I know where you are coming from, but I would think *it depends* on what you are working on. Some projects and cases may merit it and some not. Although I don't work in embedded, I would assume it's less important there. As a 'business app guy', however, I find it great - though if I really looked at it, I probably don't need it as much as I think I do. But it's the way I roll now and I quite like it :) GL
-
Got thrown into it about a year and a half ago. It was bound to happen, working in a company that has 80% automotive contracts.
GCS d--(d+) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
I work in the company that kind of created it, so avoiding it completely is not possible. :-D
-
I do embedded firmware full time. We use all of the OO features you have mentioned. For the most part this has been a positive effort, i.e. it was worth it. But lately I think we have taken it too far: the code is too difficult to follow and debug, and our performance seems really bad. We are now in the process of going back (yet again) to profile, study and do performance measurements. I think it is a case of having too much of a good thing. We have overused these features on an embedded design with limited memory and tight timing requirements.
-
My opinion (based on experience): every implementation of Polymorphism, on every team that used it, goes sideways pretty quickly and becomes quite a chore to maintain. OOP principles are generally sound, but I have not even considered using Polymorphism in 20 years, mainly because it's harder to debug. BUG: "The Investor customer is disturbing the customerChecking balance because the customer is also an investor" - resulting in the loss of his mortgage. Keep it simple, keep it moving.
-
Firmware development follows different rules than software; there's no circling around it. Software should not depend on the underlying implementation details; firmware *is* the underlying implementation, and it's all about details. Some things should be kept as agnostic as possible - e.g. the main state machine should not depend on the exact make of the various hardware components, so interfaces to control hardware should be generic (peripheral_On, peripheral_Off, peripheral_Sleep, peripheral_Send...) - but all the rest cannot. One component may be turned on/off via the combination of two pins while another, identical in every other aspect, may require timed pulses on a single pin and follow a protocol based on several outputs.
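A hedged sketch of that split in C++ - the pin numbers, delays and HAL calls below are invented stand-ins, not a real BSP API:

```cpp
#include <cstdint>
#include <cstdio>

// Stand-ins for whatever the real board support package provides.
static void gpio_write(int pin, bool level) { std::printf("pin %d -> %d\n", pin, level); }
static void delay_us(uint32_t us) { (void)us; /* busy-wait on real hardware */ }

// The state machine sees only this generic interface.
struct Peripheral {
    virtual void peripheral_On()  = 0;
    virtual void peripheral_Off() = 0;
    virtual ~Peripheral() = default;
};

// One part is switched by the combination of two pins...
class TwoPinDevice : public Peripheral {
    int enA, enB;
public:
    TwoPinDevice(int a, int b) : enA(a), enB(b) {}
    void peripheral_On()  override { gpio_write(enA, true);  gpio_write(enB, true); }
    void peripheral_Off() override { gpio_write(enA, false); gpio_write(enB, false); }
};

// ...while an otherwise identical part wants a timed pulse on a single pin.
class PulsedDevice : public Peripheral {
    int ctrl;
public:
    explicit PulsedDevice(int pin) : ctrl(pin) {}
    void peripheral_On()  override { gpio_write(ctrl, true); delay_us(50);  gpio_write(ctrl, false); }
    void peripheral_Off() override { gpio_write(ctrl, true); delay_us(200); gpio_write(ctrl, false); }
};

int main() {
    TwoPinDevice fan(3, 4);
    PulsedDevice pump(7);
    fan.peripheral_On();
    pump.peripheral_On();
}
```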
-
Yeah. Reality bites. At some point we try to abstract away all the hardware bit-stuffing, but it's still there ruining all the nice plans - all it takes is a status bit that auto-resets when the other bits in the byte are read, and everything is ruined.
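For anyone who hasn't hit this one: here is a host-side simulation of the gotcha (the register behavior is modeled with an ordinary function; names and flag positions are made up):

```cpp
#include <cstdint>
#include <cstdio>

// Simulation of a read-to-clear status register, common on UARTs etc.
uint8_t status_bits = 0;
uint8_t read_status() {          // reading returns AND clears all flags
    uint8_t v = status_bits;
    status_bits = 0;
    return v;
}
constexpr uint8_t RX_READY = 1 << 0;
constexpr uint8_t TX_DONE  = 1 << 1;

int main() {
    // BUG: two reads. The first read clears the whole register,
    // so the TX_DONE test sees zero even though the flag was set.
    status_bits = RX_READY | TX_DONE;
    bool broken = (read_status() & RX_READY) && (read_status() & TX_DONE);

    // Fix: read once, then test the snapshot.
    status_bits = RX_READY | TX_DONE;
    uint8_t snap = read_status();
    bool correct = (snap & RX_READY) && (snap & TX_DONE);

    std::printf("broken=%d correct=%d\n", broken, correct);  // broken=0 correct=1
}
```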
-
You are quite correct, IMO. First, forget about the promise of re-use when it comes to objects. Nobody ever does anything in the real world twice exactly the same way to merit any re-use benefit. In order of importance, to me:

1. Encapsulation - keeps stuff organized.

2. Interfaces - define what the class is expected to implement. I also use empty interfaces simply to indicate that the class supports some other behavior. I could use attributes for that as well, but interfaces are sometimes more convenient when dealing with a collection of classes that all support the same thing and there are methods that operate on that - hence I can pass in "IAuditable", for example.

3. Inheritance/Abstraction - mostly useless, but there are times when I want to pull out common properties among a set of logical classes. Note that I don't consider this to be true abstraction; it's using inheritance to define common properties and behaviors.

4. Polymorphism - useful, but less so now that optional default parameters do the work polymorphic methods were often used for.

IMO, the reality of "how useful is OO" falls quite short of the promise of OO.
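The "empty marker interface" idea in point 2 is .NET-flavored; a rough C++ rendering, with invented names, looks like this:

```cpp
#include <cstdio>
#include <vector>

// IAuditable carries no behavior; it only tags classes so one routine
// can accept any mix of them.
struct IAuditable {
    virtual ~IAuditable() = default;   // deliberately empty: a pure marker
};

struct Invoice : IAuditable { /* ... */ };
struct Payment : IAuditable { /* ... */ };

// Methods can now operate on "anything auditable" regardless of concrete type.
void audit(const std::vector<const IAuditable*>& items) {
    std::printf("auditing %zu items\n", items.size());
}

int main() {
    Invoice inv;
    Payment pay;
    audit({ &inv, &pay });
}
```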
Latest Articles:
Client-Side Type-Based Publisher/Subscriber, Exploring Synchronous, "Event-ed", and Worker Thread Subscriptions
-
I'd agree. Sounds like function interfaces with better, purposeful naming of the why, rather than a focus on the what & how.
-
I do embedded firmware full time. We use all of the OO features you have mentioned. [...] We have overused these features on an embedded design with limited memory and tight timing requirements.
Just wondering: are you implementing a higher-level design model provided by others, doing the higher-level design model yourself, or implementing the design idea directly? The source of the design often impacts the firmware implementation approach (i.e. if/how Model Based Systems Engineering, MBSE, is being done).
-
The real problem with OO programming - and possibly design - is that everyone knows the parts, but most don't put them together well. OO has worked well for years, and it is visible at the low level. Consider the storage device. It can be a floppy (remember those?), a hard drive, an SSD, a write-once CD/DVD, online storage, and so many more. But the block storage protocol is applied to all. The driver converts the hardware to an object that responds to the same inputs no matter what is hidden behind the interface. The device is the object. The functionality is abstracted and hidden. The behavior is locked behind the wall. And new types of devices are added invisibly to the higher levels of code by following the interface rules. Enhancements to the rules are added all the time by extending the interface for things that were not previously considered.

The trick is in creating the specific, but only working with the generic. The failure is in creating specific necessary behavior at the derived level that cannot be used at the generic level. Take the hierarchy Animal -> Quadruped -> Horse. A horse whinnies and gallops, but to implement them as Horse features means they must be addressed as features of a horse. Instead: Animal->Speak( Friendly | Loudly | Fearfully ), Animal->Move( Slowly | Quickly ). Now we can say Animal->Speak(Friendly), Animal->Move(Quickly). We can add Dog and Cat and implement the interface described, then create the item, put it in Animal, and use it without changing the code at all.

The biggest trick of all is to construct the Animal (or derived type) with as much of the descriptive information as possible, then use it with as little information as possible. Compare to the original storage idea: File Create and Open take all kinds of descriptive information, but the actions on the file take only the necessary variations - File->Read(howMuch, toWhere), File->Seek(toPosition, fromWhere).

It isn't really the tenets of OO that are in question, it is the organization. Put them together and use them well and they provide an easier path to expansion, adaptation and improvement. Print is a great example of Polymorphism: Print(integer), Print(string), Print(float), Print(format, arg1, arg2, arg3, ...), Print(Animal). OO is more than a hammer. It is a whole toolbox that can be used to build better tools and expandable structures.
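A short C++ sketch of the Animal idea above - construct the specific, use only the generic (the enum values and animal noises are illustrative):

```cpp
#include <cstdio>
#include <memory>
#include <vector>

enum class Tone { Friendly, Loudly, Fearfully };
enum class Pace { Slowly, Quickly };

struct Animal {
    virtual void Speak(Tone t) const = 0;
    virtual void Move(Pace p) const = 0;
    virtual ~Animal() = default;
};

struct Horse : Animal {
    // Whinny and gallop live here, but callers never need those names.
    void Speak(Tone) const override { std::puts("whinny"); }
    void Move(Pace p) const override { std::puts(p == Pace::Quickly ? "gallop" : "walk"); }
};

struct Dog : Animal {
    void Speak(Tone t) const override { std::puts(t == Tone::Loudly ? "BARK" : "woof"); }
    void Move(Pace) const override { std::puts("trot"); }
};

int main() {
    // Create the specific item, put it in Animal, use it without changing the code.
    std::vector<std::unique_ptr<Animal>> zoo;
    zoo.push_back(std::make_unique<Horse>());
    zoo.push_back(std::make_unique<Dog>());
    for (const auto& a : zoo) { a->Speak(Tone::Friendly); a->Move(Pace::Quickly); }
}
```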
-
I actually have a complex example of when inheritance should have been used, but wasn't. We have a Posi-Pay application that basically takes check and deposit data from a bank and converts it to something we can use. The program has a 15k+ line switch statement, one case for each bank format, and much of the code is duplicated among the case statements. The licensing code is in a separate area, but it is also a giant switch statement. I long for the day when I'll be given the time to make each bank format a separate class. The base class would have all the conversion functions needed, and the switch statements would become one line of code each: BankFormat.Convert() and BankFormat.GetLicense().

Bond
Keep all things as simple as possible, but no simpler. -said someone, somewhere
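A hedged sketch of the refactor described above, rendered in C++ for consistency with the rest of the thread - every name here is invented:

```cpp
#include <memory>
#include <stdexcept>
#include <string>

// The base class owns the shared conversion helpers; each bank format
// overrides only what differs, and the giant switch collapses.
class BankFormat {
public:
    virtual ~BankFormat() = default;
    virtual std::string Convert(const std::string& raw) = 0;
    virtual std::string GetLicense() = 0;
protected:
    // Common helper formerly duplicated across case statements (stub here).
    std::string normalizeAmounts(const std::string& s) { return s; }
};

class AcmeBank : public BankFormat {
public:
    std::string Convert(const std::string& raw) override { return normalizeAmounts(raw); }
    std::string GetLicense() override { return "ACME-LICENSE-KEY"; }
};

// A small factory replaces the switch: one lookup, then virtual dispatch.
std::unique_ptr<BankFormat> makeFormat(const std::string& bank) {
    if (bank == "acme") return std::make_unique<AcmeBank>();
    throw std::runtime_error("unknown bank format");
}

int main() {
    auto fmt = makeFormat("acme");
    std::string converted = fmt->Convert("raw check data");
}
```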
-
I agree. Behind the OO curtain lies a bunch of functions. I have made and used them, but always with my code running behind the scenes. I work 99% in Windows. How much simpler could this function be? SEND_EMAIL(STRUCTURE)
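For concreteness, a sketch of that shape with invented names - one plain struct carries everything the call needs, one function does the work:

```cpp
#include <cstdio>
#include <string>

struct EmailMsg {
    std::string to;
    std::string subject;
    std::string body;
};

// Hypothetical wrapper: internally it might still call an OO mail API,
// but the caller sees a single procedural entry point.
bool send_email(const EmailMsg& m) {
    std::printf("to=%s subject=%s\n", m.to.c_str(), m.subject.c_str());
    return true;
}

int main() {
    send_email({ "ops@example.com", "nightly build", "all green" });
}
```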
-
Just wondering: are you implementing a higher-level design model provided by others, doing the higher-level design model yourself, or implementing the design idea directly?
We build everything from the ground up, including custom ASICs, and we have a large development team - maybe in the hundreds. But what seems to be happening as a trend is an overuse of language features; from another perspective, the solutions are over-engineered. I have seen developers resort to techniques that are perfectly fine in a PC application environment where you have gigahertz processors and gigs of RAM. But in a custom, hardened, real-time, constrained embedded system the environment is different. While the tools will support C++ templates, all the OO features, etc., there is an art to balancing design freedom against efficient execution. You can definitely use OO features in this environment - we have done it successfully before - but there is a risk of going too far, and again this is where experience comes into play.
-
I work in the company that kind of created it, so avoiding it completely is not possible. :-D
LOL - so you probably know a friend of mine; he moved to Germany several years ago and works there too :D
-
In embedded environments I'm not sure a lot of the OOD stuff makes sense.

Encapsulation - definitely, as it hides the intricacies of the hardware, but be careful you don't introduce performance penalties by doing so.

Abstraction - goes along with encapsulation in this case, with the same caveats.

Inheritance - depends on the application. Don't use it just because it's available. I have used inheritance successfully, but going more than two or three levels deep simply invites inefficiencies and unreadable code. Consider that most unreadable code is unreadable because the control flow is at the wrong level, and then consider that inheritance is a form of control flow.

Polymorphism - use it or don't, as needed.

In other words, you don't have to use all of OOD if it doesn't make sense. Use what makes sense.
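On the "encapsulation without performance penalties" point, a minimal C++ sketch - the register address is faked with a local variable so it runs on a host; on target you would pass a memory-mapped address. The wrapper is a thin inline layer, so an optimizing compiler reduces each call to the same read-modify-write a raw pointer would produce:

```cpp
#include <cstdint>

// Thin, inline hardware wrapper: encapsulation with no call overhead at -O2.
class StatusLed {
public:
    StatusLed(volatile uint32_t* reg, uint32_t bit)
        : reg_(reg), mask_(1u << bit) {}
    void on()  { *reg_ = *reg_ |  mask_; }
    void off() { *reg_ = *reg_ & ~mask_; }
private:
    volatile uint32_t* reg_;
    uint32_t mask_;
};

int main() {
    volatile uint32_t fake_reg = 0;   // stand-in for a memory-mapped register
    StatusLed led(&fake_reg, 3);
    led.on();
    led.off();
}
```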
-
I still mainly like OO stuff (as a .NET/C# dev), but I can't say whether your project would benefit from the refactoring you considered. If your gut says no, then I believe you.

I come across useful inheritance examples occasionally, so I don't understand the inheritance hate. Like any language feature it can be abused. My examples are often abstract classes that require another class to implement/inherit abstract methods. For example, I work on stuff that I want to work equally well against a local file system and Azure blob storage. Some common behavior goes in an abstract base class, followed by subclass capabilities that are environment-specific. From the .NET BCL, Streams follow this pattern: Stream is abstract, with a gaggle of concrete implementations for different situations. I don't see what's wrong with that. Now, I've seen inheritance diagrams like the old C++ MFC stuff that were over-the-top complicated. I think they had reasons at the time, but that was then.

I would say that most of the intellectual heavy lifting in app development (in my world) is database design and data modeling - understanding stakeholder requirements and translating them into relational models. I would agree this doesn't really fit an OO paradigm very well, and I don't see that it needs to. For example, I don't think inheritance translates very well into relational terms except in one narrow situation: I usually have a base class to define user/timestamp columns like `DateCreated`, `CreatedBy` and so on, and my model classes (tables) inherit from that base class to get those common properties. But that's really it.

In the app space, it seems like 80% of dev effort goes into building CRUD experiences (forms and list management) of one kind or another. The other 20% is "report creation" of some kind, in my world. I don't think there's a perfect distinction between the two, but I would agree there's not really any OO magic in this layer. OO doesn't really make this experience better, IMO. We do have endless battles over UI frameworks and ORM layers. (I'm getting behind Blazor in the web space, and I have my own ORM opinions for sure! Another topic!)

I'm not sure why anyone would challenge the need for abstraction and encapsulation, but there are certainly trade-offs and a need for balance. I like this talk: The Wet Codebase by Dan Abramov - Deconstruct
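The storage example is .NET/C#; here is a rough rendering of the same shape in C++ for consistency with the rest of the thread. All names are illustrative, and a toy in-memory backend stands in for the real file-system and Azure implementations:

```cpp
#include <map>
#include <string>
#include <vector>

// Common behavior sits in the abstract base, written once against the
// abstract operations; each backend overrides only what is environment-specific.
class BlobStore {
public:
    virtual ~BlobStore() = default;
    void copy(const std::string& from, const std::string& to) {
        write(to, read(from));   // shared logic, any backend
    }
protected:
    virtual std::vector<char> read(const std::string& key) = 0;
    virtual void write(const std::string& key, const std::vector<char>& data) = 0;
};

// Real subclasses would be LocalFileStore (fopen/fread...) and
// AzureBlobStore (REST calls); this toy backend keeps the demo runnable.
class MemoryStore : public BlobStore {
    std::map<std::string, std::vector<char>> data;
    std::vector<char> read(const std::string& key) override { return data[key]; }
    void write(const std::string& key, const std::vector<char>& d) override { data[key] = d; }
};

int main() {
    MemoryStore store;
    store.copy("a", "b");   // exercises the shared logic against one backend
}
```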
-
LOL - so you probably know a friend of mine; he moved to Germany several years ago and works there too :D
Everybody in Germany knows someone who works there :-D
-
Hello Charlie, I have also been working in embedded for some time and always wanted to apply the OOP approach to my designs (unsuccessfully), mostly for lack of proper tools. In the last years I found Python, and my point of view has changed. At this moment I work for a company that makes battery chargers; most of them have been migrated from totally analog control to microcontrolled control, and there were several embedded programmers involved, each one working on his own, so you can imagine the kind of mess that exists in the software. The founding concepts of OOP you mention are powerful, but like any other tool, you have to choose the one that best fits your particular application. In my case I use Python (a GUI and some scripts on the PC side) to generate C++ code for the embedded side automatically, and the OOP paradigm is the thing that glues it all together. Here the software changes frequently, and this approach allows those changes to be applied quickly. With the previous approach, modifying an existing application usually took several days of code reading to understand how it works before applying the required changes.

regards
jdavila
-
I have seen developers resort to techniques that are perfectly fine in a PC application environment where you have gigahertz processors and gigs of RAM. But in a custom, hardened, real-time, constrained embedded system the environment is different. [...]
Sounds about right. Object orientation does remove the designer (software or hardware) from the lower-level realities and limitations. If folks haven't had the experience of working within those limitations, they have a hard time designing for them. They have been set up (oriented!) to keep the problems behind them. I blame Matlab ;) ;)