Code Project

OO Software design epiphany - it might not matter

The Lounge
Tags: oo, hardware, help, question, design
53 Posts, 33 Posters
  • P PhilipOakley

    Just wondering: are you implementing a higher-level design model provided by others, doing the higher-level design modelling yourself, or implementing the design idea directly? The source of the design often shapes the firmware implementation approach (i.e., whether and how Model-Based Systems Engineering (MBSE) is being done).

    Ed Korsberg wrote:
    #35

    We build everything from the ground up, including custom ASICs, and we have a large development team, maybe in the hundreds. What seems to be happening as a trend is an overuse of language features; put another way, the solutions are over-engineered. I have seen developers resort to techniques that are perfectly fine in a PC application environment, where you have gigahertz processors and gigabytes of RAM. But in a custom hardened, real-time, constrained embedded system the environment is different. While the tools will support C++ templates, all the OO features, etc., there is an art to balancing design freedom against efficient execution. You can definitely use OO features in this environment; we have done it successfully before. But there is a risk of going too far, and again this is where experience comes into play.
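    One way to keep OO structure without paying for dynamic dispatch on a constrained target is compile-time polymorphism. A minimal sketch using CRTP follows; all names (`Sensor`, `AdcSensor`, `FakeSensor`) are invented for illustration, not taken from the post:

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Static (compile-time) polymorphism via CRTP: the derived type is a
    // template parameter, so the call below resolves at compile time with
    // no vtable and no indirect call.
    template <typename Impl>
    struct Sensor {
        std::uint16_t read() { return static_cast<Impl*>(this)->read_raw(); }
    };

    struct AdcSensor : Sensor<AdcSensor> {
        std::uint16_t read_raw() { return 0x0123; }  // stand-in for a register read
    };

    struct FakeSensor : Sensor<FakeSensor> {
        std::uint16_t read_raw() { return 42; }      // host-side test double
    };

    // Generic code written against the interface, still fully inlined.
    template <typename S>
    std::uint16_t sample(Sensor<S>& s) { return s.read(); }

    int main() {
        AdcSensor adc;
        FakeSensor fake;
        assert(sample(adc) == 0x0123);
        assert(sample(fake) == 42);
    }
    ```

    The trade-off is code size (one instantiation per implementation) versus the runtime cost of virtual dispatch, which is exactly the balance the post describes.
    
    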

    • R Rage

      I work in the company that kind of created it, so avoiding it completely is not possible. :-D

      Do not escape reality : improve reality !

      den2k88 wrote:
      #36

      LOL, so you probably know a friend of mine: he moved to Germany several years ago and he works there too :D

      GCS d--(d+) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++*      Weapons extension: ma- k++ F+2 X

      • C charlieg

        My version of a survey: I'm interested in actual experience. I'd post this on SO, but there are so many anal-retentive ivory-tower types there... I digress. FWIW, this is a little bit of soul searching, so take it as an honest question/statement/search, and hell, suggest a book for me to read. I live mostly in the embedded world where things are tightly bound to hardware. That might be the problem; it might be that I'm trying to apply OOD to something that just doesn't warrant it. That said, I've been developing software (of all types) for 40 years, and at the application level I have *yet* to see any significant code reuse other than copy/paste. I'm (was?) a big believer in OO design. I believe in abstraction and encapsulation; inheritance and polymorphism have (had) my hope. But I believe OO suffers badly from being too general for what we do as developers. In no particular order:

        Abstraction - I like it. Hide the details. So far so good. The problem is that most people make analogies to objects that don't have an elephanting thing to do with real software development. It sounds good; it just doesn't work.

        Encapsulation - I love it. Hide the details, avoid spaghetti code, methods work with a blob of data.

        Inheritance - a plague, a virus, useless. Most examples are trivial. Give me one complex application example and it all falls down.

        Polymorphism - meh. It's cute. Sure, I can create multiple methods to work with different parameters, but at the application level it is not that groundbreaking.

        So, in my project I've just spent three days (and some nights) trying to make some code generic and OOD and whatnot, and it's not going to happen. The more I try to make the class behave in a couple of different situations, the more of a boondoggle it becomes - which triggered me at 4 am: just copy the code to another function and hard-code everything. Hence the question.

        FWIW, the code I am modifying has not changed in 10 years. So why bother making it general? I spend a lot of time trying to make code flexible (thinking long-term support, etc.), and I think I'm wasting my time. Sort of rambling here; I'd like some practical, pragmatic feedback.

        Charlie Gilley. Stuck in a dysfunctional matrix from which I must escape... "Where liberty dwells, there is my country." B. Franklin, 1783. "They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety."

        obermd wrote:
        #37

        In embedded environments I'm not sure a lot of the OOD stuff makes sense.

        Encapsulation - definitely, as it hides the intricacies of the hardware, but be careful you don't introduce performance penalties by doing so.

        Abstraction - goes along with encapsulation in this case, with the same caveats.

        Inheritance - depends on the application. Don't use it just because it's available. I have used inheritance successfully, but going more than two or three levels deep simply invites inefficiencies and unreadable code. Consider that most unreadable code is unreadable because the control flow is at the wrong level, and then consider that inheritance is a form of control flow.

        Polymorphism - use it or don't, as needed.

        In other words, you don't have to use all of OOD if it doesn't make sense. Use what makes sense.
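        On the encapsulation-without-penalty point: a thin inline wrapper over a memory-mapped register typically compiles down to the same load/store as a raw pointer dereference. A minimal sketch, where the register address, bit layout, and class name are all made up for illustration:

        ```cpp
        #include <cassert>
        #include <cstdint>

        // Encapsulates a memory-mapped status register. With the accessors
        // defined inline, an optimizing compiler emits the same code as a
        // bare volatile pointer dereference, so the abstraction is free.
        class StatusRegister {
        public:
            explicit StatusRegister(volatile std::uint32_t* addr) : reg_(addr) {}
            bool ready() const { return (*reg_ & 0x1u) != 0; }  // bit 0: ready flag
            void clear()       { *reg_ = 0; }
        private:
            volatile std::uint32_t* reg_;
        };

        int main() {
            std::uint32_t fake_hw = 0x1;       // stand-in for a mapped register
            StatusRegister status(&fake_hw);
            assert(status.ready());
            status.clear();
            assert(!status.ready());
        }
        ```

        The penalties obermd warns about tend to come from virtual dispatch, heap allocation, or exception machinery layered on top, not from the wrapper itself.
        
        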

        • C charlieg


          Adam ONeil Travelers Rest SC wrote:
          #38

          I still mainly like OO stuff (as a .NET/C# dev), but I can't say whether your project would benefit from the refactoring you considered. If your gut says no, then I believe you.

          I come across useful inheritance examples occasionally, so I don't understand the inheritance hate. Like any language feature, it can be abused. My examples are often abstract classes that require another class to implement/inherit abstract methods. For example, I work on stuff that I want to work equally well against a local file system and Azure blob storage. Some common behavior goes in an abstract base class, followed by subclass capabilities that are environment-specific. From the .NET BCL, Streams follow this pattern: Stream is abstract, with a gaggle of concrete implementations for different situations. I don't see what's wrong with that. Now, I've seen inheritance diagrams, like the old C++ MFC stuff, that were over-the-top complicated. I think they had reasons at the time, but that was then.

          I would say that most of the intellectual heavy lifting in app development (in my world) is database design and data modeling -- understanding stakeholder requirements and translating them into relational models. I would agree this doesn't really fit an OO paradigm very well, and I don't see that it needs to. For example, I don't think inheritance translates very well into relational terms except in one narrow situation: I usually have a base class to define user/timestamp columns like `DateCreated`, `CreatedBy`, and so on, and my model classes (tables) inherit from that base class to get those common properties. But that's really it.

          In the app space, it seems like 80% of dev effort goes into building CRUD experiences (forms and list management) of one kind or another; the other 20% is "report creation" of some kind, in my world. I don't think there's a perfect distinction between the two, but I would agree there's not really any OO magic in this layer. OO doesn't really make this experience better, IMO. We do have endless battles over UI frameworks and ORM layers. (I'm getting behind Blazor in the web space, and I have my own ORM opinions for sure! Another topic!) I'm not sure why anyone would challenge the need for abstraction and encapsulation, but there are certainly trade-offs and a need for balance. I like this talk: The Wet Codebase by Dan Abramov (Deconstruct).
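          The abstract-base pattern described above can be sketched roughly as follows (in C++ rather than C#, to match the rest of the thread; `Store`, `MemoryStore`, and the backup behavior are hypothetical stand-ins for the file-system/blob-storage split):

          ```cpp
          #include <cassert>
          #include <map>
          #include <string>

          // Common behavior lives in the abstract base; each environment
          // implements only the storage primitives.
          class Store {
          public:
              virtual ~Store() = default;
              // Shared logic, written once against the primitives below.
              void save_with_backup(const std::string& key, const std::string& data) {
                  write(key + ".bak", read_or_empty(key));  // keep previous value
                  write(key, data);
              }
              std::string read_or_empty(const std::string& key) {
                  return exists(key) ? read(key) : "";
              }
          protected:
              virtual bool exists(const std::string& key) = 0;
              virtual std::string read(const std::string& key) = 0;
              virtual void write(const std::string& key, const std::string& data) = 0;
          };

          // In-memory subclass standing in for "local disk" or "blob storage".
          class MemoryStore : public Store {
          protected:
              bool exists(const std::string& k) override { return data_.count(k) != 0; }
              std::string read(const std::string& k) override { return data_.at(k); }
              void write(const std::string& k, const std::string& d) override { data_[k] = d; }
          private:
              std::map<std::string, std::string> data_;
          };

          int main() {
              MemoryStore s;
              s.save_with_backup("cfg", "v1");
              s.save_with_backup("cfg", "v2");
              assert(s.read_or_empty("cfg") == "v2");
              assert(s.read_or_empty("cfg.bak") == "v1");
          }
          ```

          This is the same shape as .NET's Stream hierarchy: the base owns the shared contract, the derivations own the environment-specific details.
          
          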

          • D den2k88


            Rage wrote:
            #39

            Everybody in Germany knows someone who works there :-D

            Do not escape reality : improve reality !

            • C charlieg


              davila (aka Member 14909501) wrote:
              #40

              Hello Charlie, I have also been working on embedded for some time and always wanted to apply an OOP approach to my designs (unsuccessfully), mostly because of a lack of proper tools. In the last few years I found Python, and my point of view has changed. At the moment I work for a company that makes battery chargers. Most of them have been migrated from totally analog control to a microcontroller-based one; several embedded programmers were involved, each working on his own, so you can imagine the kind of mess that exists in the software. The founding concepts of OOP you mention are powerful, but, like any other tool, you have to choose the one that best fits your particular application. In my case I use Python (a GUI and some scripts on the PC side) to generate C++ code for the embedded side automatically, and the OOP paradigm is the thing that glues it all together. Here the software changes frequently, and this approach allows those changes to be applied quickly. With the previous approach, modifying an existing application usually took several days of code reading to understand how it works before applying the required changes. Regards, jdavila

              • E Ed Korsberg


                PhilipOakley wrote:
                #41

                Sounds about right. Object orientation does remove the designer (software or hardware) from the lower-level realities and limitations. If folks haven't had the experience of working within those limitations, they have a hard time designing for them. They have been set up (oriented) to keep the problems behind them! I blame Matlab ;) ;)

                • C charlieg


                  Rusty Bullet wrote:
                  #42

                  Practical, pragmatic feedback: use what works. Polymorphism and inheritance aren't needed for everything, but they can make some things easier to do. If an application has no polymorphism or explicit inheritance, does that mean it is not object-oriented? It depends. Just as a language's advanced features may not be useful for what you are doing, you can have an object-based application that otherwise follows object-oriented design without polymorphism. Does it matter whether you call it object-oriented? Too many software development concepts are treated more like a religion than a tool set for accomplishing a goal. I currently work on an application where inheritance is used to an extreme, and it works well. I don't even think about it, but I can see how it could be written completely differently without inheritance and still work. I wouldn't want to try, though.

                  • C charlieg


                    Chris Boss wrote:
                    #43

                    I am the exact opposite: I don't use OOP and don't plan on using it unless I have to. Why? I am an experienced WIN32 programmer, and the low-level WIN32 API is not OOP-based but procedural. Only later did Windows add OOP on top; the core of Windows is not OOP-based. OOP adds complexity and overhead, and the core OS needs performance and minimal overhead. Embedded programmers have a similar experience when they have to work with minimal hardware and every clock cycle matters.

                    The problem with procedural programming is that many programmers never learned the importance of reusing code by building quality libraries. Procedural code can be encapsulated: that means well-written libraries. Well-written libraries need to be built in what I like to call the "three tier" method. What is that? (1) Low level, (2) medium level, (3) high level. Procedural libraries start with the low-level routines, which are the basis of the core functionality. After a while much of the core functionality is covered, so a more medium-level set of routines is built on the back of the low-level routines. Medium-level routines can be quite complex, but they should still be relatively simple in purpose. Once the medium level of a library is in place, one can build the much higher-level routines, which can be quite complex and extensive.

                    This three-tiered approach produces a tight, efficient library that IMO can perform better than any OOP library and will be significantly smaller and faster than a comparable OOP library. Modular design existed long before OOP; OOP was just a way to "force" modular design, rather than a better solution. Well-written, modular, procedural code will perform better than its OOP counterpart. Also, in the embedded world, where minimal hardware exists, such modular procedural coding can push the hardware to its limits, whereas OOP-based code will have limits.
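                    One reading of the three-tier layering, sketched as plain procedures where each tier calls only the tier below it. The toy log-buffer domain and every name here are invented for illustration; the post describes the method only in prose:

                    ```cpp
                    #include <cassert>
                    #include <string>
                    #include <vector>

                    // Tier 1: low level -- core primitives.
                    void buf_append(std::vector<std::string>& buf, const std::string& line) {
                        buf.push_back(line);
                    }
                    std::size_t buf_size(const std::vector<std::string>& buf) {
                        return buf.size();
                    }

                    // Tier 2: medium level -- composed only from tier 1.
                    void buf_append_tagged(std::vector<std::string>& buf,
                                           const std::string& tag, const std::string& msg) {
                        buf_append(buf, "[" + tag + "] " + msg);
                    }

                    // Tier 3: high level -- composed only from tier 2.
                    void log_error(std::vector<std::string>& buf, const std::string& msg) {
                        buf_append_tagged(buf, "ERROR", msg);
                    }

                    int main() {
                        std::vector<std::string> buf;
                        log_error(buf, "overtemp");
                        assert(buf_size(buf) == 1);
                        assert(buf[0] == "[ERROR] overtemp");
                    }
                    ```

                    The discipline is in the dependency direction: a tier never reaches up, which keeps the library factored without any class machinery.
                    
                    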

                    • C charlieg


                      SeattleC wrote:
                      #44

                      I did a large embedded project in classic C++: a collection of a dozen devices communicating over IEEE-488 with a PC. Our project was very object-oriented. It was code-named "brick," since we intended to stack devices together like bricks. Our use of OOP was very successful, though the product itself failed in the marketplace.

                      Code reuse: our previous similar project was coded in C, and it was a quarter-million lines of wet spaghetti that we were ordered to reuse. It, in turn, was the result of an order to reuse a previous C project. Attempts at reuse were an abject failure, and we wrote 100% new code for the brick. The old code was so undocumented and hard to read that we had to reverse-engineer the behavior of the hardware.

                      Inheritance: code was successfully reused within the brick. Inheritance promotes factoring common code into base classes rather than cutting and pasting it. We used a multiply-inherited polymorphic mixin to control communication between software subsystems. The mixin let us defer and change the decision about what code to execute on the PC and what code to execute on the brick. This was incredibly fortunate, because the hardware part of the project fell way behind schedule.

                      Polymorphism: one difference between the brick and the previous product was that in the brick, code could directly control hardware devices like A/D converters, whereas on the previous project the hardware was accessed over a bit-parallel protocol using the PC's parallel printer port (uck). We were able to prototype and test a lot of hardware control code using the previous device's hardware. We had polymorphic classes with one derivation to communicate with the brick's hardware and another to communicate with the old hardware. As I said before, this was very fortunate because the hardware was so late.

                      Issues: this project was long enough ago that a virtual function call was expensive. Performance was very important to us, so we worried about every virtual function. Another issue was that the hardware of this project was mostly a big sequencer ASIC that ran its own programs, written in a generic macro-assembler that we had repurposed. There was no getting around the fact that much of the code was one big-ass class with a zillion methods. Normally this would be bad style, but how do you factor hardware? The programs for this sequencer were things we absolutely had to reuse from previous projects, our "secret sauce" as it were. We did not even understand the f
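                      The dual-backend polymorphism described above, in miniature: one interface, one derivation per hardware generation, selected at startup. All class names and return values here are invented; the post gives no actual code:

                      ```cpp
                      #include <cassert>
                      #include <cstdint>
                      #include <memory>

                      // Common interface for an A/D channel, independent of which
                      // hardware generation actually backs it.
                      class AdcChannel {
                      public:
                          virtual ~AdcChannel() = default;
                          virtual std::uint16_t sample() = 0;
                      };

                      // New hardware: direct register access.
                      class BrickAdc : public AdcChannel {
                      public:
                          std::uint16_t sample() override { return 0x0100; }  // stand-in value
                      };

                      // Old hardware: bit-parallel protocol over the printer port.
                      class LegacyParallelAdc : public AdcChannel {
                      public:
                          std::uint16_t sample() override { return 0x0200; }  // stand-in value
                      };

                      // Pick the backend once; all control code above this line is shared.
                      std::unique_ptr<AdcChannel> make_adc(bool new_hw) {
                          if (new_hw) return std::make_unique<BrickAdc>();
                          return std::make_unique<LegacyParallelAdc>();
                      }

                      int main() {
                          assert(make_adc(true)->sample() == 0x0100);
                          assert(make_adc(false)->sample() == 0x0200);
                      }
                      ```

                      This is what let the team prototype control code against the old hardware while the new hardware slipped: only the derivations differ.
                      
                      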

                      • C charlieg


                        D Offline
                        Dale Barnard
                        wrote on last edited by
                        #45

                        These days, I am not as wed to object-oriented design as I used to be. However, I focus heavily on the S.O.L.I.D. principles whether in stand-alone functions or class/module design. Those principles pay dividends even if code never gets reused. I have also found that new requirements come out of left field sometimes, and having S.O.L.I.D. code makes it easier to adapt or reuse things that were never intended to be reused.

                        1 Reply Last reply
                        0
                        • C charlieg


                          S Offline
                          sasadler
                          wrote on last edited by
                          #46

                          I did embedded programming till I retired in 2019. I looked at OOD when it was becoming popular but it made much more sense to me to do composition. I wrote classes (eventually became templates) for things like FIFOs, Queues, timers, digit filters, tone detectors, tone generators, etc. These templates have been used in multiple projects over the last 20+ years of my career with little or no modifications (most modifications were due to compiler changes). I can say that I've had quite a bit of reuse of my personal template library. note: Surprisingly, I've never written code for an embedded device that had a display. Also, except for an inherited legacy device, every device I coded for had way less than a meg of RAM.

                          1 Reply Last reply
                          0
                          • C charlieg


                            P Offline
                            Paul Gehrman
                            wrote on last edited by
                            #47

OOP sucks primarily because you've got all these astronauts who are obsessed with silly, bloated design patterns. One of the biggest issues is that our CS educational system is broken because it promotes this trash as good design. The other issue is all these books, blogs, etc. that promote over-architected solutions to relatively simple problems. Gang of Four design patterns are mostly passé, and all of the advanced developers I know barely give them the time of day any longer. But, alas, how will narcissistic developers prove how smart they are, if not by the silly application of arcane design patterns? Paul Gehrman

                            1 Reply Last reply
                            0
                            • C charlieg


                              U Offline
                              User 2893688
                              wrote on last edited by
                              #48

Maybe you should read The Mythical Man-Month. OSes, system software, embedded, and drivers are areas where OOD has had only modest success, mostly because companies haven't figured out how to do it right in those spaces. But ironically, the rise of NeXTSTEP and the fall of OS/2 teach us that it's worth the risk of investing in OOD there. So don't be hard on yourself for NOT GETTING IT where others can. You just live in a different paradigm, and adjusting takes time. Just remember how many iterations Coca-Cola went through with Coke Zero trying to emulate the original taste without the sugar.

                              1 Reply Last reply
                              0
                              • C charlieg

                                lol, true. I'm an EE, written lots of code but I've always wanted to take an algorithms class. Don't know why, well, at least my analyst asks me.

                                Charlie Gilley <italic>Stuck in a dysfunctional matrix from which I must escape... "Where liberty dwells, there is my country." B. Franklin, 1783 “They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759

                                R Offline
                                Roger Wright
                                wrote on last edited by
                                #49

I'm an EE myself, and I've written a whole bunch of algorithms, but I've never taken an algorithms class. What would it teach? Some of my engineering textbooks included digital implementations of the math they taught - that sort of thing? Some of the ones I came up with in my youth actually got published as company standards - like my classic algorithm for generating code at run time to filter out ambient noise in a factory environment. It drove the QA types nuts, because every time the test application ran, different code was executing; they hate that stuff. Today, I don't know of any language that lets an application modify itself while it's running, so I could never duplicate that one. But it was fun!

                                Will Rogers never met me.

                                1 Reply Last reply
                                0
                                • C charlieg


                                  K Offline
                                  KateAshman
                                  wrote on last edited by
                                  #50

I did 15 years of research, specifically on the topic of OOP, and I've come to the conclusion that abstraction is mostly pointless beyond modeling data providers. When you have 2 distinct but very similar looking problems, it's better to have 2 distinct but very similar looking functions to solve those problems. Turns out that's the most efficient solution. Compilers don't care about lines looking the same; they don't produce slower code because of it. At the same time, junior devs can understand similar looking code faster, because they notice both the similarity and the differences, and naturally wonder why both exist, which lowers the learning curve. Turns out only OOP-experienced developers care about avoiding redundancy in the literal sense, because they feel like it impedes either maintainability or efficiency, which is factually wrong. OOP sacrifices both of those properties for scalability, yet gets credited with them anyway. It's kind of a thing in our field: whenever something new and shiny arrives, people assume it solves every problem they currently have.

                                  1 Reply Last reply
                                  0
                                  • C charlieg


                                    B Offline
                                    BotReject
                                    wrote on last edited by
                                    #51

I can offer encouragement rather than practical advice (since I have no specifics about the problem you are trying to solve). You are basically correct: encapsulation is the most useful feature of OOP and the only feature code needs to be OO. (Many claim that certain languages are OO simply because they support an optional 'object' construct, but unless encapsulation is enforced, with the option of strong encapsulation, a language is not OO in my opinion.) Inheritance should only be used very sparingly - it is good for complex frameworks that have to support a multitude of applications, like the Java framework, but not useful at all in most cases. As for generics... I'm still thinking about that one. I would say that generics are about more than code reuse. Do you really want a type as a parameter in your application? Is it really saving hassle to avoid explicit type casts? Who is the end user of the code, and is it more important to detect type errors at compile time rather than at runtime? In the end you have to weigh up the hassle to you as coder against the hassle to the user! That probably doesn't help, but it was an interesting query.

                                    1 Reply Last reply
                                    0
                                    • C charlieg


                                      C Offline
                                      charlieg
                                      wrote on last edited by
                                      #52

                                      Sorry, I'm behind. Covid in the house (nothing serious), but the replies here are fantastic. I really appreciate some of the details. Tomorrow I read more :)

                                      Charlie Gilley <italic>Stuck in a dysfunctional matrix from which I must escape... "Where liberty dwells, there is my country." B. Franklin, 1783 “They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759

                                      1 Reply Last reply
                                      0
                                      • C charlieg


                                        M Offline
                                        Martin ISDN
                                        wrote on last edited by
                                        #53

in my opinion OOP, FP and procedural programming are more a way of thinking. it's how you approach a problem. i have a strong distaste for single-paradigm languages. for me, if those languages were people they would surely have been racist. not only have they chosen to do things in only one way, but they have heavily preached that their way is the only true way and ridiculed others. Javaheads have been known to express their superiority vs people using C, Perl, JavaScript... i considered the latter to be far superior to Java since day No.1 - C in one way, Perl and JS in another way. i was exposed to OOP via C++ and Turbo Pascal. what i am going to say next thus concerns statically typed languages of the Algol family (C++, C#, Java) and does not apply to CLOS Lisp, Smalltalk, JavaScript, REBOL... and i find the latter to be superior for OOP.

Encapsulation - i see this as only the shortening of the visible scope. aside from that, inside the encapsulated area you still deal with structured-programming tools. also, classes are inferior in encapsulation to ADTs in C. when a library or a module in C exposes its ADT as an opaque struct, an opaque pointer, or should i say a handle, that is when you are really working with a "blob of data". it's not even a blob, it's just a name. it's called an opaque pointer, but it's not a pointer at all, because you cannot de-reference it yourself. when they give you the definition of a class, i.e. the data type descriptor, you already know too much for it to be called encapsulation.

Inheritance - for the type of languages we speak of, statically typed, OOP folks see it as a liberation from a world that has nothing resembling first-class functions, while Lisp folks see OO as a prison. inheritance is the liberating force in OOP because it lets you go around a type system that is too strict -
not by any virtue, as it is always represented (it has never happened to me to misuse a reptile where i should have used a mammal), but by the lack of abstraction from the von Neumann architecture and the basic data types in CPU usage, namely integer and float. in that respect Java has not gone farther in abstraction than the C abstract machine. inheritance is not about code reuse. every time you see a method overridden you see the breaking of the OOP promise of reuse. inheritance loosens the grip of strict typing, and it's most useful for building interface compliance. inheritance is mainly a specification technique rather than an implementation technique [Stand

                                        1 Reply Last reply
                                        0