OO Software design epiphany - it might not matter

Tags: oo, hardware, help, question, design
  • C charlieg

    My version of a survey - I'm interested in actual experience. I'd post this on SO, but there are so many anal-retentive ivory tower types there... I digress. FWIW, this is a little bit of soul searching, so take it as an honest question/statement/search and, hell, suggest a book for me to read? I live mostly in the embedded world where things are tightly bound to hardware. That might be the problem. It might be that I'm trying to apply OOD to something that just doesn't warrant it. That said, I've been developing software (of all types) for 40 years, and at the application level I have *yet* to see any significant code re-use other than copy/paste. I'm a (was?) big believer in OO design. I believe in Abstraction, Encapsulation, Inheritance, and (not) Polymorphism has (had) hope. But I believe OO suffers badly from being too general for what we do as developers. In no particular order:

    Abstraction - I like it. Hide the details. So far so good. The problem is that most of them make analogies to objects that don't have an elephanting thing to do with real-world software development. It sounds good, it just doesn't work.

    Encapsulation - I love it. Hide the details, avoid spaghetti code, methods work with a blob of data.

    Inheritance - a plague, a virus, useless. Most examples are trivial. Give me one complex application example and it all falls down.

    Polymorphism - meh. It's cute. Sure, I can create multiple methods to work with different parameters, but at the application level it is not that groundbreaking.

    ----------------------------------------------------------

    So, in my project I've just spent 3 days (and some nights) trying to make some code generic and OOD and whatnot, and it's not going to happen. The more I try to make the class behave in a couple of different situations, the more of a boondoggle it becomes - which triggered me at 4 am: just copy the code to another function and hard-code everything. Hence the question. FWIW, the code I am modifying has not changed in 10 years. So why bother making it general? I spend a lot of time trying to make code flexible (thinking long-term support, etc.), and I think I'm wasting my time. Sort of rambling here; I'd like some practical, pragmatic feedback.

    Charlie Gilley - Stuck in a dysfunctional matrix from which I must escape... "Where liberty dwells, there is my country." B. Franklin, 1783 "They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety."

    raddevus
    #13

    My theory is that there are very few examples around that bring OOP all together. OOP's main purpose should be reuse and making the code that you write smaller (less code to deal with when adding enhancements or fixing bugs). I believe that with one very focused example you would see all of PIE-A (Polymorphism, Inheritance, Encapsulation and Abstraction) come together. But most of the time you don't need this type of architecture until things get large. And most projects don't get large -- especially the samples you see.

    Here's my attempt. This should be an article, but I'll do that later.

    The entire premise: imagine you want to save data to three different data stores: 1) file, 2) database, 3) web location. Save instantly becomes our main verb (functionality).

    Requirements -- we want four things:

    1. Any dev must be able to include the Save() functionality on their class in the future. (interface)
    2. Any dev must be able to call the Save() functionality on any class in the future and easily know that it is named Save() -- this is self-documenting code.
    3. There must be an easy way for a dev to configure where the data will be stored (file, db, url).
    4. A dev must be able to create a list of various types (classes in the architecture) and iterate through them, calling Save() and knowing that they will save to their appropriate destination. This is Polymorphism -- all objects implement the Interface which provides Save().

    Here is the smallest sample I can come up with, and it really works. Get LINQPad - The .NET Programmer's Playground and run the code below. You will see the following output:

    I'm saving into a FILE : super.txt
    I'm saving into a FILE : extra.txt
    I'm saving into a DATABASE : connection=superdb;integrated security=true
    I'm saving into a WEB LOCATION : http://test.com/saveData?
    I'm saving into a FILE : super.txt
    I'm saving into a FILE : extra.txt
    I'm saving into a DATABASE : connection=superdb;integrated security=true
    I'm saving into a WEB LOCATION : http://test.com/saveData?

    Now a dev can:

    1. create an IPersistable object
    2. pass in an IConfigurable object (which determines which data store the Save() will write to)
    3. call Save() on the object

    The dev only needs to know two things (Abstraction):

    1. create a configurable object -- select which data store
    2. call Save()

    void Main()
    {
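
    A minimal, self-contained sketch of that design (IPersistable, IConfigurable and Save() are the names described above; Document and the *Config classes are made up for illustration - this is not the original LINQPad sample):

    using System;
    using System.Collections.Generic;

    public interface IConfigurable
    {
        string DestinationType { get; }   // FILE, DATABASE, WEB LOCATION
        string Destination { get; }       // where the data will actually go
    }

    public class FileConfig : IConfigurable
    {
        public FileConfig(string path) { Destination = path; }
        public string DestinationType => "FILE";
        public string Destination { get; }
    }

    public class DbConfig : IConfigurable
    {
        public DbConfig(string connectionString) { Destination = connectionString; }
        public string DestinationType => "DATABASE";
        public string Destination { get; }
    }

    public class WebConfig : IConfigurable
    {
        public WebConfig(string url) { Destination = url; }
        public string DestinationType => "WEB LOCATION";
        public string Destination { get; }
    }

    // Any class that wants Save() just implements the interface.
    public interface IPersistable
    {
        void Save();
    }

    public class Document : IPersistable   // illustrative class name
    {
        private readonly IConfigurable _config;
        public Document(IConfigurable config) { _config = config; }

        public void Save() =>
            Console.WriteLine($"I'm saving into a {_config.DestinationType} : {_config.Destination}");
    }

    public static class Program
    {
        public static void Main()
        {
            // The dev only picks a configuration and calls Save().
            var items = new List<IPersistable>
            {
                new Document(new FileConfig("super.txt")),
                new Document(new FileConfig("extra.txt")),
                new Document(new DbConfig("connection=superdb;integrated security=true")),
                new Document(new WebConfig("http://test.com/saveData?")),
            };

            // Polymorphism: iterate and call Save() without caring where
            // each object writes its data.
            foreach (var item in items)
                item.Save();
        }
    }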

  • C charlieg

    Greg Utas
    #14

      I also worked in what you could call the embedded world, though not so close to the hardware, and soft rather than hard real-time. An OO rewrite saved the product I was working on, and it's still seeing development over 20 years later. We used all three (encapsulation, inheritance, and polymorphism) extensively.

      Robust Services Core | Software Techniques for Lemmings | Articles
      The fox knows many things, but the hedgehog knows one big thing.


      • R Rage

        den2k88 wrote:

        AutoSAR

        :omg: Someone else on CP who knows Autosar !

        Do not escape reality : improve reality !

        den2k88
        #15

        Got thrown into it about a year and a half ago. It was bound to happen, working in a company that has 80% automotive contracts.

        GCS d--(d+) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++*      Weapons extension: ma- k++ F+2 X

        • C charlieg

          Very similar to my situation, but in my case I have to produce a visual representation of said data (which varies wildly) that must be pushed to different display formats. I found the time it takes to produce a class that could be inherited not worth the effort. If the code never changes (note my comment - 10 years), why bother with the investment? I like the idea of just being able to inherit from a base class, but it happens too rarely.....

          Charlie Gilley - Stuck in a dysfunctional matrix from which I must escape... "Where liberty dwells, there is my country." B. Franklin, 1783 "They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." BF, 1759

          den2k88
          #16

          Firmware development follows different rules than software; there's no getting around it. Software should not depend on the underlying implementation details; firmware is the underlying implementation, and it's all about details. Some things should be kept as agnostic as possible - e.g. the main state machine should not depend on the exact make of the various hardware components, so the interfaces to control hardware should be generic (i.e. peripheral_On, peripheral_Off, peripheral_Sleep, peripheral_Send...) - but all the rest cannot be. One component may be turned on/off via a combination of two pins while another, identical in every other respect, may require timed pulses on a single pin and follow a protocol based on several outputs.
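
          A rough sketch of that kind of generic peripheral interface, written in C# only to match the other sample in this thread (real firmware would be C talking to registers; every type name and pin number below is invented):

          using System;

          // Generic contract the main state machine talks to - it never knows
          // how a particular part is actually switched on or off.
          public interface IPeripheral
          {
              void On();
              void Off();
              void Sleep();
              void Send(byte[] data);
          }

          // One part is controlled by driving two dedicated pins.
          public class TwoPinDevice : IPeripheral
          {
              private readonly Action<int, bool> _setPin;   // hypothetical pin driver
              public TwoPinDevice(Action<int, bool> setPin) { _setPin = setPin; }

              public void On()    { _setPin(1, true);  _setPin(2, true);  }
              public void Off()   { _setPin(1, false); _setPin(2, false); }
              public void Sleep() { _setPin(2, false); }
              public void Send(byte[] data) { /* part-specific protocol */ }
          }

          // An otherwise identical part needs timed pulses on a single pin.
          public class PulsedDevice : IPeripheral
          {
              private readonly Action<int, bool> _setPin;
              public PulsedDevice(Action<int, bool> setPin) { _setPin = setPin; }

              private void Pulse() { _setPin(7, true); _setPin(7, false); }

              public void On()    { Pulse(); Pulse(); }          // two pulses = power on
              public void Off()   { Pulse(); }                   // one pulse  = power off
              public void Sleep() { Pulse(); Pulse(); Pulse(); }
              public void Send(byte[] data) { /* different protocol again */ }
          }

          public static class Demo
          {
              public static void Main()
              {
                  // The state machine only ever sees IPeripheral.
                  IPeripheral[] devices =
                  {
                      new TwoPinDevice((pin, level) => Console.WriteLine($"pin {pin} = {level}")),
                      new PulsedDevice((pin, level) => Console.WriteLine($"pin {pin} = {level}")),
                  };
                  foreach (var d in devices) d.On();
              }
          }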

          GCS d--(d+) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++*      Weapons extension: ma- k++ F+2 X

          • Greg Utas

            den2k88
            #17

            The farther you get from the hardware, the more a generalized approach is useful / a necessity. I worked on a product with similar specifications ("not so close to the hardware, and soft rather than hard real-time") and OOP was a huge benefit: when we started adopting it there was a significant improvement in quality, development time, customization time and stability.

            GCS d--(d+) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++*      Weapons extension: ma- k++ F+2 X

            • C charlieg

              Marc Clifton
              #18

              You are quite correct, IMO. First, forget about the promise of re-use when it comes to objects. Nobody ever does anything in the real world twice in exactly the same way to merit any re-use benefit. In order of importance, to me:

              1. Encapsulation - keeps stuff organized.
              2. Interfaces - define what the class is expected to implement. I also use empty interfaces simply to indicate that the class supports some other behavior. I could use attributes for that as well, but interfaces are sometimes more convenient when dealing with a collection of classes that all support the same thing and there are methods that operate on that, hence I can pass in "IAuditable", for example.
              3. Inheritance/Abstraction - mostly useless, but there are times when I want to pull out common properties among a set of logical classes. Note that I don't consider this to be true abstraction; it's using inheritance to define common properties and behaviors.
              4. Polymorphism - useful, but less so now with optional default parameters that do the work polymorphic overloads were often used for.

              IMO, the reality of "how useful is OO" falls quite short of the promise of OO.
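
              The empty-interface trick looks roughly like this (IAuditable is the name from the post; every other class and method name here is invented for illustration):

              using System;
              using System.Collections.Generic;
              using System.Linq;

              // Empty "marker" interface: no members, it just tags a class
              // as supporting some behavior.
              public interface IAuditable { }

              public class Invoice : IAuditable
              {
                  public decimal Amount { get; set; }
              }

              public class DiagnosticLogEntry   // deliberately not auditable
              {
                  public string Message { get; set; }
              }

              public static class Auditor
              {
                  // Operates on "whatever is auditable" without caring about
                  // the concrete types.
                  public static void Audit(IEnumerable<IAuditable> items)
                  {
                      foreach (var item in items)
                          Console.WriteLine($"auditing {item.GetType().Name}");
                  }
              }

              public static class Program
              {
                  public static void Main()
                  {
                      var everything = new List<object> { new Invoice(), new DiagnosticLogEntry() };

                      // Pick out only the tagged classes and pass them along.
                      Auditor.Audit(everything.OfType<IAuditable>());
                  }
              }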

              Latest Articles:
              Client-Side Type-Based Publisher/Subscriber, Exploring Synchronous, "Event-ed", and Worker Thread Subscriptions

              • R Rage

                Nelek
                #19

                I was almost thrown into a project where AutoSar was involved. Luckily for me, another project started to burn down and I was sent there to extinguish the fire at the last moment, so I could remain in my blessed ignorance.

                M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.

                • R Rage

                  charlieg wrote:

                  Are you from Stack Overflow

                  Since he did not ask you to first go get a CS degree before posting questions, he is probably not.

                  Do not escape reality : improve reality !

                  charlieg
                  #20

                  lol, true. I'm an EE, written lots of code but I've always wanted to take an algorithms class. Don't know why, well, at least my analyst asks me.

                  Charlie Gilley - Stuck in a dysfunctional matrix from which I must escape... "Where liberty dwells, there is my country." B. Franklin, 1783 "They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." BF, 1759

                  • C charlieg

                    User 13269747
                    #21

                    I tend to take the view that isolation is a good target. Code isolated from other code is both maintainable and reusable, regardless of whether it is via an OO design or not.

                    Reuse is limited when using any OO language, because then any program that wants to reuse that code has to use the same language. If you're going after reuse, you have to give up OO because they are mutually exclusive. All the re-used code is written in non-OO languages. Sqlite, for example, is one of the most widely deployed pieces of code in the world, and all the media format readers are widely deployed and non-OO. If you write something really new and novel that does not exist yet (a new image format, a new protocol, a new encryption algo, a new compression format, an interface to any of the above, or an interface to existing daemons (RDBMSs, etc.)), and you write it in Java or C#, the only way it can become popular is if someone re-implements it in C so that Python, C, C++, Java, C#, Delphi, Lazarus, Perl, Rust, Go, Lisps, Php, Ruby, Tcl (and more) programs can use it.

                    The upside of producing library files (.so or .dll) that can be used by any language is that the result is also quite isolated and loosely coupled from anything else:

                    • It can be easily extended by anyone, but not easily enhanced.
                    • It can be easily swapped out and replaced with a different implementation without needing the programs using that library to be recompiled, redeployed or changed in any way.
                    • Because it is a library, it will only be for a single type of task (no one would even think of putting unrelated functionality into a compression library, but I've seen devs happily put unrelated stuff into a compression class).

                    Ironically, you can more easily achieve the SOLID principles writing plain C libraries (.so or .dll) than you can with actual OO languages, because of the limitations of the call interface in dynamic libraries.
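
                    The "any language can call it" point is easy to see from the managed side. For example, a C# program can call straight into the native sqlite3 library (sqlite3_libversion() is a real SQLite entry point; the rest is just a sketch and assumes a sqlite3 shared library is on the loader path):

                    using System;
                    using System.Runtime.InteropServices;

                    public static class SqliteVersionDemo
                    {
                        // Plain C export: const char *sqlite3_libversion(void);
                        [DllImport("sqlite3", EntryPoint = "sqlite3_libversion")]
                        private static extern IntPtr sqlite3_libversion();

                        public static void Main()
                        {
                            // The library neither knows nor cares that the caller is C#.
                            string version = Marshal.PtrToStringAnsi(sqlite3_libversion());
                            Console.WriteLine($"SQLite version: {version}");
                        }
                    }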

                    • C charlieg

                      NelsonGoncalves
                      #22

                      fwiw, the code I am modifying has not changed in 10 years. So, why bother making it general?

                      Well, if it has not changed in 10 years I would say it is general enough :-D I am in sort of the same place as you. Mostly embedded development, and whenever I tried using OO I mostly failed. Usually because I decide to make a class for something that will only have one object instance.

                      • N NelsonGoncalves

                        giulicard
                        #23

                        NelsonGoncalves wrote:

                        Usually because I decide to make a class for something that will only have one object instance.

                        Usually, when I see that a class is instantiated only once, like some sort of singleton, I always think it might be wise to drop the class and put everything in a namespace.
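
                        In C++ that literally means a namespace of free functions; in C# (the language of the earlier sample, where namespaces can only contain types) the closest equivalent is a static class. A rough before/after, with invented names:

                        using System;

                        // Before: a class with exactly one instance, dressed up as a singleton.
                        public sealed class LoggerSingleton
                        {
                            public static LoggerSingleton Instance { get; } = new LoggerSingleton();
                            private LoggerSingleton() { }
                            public void Log(string message) => Console.WriteLine(message);
                        }

                        // After: no instance to manage at all - just functions grouped under a name.
                        public static class Logger
                        {
                            public static void Log(string message) => Console.WriteLine(message);
                        }

                        public static class Demo
                        {
                            public static void Main()
                            {
                                LoggerSingleton.Instance.Log("via the singleton");
                                Logger.Log("via plain static functions");
                            }
                        }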

                        • G giulicard

                          NelsonGoncalves
                          #24

                          Yes, I agree. But I start with the "everything is an object" mentality, and only later do I recognize that, for my applications, most of the time that is the wrong way to think. I could spend an hour thinking about the design, but that would just prevent me from later spending 2 days fixing bad design decisions ;P

                          • C charlieg

                            _WinBase_
                            #25

                            I know where you are coming from, but I would think *it depends* on what you are working on. Some projects and cases may merit it and some may not, and although I don't work in embedded, I would assume it's less important there. However, as a 'business app guy' I find it great - although if I really looked at it, I probably don't need it as much as I think I do. But it's the way I roll now and I quite like it :) GL

                            • D den2k88

                              Rage
                              #26

                              I work in the company that kind of created it, so avoiding it completely is not possible. :-D

                              Do not escape reality : improve reality !

                              • C charlieg

                                Ed Korsberg
                                #27

                                I do embedded firmware full time. We use all of the OO features you have mentioned. For the most part this has been a positive effort, i.e. it was worth it. But lately I think we have taken it too far: the code is too difficult to follow and debug, and our performance seems really bad. We are now in the process of going back (yet again) to profile, study and do performance measurements. I think it is a case of having too much of a good thing - we have overused such features on an embedded design with limited memory and tight timing requirements.

                                • C charlieg

                                  DumpsterJuice
                                  #28

                                  My opinion (based on experience): every implementation of polymorphism, on every team that used it, goes sideways pretty quickly and becomes quite a chore to maintain. OOP principles are generally sound, but I have not even considered using polymorphism in 20 years, mainly because it's harder to debug. BUG: "The Investor customer is disturbing the customerChecking balance because the customer is also an investor" - resulting in the loss of his mortgage. Keep it simple, keep it moving.
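
                                  A guess at the shape of that kind of bug, with entirely made-up names: when "investor" is modeled as a subclass of Customer, a polymorphic override can quietly touch the checking balance without the calling code ever seeing which implementation ran.

                                  using System;

                                  class Customer
                                  {
                                      public decimal CheckingBalance { get; protected set; }
                                      public Customer(decimal checking) { CheckingBalance = checking; }
                                      public virtual void MonthlyFee() => CheckingBalance -= 10m;
                                  }

                                  class InvestorCustomer : Customer
                                  {
                                      public decimal Invested { get; private set; }
                                      public InvestorCustomer(decimal checking) : base(checking) { }

                                      // The override "helpfully" sweeps checking funds into investments
                                      // during the fee run - the checking-account code never sees it.
                                      public override void MonthlyFee()
                                      {
                                          base.MonthlyFee();
                                          Invested += CheckingBalance;   // drains the checking balance
                                          CheckingBalance = 0m;
                                      }
                                  }

                                  static class Program
                                  {
                                      static void Main()
                                      {
                                          Customer c = new InvestorCustomer(2500m);
                                          c.MonthlyFee();                        // caller thinks it only charged a fee
                                          Console.WriteLine(c.CheckingBalance);  // 0 - the mortgage payment bounces
                                      }
                                  }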

                                  • D den2k88

                                    PhilipOakley
                                    #29

                                    Yeah. Reality bites. At some point we try to abstract away all the hardware bit stuffing, but it's still there ruining all the nice plans - all it takes is a status bit that auto-resets when the other bits in the byte are read, and everything is ruined.

                                    • M Marc Clifton

                                      PhilipOakley
                                      #30

                                      I'd agree. Sounds like function interfaces with better, purposeful, naming of the why, rather than a focus on the what & how.

                                      • E Ed Korsberg

                                        PhilipOakley
                                        #31

                                        Just wondering: Are you implementing a higher level design model provided by others, or doing the higher level design model yourself, or implementing the design idea directly? The source of the design often impacts the firmware implementation approach (i.e. if/how Model Based Systems Engineering MBSE is being done).

                                        • C charlieg

                                          Lurk
                                          #32

                                          The real problem with OO programming, and possibly design, is that everyone knows the parts, but most don't put them together well. OO has worked well for years, and it is visible at the low level.

                                          Consider the storage device. It can be a floppy (remember those?), a hard drive, an SSD, a write-once CD/DVD, online storage, and so many more. But the block storage protocol is applied to all of them. The driver converts the hardware into an object that responds to the same inputs no matter what is hidden behind the interface. The device is the object. The functionality is abstracted and hidden. The behavior is locked behind the wall. And new types of devices are added invisibly to the higher levels of code by following the interface rules. Enhancements to the rules are added all the time by extending the interface for things that were not previously considered.

                                          The trick is in creating the specific, but only working with the generic. The failure is in creating specific necessary behavior at the derived level that cannot be used at the generic level. Take the hierarchy Animal -> Quadruped -> Horse. Horse whinnies and gallops, but to implement them as Horse features means they must be addressed as features of a horse. Instead, Animal Speaks(Friendly | Loudly | Fearfully) and Moves(Slowly | Quickly). Now we can say Animal->Speak(Friendly), Animal->Move(Quickly). We can add Dog and Cat and implement the interface described, then create the item, put it in an Animal, and use it without changing the code at all.

                                          The biggest trick of all is to construct the Animal (or derived type) with as much of the descriptive information as possible, then use it with as little information as possible. Compare to the original storage idea: File Create and Open take all kinds of descriptive information, but the actions on the file take only the necessary variations - File->Read(howMuch, toWhere), File->Seek(toPosition, fromWhere).

                                          It isn't really the tenets of OO that are in question, it is the organization. Put them together and use them well, and they provide an easier path to expansion, adaptation and improvements. Print is a great example of polymorphism: Print(integer), Print(string), Print(float), Print(format, arg1, arg2, arg3, ...), Print(Animal). OO is more than a hammer. It is a whole toolbox that can be used to build better tools and expandable structures.
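
                                          Roughly, in code (C# here to match the earlier sample; the Speak/Move arguments are the ones named above, everything else is illustrative):

                                          using System;
                                          using System.Collections.Generic;

                                          public enum Tone { Friendly, Loudly, Fearfully }
                                          public enum Pace { Slowly, Quickly }

                                          // Construct with all the descriptive detail, then use it through the
                                          // smallest possible generic surface - same idea as Create/Open vs Read/Seek.
                                          public abstract class Animal
                                          {
                                              protected Animal(string name) { Name = name; }
                                              public string Name { get; }
                                              public abstract void Speak(Tone tone);
                                              public abstract void Move(Pace pace);
                                          }

                                          public class Horse : Animal
                                          {
                                              public Horse(string name) : base(name) { }
                                              public override void Speak(Tone tone) => Console.WriteLine($"{Name} whinnies ({tone})");
                                              public override void Move(Pace pace)  => Console.WriteLine($"{Name} gallops ({pace})");
                                          }

                                          public class Dog : Animal
                                          {
                                              public Dog(string name) : base(name) { }
                                              public override void Speak(Tone tone) => Console.WriteLine($"{Name} barks ({tone})");
                                              public override void Move(Pace pace)  => Console.WriteLine($"{Name} runs ({pace})");
                                          }

                                          public static class Program
                                          {
                                              // Print as overloads - the compile-time flavor of polymorphism.
                                              static void Print(int value)    => Console.WriteLine($"int: {value}");
                                              static void Print(string value) => Console.WriteLine($"string: {value}");
                                              static void Print(Animal value) => value.Speak(Tone.Friendly);

                                              public static void Main()
                                              {
                                                  // New animals drop in without changing this code at all.
                                                  var animals = new List<Animal> { new Horse("Ed"), new Dog("Rex") };
                                                  foreach (var a in animals)
                                                  {
                                                      a.Speak(Tone.Friendly);
                                                      a.Move(Pace.Quickly);
                                                  }

                                                  Print(42);
                                                  Print("hello");
                                                  Print(new Horse("Silver"));
                                              }
                                          }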
