My first rant in a long time...

• Euhemerus wrote:

Don't get me wrong, I'm in no way knocking OOP or the people that use it. My point was that OOP isn't the panacea to productive programming that advocates would have you believe. Below is an article that I sent to my lecturer when I did OOP at college. I must admit, I have to agree with the article's author.

By Richard Mansfield, September 2005. A Four J's White Paper.

Computer programming today is in serious difficulty. It is controlled by what amounts to a quasi-religious cult--Object Oriented Programming (OOP). As a result, productivity is too often the last consideration when programmers are hired to help a business computerize its operations. There's no evidence that the OOP approach is efficient for most programming jobs. Indeed I know of no serious study comparing traditional, procedure-oriented programming with OOP. But there's plenty of anecdotal evidence that OOP retards programming efforts. Guarantee confidentiality and programmers will usually tell you that OOP often just makes their job harder.

Excuses for OOP failures abound in the workplace: we are "still working on it"; "our databases haven't yet been reorganized to conform to OOP structural requirements"; "our best OOP guy left a couple of years ago"; "you can't just read about OOP in a book, you need to work with it for quite a while before you can wrap your mind around it"; and so on and so on. If you question the wisdom of OOP, the response is some version of "you just don't get it." But what they're really saying is: "You just don't believe in it."

All too often a company hires OOP consultants to solve IT problems, but then that company's real problems begin. OOP gurus frequently insist on rewriting a company's existing software according to OOP principles. And once the OOP takeover starts, it can become difficult, sometimes impossible, to replace those OOP people. The company's programming and even its databases can become so distorted by OOP technology that switching to more efficient alternative programming approaches can be costly but necessary. Bringing in a new group of OOP experts isn't a solution. They are likely to find it hard to understand what's going on in the code written by the earlier OOP team. OOP encapsulation (hiding code) and sloppy taxonomic naming practices result in lots of incomprehensible source code. Does anyone benefit from this confusion and inefficiency?

When the Java language was first designed, a choice had to be made. Should they mimic the complicated, counter-intuitive punctuation, diction, and syntax us

Lost User replied (#86):

Interesting read. I suppose I agree, though I have personally found OO techniques very powerful to use and not that hard to implement. Perhaps I'm not a "pure" OOP developer, as I use both OO and procedural techniques. I was never "indoctrinated" as one who thought any one tool, let alone OO, was the only way to do anything. Yeah, learning the .Net way certainly involves a learning curve but, as I said before, I've discovered tremendous leverage with the new technology. No, I don't use polymorphism and inheritance much myself, so maybe I'm not truly OO. Even back in the days when I was using C++ I couldn't call myself truly a C++ developer. I was a procedural C developer who used C++ classes to simplify my code. Very interesting. I don't see anything about the article, though, that motivates me to change the methodology I'm using. I'm getting incredible results with the mixture of techniques I'm presently using. Interesting way to look at it, though. -Max :D

• Jason Christian wrote:

An example is simple - I wrote a WPF application. Now there is a desire from our customers to have pieces of it available from the web. Since I separated the UI layer from the business logic layer, it is easy to build some Silverlight pages to show the information through a webpage. If I hadn't separated the layers, I'd basically be rewriting the application. Just because your bone-headed architect went overboard doesn't mean the fundamental idea is flawed - lots of good ideas can suffer from poor implementation. And crappy programmers can screw up code in any genre... that doesn't mean we should all go back to coding in assembly. (And for the record, I spent a good chunk of my career programming in RPG (a procedural language) before switching to primarily OO languages, and I much prefer the OO languages.)
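To make that layering point concrete, here is a minimal sketch of the kind of separation being described - the names (SalesSummary, SalesReportService) are invented for illustration, not taken from the actual application. The business logic lives in a plain class library with no UI references, so a WPF window and a Silverlight page can both call it unchanged.

```csharp
using System.Collections.Generic;

// Business layer: a plain class library with no UI references.
// All names here are hypothetical; the point is the seam, not the domain.
public class SalesSummary
{
    public string Region { get; set; }
    public decimal Total { get; set; }
}

public class SalesReportService
{
    // Pure logic: no WPF, Silverlight, or HTML types leak in here,
    // so any UI technology can consume the result.
    public SalesSummary Summarize(IEnumerable<KeyValuePair<string, decimal>> sales, string region)
    {
        decimal total = 0;
        foreach (var sale in sales)
            if (sale.Key == region)
                total += sale.Value;
        return new SalesSummary { Region = region, Total = total };
    }
}
```

The WPF code-behind and the Silverlight page each do nothing but bind a SalesSummary to their own controls; the service itself never changes.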

Lost User replied (#87):

Yeah. I wrote assembly for years, then C/C++, then VB (3-6) for a long time. I wouldn't want to go back. Now I'm mostly using C# for anything new I code. After getting used to the .Net Framework I wouldn't go back. I still have to use VB6, which is fine too. It's all coding (which I love to do), so it doesn't much matter to me. However, as I said, anything new I'm creating right now is in C# and I absolutely love working with it. With the .Net Framework and the OO abilities in the language I'm pulling off stunts that would have taken me 10 times the code in previous years. ... and I'm NOT one of those kids coming out of the universities today; I've been at this for 35+ years. I've programmed in most of the major methodologies and I find that using a mix of procedural techniques with objects works best for me. -Max :D

• Yortw wrote:

Hi, I work on a point of sale system which is heavily OOP-based, and the architecture has helped many times: it allows us to customise the system for different retailers/markets without changing things for other customers, lets us plug in new 'device drivers' for POS peripherals, cuts down on development time for new sale and payment types, and has reduced repeated code, making bug fixes easier. Having said that, OOP can, like anything else, be abused. As our project has grown over time the object model has become less clean and more complex, making it harder to learn in the first place. Depending on what you're doing, some things are still fairly simple and anyone who is familiar with basic OOP principles can achieve those things without a huge learning curve, but other things are definitely tricky. If I designed the whole thing from scratch there are things I'd do differently to try and simplify, but then that's always the case. My biggest complaint is actually the event-driven code execution you mentioned earlier (and said was fine). DOS apps I wrote 10+ years ago still run fine and I never receive support calls (except to order new rolls of receipt paper for customers in Samoa; apparently it's cheaper to buy them here in NZ and ship them to the islands). I can't say that for any Windows/event-based application I've written. I suspect that you could still write good procedural code in an OOP manner too, but event-driven stuff, while simple in theory and not always totally evil, always seems to make things harder, or at least less robust, on any major real-world system in my opinion. Sadly, we all seem to be doing event-driven programming one way or another, even if we've covered it over with OOP. So let's all go back to writing DOS applications in a procedural manner, but with classes and inheritance :laugh: Of course, that's just my 2 cents' worth...
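For illustration, a minimal sketch of the plug-in seam Yortw describes - every name here (IReceiptPrinter, EpsonSerialPrinter, NullPrinter, PrinterRegistry, the driver keys) is invented, not from the actual POS system. Each peripheral driver implements a small interface, so wiring up hardware for one retailer never touches the code paths other customers run.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical plug-in seam for POS peripherals: each device driver
// implements one small interface, configured per retailer/market.
public interface IReceiptPrinter
{
    void Print(string receiptText);
}

public class EpsonSerialPrinter : IReceiptPrinter
{
    public void Print(string receiptText)
    {
        Console.WriteLine("[serial] " + receiptText); // stand-in for real device I/O
    }
}

public class NullPrinter : IReceiptPrinter   // for markets with no printer fitted
{
    public void Print(string receiptText) { }
}

public static class PrinterRegistry
{
    static readonly Dictionary<string, Func<IReceiptPrinter>> factories =
        new Dictionary<string, Func<IReceiptPrinter>>
        {
            { "epson-serial", () => new EpsonSerialPrinter() },
            { "none",         () => new NullPrinter() },
        };

    // The sale code only ever sees IReceiptPrinter; adding a new driver
    // means adding one entry here, not editing existing customers' paths.
    public static IReceiptPrinter Create(string configuredDriver)
    {
        return factories[configuredDriver]();
    }
}
```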

Lost User replied (#88):

        Yortw wrote:

Having said that, OOP can, like anything else, be abused. As our project has grown over time the object model has become less clean and more complex, making it harder to learn in the first place. Depending on what you're doing, some things are still fairly simple and anyone who is familiar with basic OOP principles can achieve those things without a huge learning curve, but other things are definitely tricky. If I designed the whole thing from scratch there are things I'd do differently to try and simplify, but then that's always the case.

Exactly. I've worked in shops over my 35+ years where the same kinds of problems were prevalent regardless of the programming paradigm. To just classify one method as better or worse than another really misses the point. You can write good or crap code in anything from assembly to C#. Any language will paint you into a corner if you're not clear on the DESIGN. Blaming the tools for the problem is foolish. OK, so someone doesn't like OOP? Fine... write procedural code in .Net. Write big libraries of procedural code to call. Yeah, you'll still run into some OO requirements, but you can keep it mostly procedural. -Max :D
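As a sketch of what "mostly procedural" can look like in .NET, per Max's suggestion - free-standing routines grouped in static classes, no inheritance or polymorphism. The payroll example is purely illustrative:

```csharp
using System;

// Procedural style in C#: no inheritance, no polymorphism, just a
// library of routines grouped in a static class.
public static class PayrollRoutines
{
    public static decimal GrossPay(decimal hourlyRate, decimal hours)
    {
        decimal overtime = Math.Max(0, hours - 40);
        return hourlyRate * (hours - overtime) + hourlyRate * 1.5m * overtime;
    }
}

public static class Program
{
    public static void Main()
    {
        // Called exactly like a procedure in C or VB6.
        Console.WriteLine(PayrollRoutines.GrossPay(20m, 45m)); // prints 950.0
    }
}
```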

• Misha MCSS wrote:

This is very true in the industry. I have 20 years development experience, and my OO experience started with C++, continued with Delphi, and now includes C#. IMHO, almost all the modern large projects I have worked on (using C#) suffer ridiculous over-layering and over-abstraction, to the point where absolutely nobody outside the lead designer has any idea how it all works. This seems to be so common that I can only imagine it is being taught at universities and filtering through to those in the field. The main issue is that many lead developers seem not to use their brains to determine whether a particular idea is actually useful to the project, instead implementing, almost by rote, what they already know. It seems that many, if not the majority, of software developers these days do not seem to know how to think. Thinking takes time, and you cannot churn out code while you are thinking. I find it strange to watch some developers hitting their keyboard anywhere up to 90% of the time they are sitting at their desk. For me this is rarely above 10%. My best work is done walking the dog, taking a shower, etc., when I cannot hit the keyboard and can let my mind wander and come up with different ideas about tackling existing problems, unconstrained by having to code them straight away.

Lost User replied (#89):

          Misha (CSI) wrote:

It seems that many, if not the majority, of software developers these days do not seem to know how to think. Thinking takes time, and you cannot churn out code while you are thinking. I find it strange to watch some developers hitting their keyboard anywhere up to 90% of the time they are sitting at their desk. For me this is rarely above 10%. My best work is done walking the dog, taking a shower, etc., when I cannot hit the keyboard and can let my mind wander and come up with different ideas about tackling existing problems, unconstrained by having to code them straight away.

I've come up with some of my best code fragments while in the shower, with my machine in the other room! (No, I don't code in the bathroom!) -Max :D

• Lost User wrote:

Looks like it will be an interesting read... I'll work my way through it tonight. I follow your point about it becoming a "religion". Well, yeah, people can get religious about *anything*. Ever got into an argument about text editors? Ever followed one of those Linux vs. Windows arguments? Talk about religious fervor (particularly the Linux guys). I agree that people get to a point where they say "this [meaning OOP] is the ONLY WAY to make this work" when that may not be the case. The original rant, though, seemed to be "religious" in the other direction; i.e. "I don't want no LAYERS and anything that's LAYERED is EVIL". Heh... Programming with OOP techniques is, to me, like anything else. If there's a tool there that I can use to make my code more operational, I'll check it out. So, on that front you wouldn't call me a "purely" OOP developer - I'm just a developer that uses certain features of OOP where they make sense, just like I'll use a socket wrench instead of my bare hands when it makes sense. There are also religionists that think programming for the Web (i.e. Web browsers) is the ONLY thing to do. I don't believe that either. I prefer a Windows client any day of the week. Connected across the net? Sure thing. Running in a browser? Naah. Make sense? ;-) -Max :D

Euhemerus replied (#90):

            Max Peck wrote:

            Programming with OOP techniques is, to me, like anything else. If there's a tool there that I can use to make my code more operational I'll check it out. So, on that front you wouldn't call me a "purely" OOP developer - I'm just a developer that uses certain features of OOP where they make sense just like I'll use a socket wrench instead of my bare hands when it makes sense.

I'm in 100% agreement with you. The right tool for the right job - that's where the skill lies: knowing which is the right tool.

            Max Peck wrote:

I follow your point about it becoming a "religion". Well, yeah, people can get religious about *anything*. Ever got into an argument about text editors? Ever followed one of those Linux vs. Windows arguments? Talk about religious fervor (particularly the Linux guys).

I've seen these and think, 'hey, each to his own, guys'.

            Nobody can get the truth out of me because even I don't know what it is. I keep myself in a constant state of utter confusion. - Col. Flagg

• Johnny J wrote:

OK, so here's the deal: I've been programming for many years now. In the beginning was sequential code execution. Then, with the introduction of Windows, we got event-driven code execution. So far so good. Then some day some jackass thought to himself: "This is too simple, let's complicate it further and invent Object Orientation. And just as a bonus, let's get everybody to divide their code into a huge number of layers so that no one will be able to get the full view of everything." And now, everybody's doing precisely that. And if that's not enough, all companies have their OWN understanding of how the object-oriented structure and the layering should look and feel. And the developers want everybody to know that they understand the latest technologies, so they throw in every new hype they can think of even when it's completely unnecessary.

I hate it. It's a major misunderstanding. I know I'm going to get downvoted for this, but that can't be helped. The last three projects I've worked on have been totally over-OO'd and layered to death. In all of the cases, I have not been a part of the initial development, but stepped in and developed on a running system. The current project I'm working on is by far the worst. The system architect one day proudly told me that the code was divided into SEVEN different layers. To be honest, I can't tell if he's right or not, because I can't be bothered to verify it. The codebase is growing, because not even the developers that did it in the first place can keep track of everything, so duplicate objects, properties and methods are constantly written because no one knows that they already exist. And of course, no one can clean it up either, because after some time no one knows what is dead code and what code actually serves a purpose.

And the system is slow as hell, and it's growing slower each day as more users use it and more data is added to the system. I'm not the least bit surprised, with each call having to pass back and forth through seven layers, some of them connected by WCF (for scalability, hah - that's the biggest joke!), and the fact that each time you want to know something, you fill a list of objects with perhaps 30 properties (and child objects) when you're really only interested in one or two. And that's just a couple of examples... In all of my latest projects, the code has been layered in at least 3 layers - and for what bleedin' use? I have a task for you: Can anyone give me ONE example where layered code has proved to be a major advantage?

CodeHawkz replied (#91):

Hey there, It's high time someone came out with that. I too agree with you, but only to a point. Let me elaborate :)

These days, the words "layering" and "decoupling" are used to show that you know it all rather than to get out of programming problems. The project I am working on right now uses so many abstractions and layers that when I asked "why is this layering done here? For what purpose?", my architect fell into an awkward silence :sigh: in front of everyone and said "it's how this framework is". :laugh: I had to drop my pen to hide my disappointment. I mean, these design principles were developed for good reasons, to solve real problems. But you should not forget that they should only be used where necessary. There is no such rule as "use it everywhere you can".

I recently had to write a small program for a friend to help him map out his speed and timing over different sections. What I used was a very simple (you may even call it novice-level) 3-tier architecture, but I did not use a generic data layer; all I used was System.Data.SqlClient straight out for SQL Server. There was never going to be a database engine change. lol, mind you, even if there were, all I have to do is rewrite a few classes. But I did get to change the UI around and implement the ability to switch between "mph" and "kmph", which was possible with the architecture I followed.

So, long story short: OOP is something that helps you stay organized, and it is something that I love. It's just that some people are obsessed with these design principles and jam everything into them to show that they know it all. But in reality they do not, and the whole point of OOP is lost. Where is the damn maintainability? :sigh: One last point: when you are writing a program, just check what would most probably change and account for those things in the design. Not "what if an asteroid hits and disturbs my logger". RIP over-engineering freaks ;P Don't hate OOP, hate the idiots :D Cheers
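As a sketch of that deliberately thin data layer - System.Data.SqlClient straight against SQL Server, no generic provider abstraction - it might look like the following, where the connection string, table, and column names are all invented for illustration:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

// Deliberately thin data layer: SqlClient straight to SQL Server.
// Connection string, table, and column names are hypothetical.
public static class LapData
{
    const string ConnStr = "Server=.;Database=Laps;Integrated Security=true";

    public static List<double> LoadSectionSpeedsMph(int sessionId)
    {
        var speeds = new List<double>();
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "SELECT SpeedMph FROM SectionTimes WHERE SessionId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", sessionId);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    speeds.Add(reader.GetDouble(0));
        }
        return speeds;
    }
}

// The mph/kmph toggle lives in the layer above, so changing display
// units never touches the SQL.
public static class Units
{
    public static double MphToKmph(double mph) { return mph * 1.609344; }
}
```

If the database engine ever did change, only the one data class would need rewriting - exactly the trade-off the post accepts.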

• Johnny J wrote:

OK, so here's the deal: I've been programming for many years now. [...] I have a task for you: Can anyone give me ONE example where layered code has proved to be a major advantage?

Johnny J replied (#92):

A big thanks to everybody who has taken the time and interest to reply to my rant, no matter if the reason was to agree with me or to "educate" me... :) I'm very happy that so many of you have participated in the discussion, and to be honest, I was very pleased to learn that I'm not alone. A lot more people than I would have thought seem to agree with me, completely or partly. I'm also very pleased that all of the replies have been made in a clean and serious way - without unpleasantness and personal attacks - no matter what side you're on. I've read all of your responses, but only had the time to respond myself to a few of them. A lot of you make some good points (and some of you talk utter nonsense ;P ). Nothing is all black or white, and there are pros and cons to everything... :) The only thing I'm a bit disappointed about is that none of you picked up on my P Off remark[^] (around 3:45) :sigh:

                1f y0u c4n r34d 7h15 y0u r3411y n33d 70 g37 14!d Gotta run; I've got people to do and things to see... Don't tell my folks I'm a computer programmer - They think I'm a piano player in a cat house... Da mihi sis crustum Etruscum cum omnibus in eo!

• David Knechtges wrote:

I can - I have an application that I layered using object orientation. I have been able to reuse two of the objects in 4 other projects unchanged, and another 3 objects in two other projects, also unchanged. It really comes down to architecting the solution. I find that too many times people just jump into the coding without really thinking about the problem they are trying to solve. Then they just graft change after change on top of each other until you get the mess that you are dealing with now. Like anything else in software development, object orientation is another tool in the toolbox. At times it is the right tool to use, in others it is not. It is the RESPONSIBILITY of the developer to choose the right tool for the job, just like a mechanic working on a car, or a doctor working on a person.

swampwiz replied (#93):

I was doing a gig (as a contractor) to develop a Windows application that would write test scripts for an embedded system. There were many different configurations for the scripts, so I immediately sensed the need to architect a very extendable application. So rather than jump right in and get something that worked for one script, I set up an elaborate system that would be able to handle anything. A few months later, as I was putting the finishing touches on the system, my manager called me in and wondered why I didn't have anything completed. I explained, but he said that was not a good excuse, so he canned me on the spot. :mad: So much for implementing good design... X|

• Johnny J wrote:

OK, so here's the deal: I've been programming for many years now. [...] I have a task for you: Can anyone give me ONE example where layered code has proved to be a major advantage?

mrchief_2000 replied (#94):

                    Johnny J. wrote:

The system architect one day proudly told me that the code was divided into SEVEN different layers.

It's called 7 layers of job security. If you're not programming to build or contribute to your own project or an open source project, then 'programming' becomes a tool for job security. These may not be the smartest programmers by the book, but they are smart enough to rake the moolah in. That's simply the reality of our world, and it's hard to digest, I know. I call it 'onion' programming personally, but I have seen people become chief architects thanks to their invention of these layers.

• Misha MCSS wrote:

This is very true in the industry. I have 20 years development experience, and my OO experience started with C++, continued with Delphi, and now includes C#. IMHO, almost all the modern large projects I have worked on (using C#) suffer ridiculous over-layering and over-abstraction [...]

jschell replied (#95):

                      Misha (CSI) wrote:

This is very true in the industry. I have 20 years development experience, and my OO experience started with C++, continued with Delphi, and now includes C#. IMHO, almost all the modern large projects I have worked on (using C#) suffer ridiculous over-layering and over-abstraction,

Technology is not the cause of architecture/design failures. Nor is technology the cause of failures driven by market needs (real or imagined) that demand much shorter project delivery cycles. However, technology might in fact be part of the reason that shorter cycles are possible at all (regardless of the success rate).

• Johnny J wrote:

OK, so here's the deal: I've been programming for many years now. [...] I have a task for you: Can anyone give me ONE example where layered code has proved to be a major advantage?

Spectre_001 replied (#96):

Been there - I feel your pain. That does not, however, mean that OOP is necessarily a bad practice. There are, however, two OOP principles that seem to go by the wayside in a lot of projects, and they tackle many of the inherent problems. They came from the DDE/COM eras (perhaps even earlier):

1. Immutability of interfaces - Once an object interface is published, it should remain immutable. That is to say, if I write an object (a) and you use it in your code, and I then need to add functionality to that object, I can't just change the methods already published in the interface - that would break your code. I can do either of two things: add new methods to the interface (leaving the existing ones alone), or create a new object (b) that aggregates the old one (a) and extends its functionality (see the sketch after this post). This approach allows older code to continue to function as it always has while providing new functionality for newer code to utilize.

2. Documentation - If there is a place for developers to look to see whether the functionality they need already exists, it helps prevent re-inventing the wheel, so to speak, every time someone new is writing code. In addition, for this to be truly useful, the developers must cultivate a culture of updating the documentation any time truly new functionality (objects) is added to the system. This refers not to user documentation, but to development documentation.

Application layering: In the days of n-tier architecture, layering was a means of reducing a seemingly insurmountably large problem into several smaller, more manageable pieces. It made sense - and still does. However, for it to truly solve the complexity problem, the layering has to have some thought behind it. The guiding principle here should be separation of concerns. UI, business processing, and data are the major concerns addressed in most systems. Anything beyond that is usually overkill on the layers (most everything, with a little thought, will fit nicely into one of the three). UI and processing can be in the same layer, but in a large-scale enterprise application it usually makes sense, from an extensibility/maintainability perspective, to keep them separate. Most modern development frameworks/patterns (Microsoft's MVC, Spring.NET, etc.) operate using this principle.
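A minimal sketch of the immutability rule in point 1, with hypothetical names (IPriceCalculator and friends are invented): the published interface (a) never changes; new capability arrives either as an additional interface, COM-style, or as a new object (b) that aggregates the old one.

```csharp
// Published once, never changed: callers compiled against this keep working.
public interface IPriceCalculator          // the original object's interface (a)
{
    decimal Price(string sku);
}

// Option 1: new capability goes in a *new* interface; the old one is untouched.
public interface IPriceCalculator2 : IPriceCalculator
{
    decimal PriceWithTax(string sku, decimal taxRate);
}

// Option 2: a new object (b) aggregates the old one (a) and extends it.
public class TaxAwareCalculator : IPriceCalculator2
{
    private readonly IPriceCalculator inner;   // aggregation of (a)

    public TaxAwareCalculator(IPriceCalculator inner) { this.inner = inner; }

    public decimal Price(string sku)
    {
        return inner.Price(sku);               // delegate; old behaviour intact
    }

    public decimal PriceWithTax(string sku, decimal taxRate)
    {
        return inner.Price(sku) * (1 + taxRate);
    }
}
```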

                        Kevin Rucker, Application Programmer QSS Group, Inc. United States Coast Guard OSC Kevin.D.Rucker@uscg.mil "Programming is an art form that fights back." -- Chad Hower

• Johnny J wrote:

OK, so here's the deal: I've been programming for many years now. [...] I have a task for you: Can anyone give me ONE example where layered code has proved to be a major advantage?

ohmyletmein replied (#97):

I'm with you! Well, partly. My father was a developer with his own business (we employed 300+ people), and I began there writing Clipper code in the '90s. I have since moved through the world of Delphi, VB, .NET and the like. I have worked with everything from traditional-style, sequential, non-event-driven code to COM+, and so on. The project I currently work on is OO-based. My preference has become somewhere in the middle. I appreciate OO, but I also recognise that the end result is a system that does not perform well with large sets of data.

Maintenance can be a nightmare. One example is the need for a new method parameter. By the time you have updated your interface and moved through all your inheritance chains and service layers, you have modified (in my case) about 20 files. You end up so far removed from where you started that it's painful. In the end, all that was needed was a True or False to be passed to a method. There is the hack way of doing things and there is a purist way, but somewhere in between is "commercial reality".

As for n-tier, I do believe there is benefit in separating the layers. One of those benefits is security related. If your transactional layer is segregated, for example, it can exist on a server other than your web server and separate from your database server. This allows operations to sit behind firewalls, with access controlled via authentication and restrictions on IP address. You may get to our web server, but making it to the database is quite unlikely. You will find no connection strings or the like at the web level to help you.

My gripe, because I feel like adding to yours :) is reflection. Used well, it is powerful and very useful. However, when you use it to start invoking code late-bound, you again create a maintenance nightmare. To give an example: I came across a method that iterates through some XML nodes. The end result? Nothing. Why? Because it doesn't produce one. I think it was there just to keep the CPU warm. Now, dare I remove it? NO! Sure, I can remove the code that does nothing, but I cannot remove the method. It may or may not be called by reflection, and if it is, how can you tell? Your conscientious developers who take the time to clean up as they go and remove old code now won't do it. Who wants to be the one who does a reference search, finds that a method is not referenced, removes it, only to find that everything falls over in six months' time when you run one of those bi-yearly financial functions? After sitti
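As an illustration of that reflection gripe (all names here are hypothetical): a method invoked late-bound by name is invisible to a compile-time reference search, which is exactly why nobody dares delete it.

```csharp
using System;
using System.Reflection;

public class YearEndJobs
{
    // A reference search finds no callers of this method...
    public void RebuildLedger()
    {
        Console.WriteLine("bi-yearly financial run");
    }
}

public static class JobRunner
{
    // ...because it is invoked by name, perhaps read from config or the
    // database, so deleting "unreferenced" methods can break this at runtime.
    public static void Run(string methodName)
    {
        var jobs = new YearEndJobs();
        MethodInfo method = typeof(YearEndJobs).GetMethod(methodName);
        method.Invoke(jobs, null);   // throws if the method was removed
    }
}
```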
