Code Project
My first rant in a long time...

The Lounge
Tags: question, csharp, wcf, oop, tutorial
97 Posts, 64 Posters
  • J Johnny J

    OK, so here's the deal: I've been programming for many years now. In the beginning there was sequential code execution. Then, with the introduction of Windows, we got event-driven code execution. So far so good. Then one day some jackass thought to himself: "This is too simple, let's complicate it further and invent Object Orientation. And just as a bonus, let's get everybody to divide their code into a huge number of layers so that no one will be able to get the full view of everything." And now everybody's doing precisely that. And if that's not enough, every company has its OWN understanding of how the object-oriented structure and the layering should look and feel. And the developers want everybody to know that they understand the latest technologies, so they throw in every new hype they can think of, even when it's completely unnecessary.

    I hate it. It's a major misunderstanding. I know I'm going to get downvoted for this, but that can't be helped. The last three projects I've worked on have been totally over-OO'd and layered to death. In all of those cases, I was not part of the initial development, but stepped in and developed on a running system. The current project I'm working on is by far the worst. The system architect one day proudly told me that the code was divided into SEVEN different layers. To be honest, I can't tell if he's right or not, because I can't be bothered to verify it. The codebase keeps growing, because not even the developers who wrote it in the first place can keep track of everything, so duplicate objects, properties and methods are constantly written because no one knows they already exist. And of course, no one can clean it up either, because after some time no one knows what is dead code and what code actually serves a purpose. And the system is slow as hell, and it grows slower each day as more users use it and more data is added to the system.

    I'm not the least bit surprised, with each call having to pass back and forth through seven layers, some of them connected by WCF (for scalability, hah - that's the biggest joke!), and the fact that each time you want to know something, you fill a list of objects with perhaps 30 properties (and child objects) when you're really only interested in one or two. And that's just a couple of examples... In all of my latest projects, the code has been layered in at least 3 layers - and for what bleedin' use? I have a task for you: can anyone give me ONE example where layered code has proved to be a major advantage?
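The over-fetching complaint above (filling 30-property objects when only one or two fields are wanted) can be made concrete in a few lines. This is an illustrative sketch with made-up names, written in Python for brevity, not code from the system described:

```python
from dataclasses import dataclass

# Hypothetical "30-property" entity; only three fields shown for brevity.
@dataclass
class CustomerEntity:
    id: int
    name: str
    email: str
    # ... imagine ~27 more fields and child objects here

# A narrow projection carrying only the fields the caller actually needs.
@dataclass
class CustomerName:
    id: int
    name: str

def get_customer_names(entities):
    """Project full entities down to the one or two fields of interest."""
    return [CustomerName(e.id, e.name) for e in entities]

customers = [CustomerEntity(1, "Ada", "ada@example.com"),
             CustomerEntity(2, "Grace", "grace@example.com")]
print([c.name for c in get_customer_names(customers)])  # ['Ada', 'Grace']
```

The projection costs one small extra type, but the layers above it stop hauling around fields nobody asked for.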

    H Offline
    Hiro_Protagonist_
    #61

    Hi, like others have already said, a system without abstraction is not maintainable. There are a lot of good examples out there, and most of them work pretty well. I guess if you really wanted to know of any, you would be pretty good at finding some. I have another question: what do you think would be the right approach? Functional? Procedural? Dynamic? Isn't it the task that has to be accomplished that defines which language and paradigm are best to use? In your case, I guess you really should have a look at some pretty well working (open source) projects. What about NHibernate? Eclipse? Mono? Is that all crap because of the architect on your team who wants to win the prize for the most layers in a project worldwide? :) Holger

    • J Johnny J

      (quotes Johnny J's original rant above)

      G Offline
      Gary Huck
      #62

      Way to go, man! I feel your pain. As I read your rant I figured the follow-up comments would argue against it; I was pleasantly surprised to find that was not the case. I made myself learn OOP for the obvious reasons. I wrote C code for 15 years, and when we needed OOP-like stuff, we wrote it.

      • J Johnny J

        (quotes Johnny J's original rant above)

        S Offline
        SeattleC
        #63

        Object Oriented design is a tool. Like any tool, a micrometer say, it can be used well to do fine and delicate work, or you can turn it over and use it like a hammer, which has predictably bad results for both the tool and the project. An OO system architected into more than three layers is probably being used like a hammer by somebody who doesn't know what his tool is for. It's a poor workman who blames his tools, but it's not too unreasonable to blame the workman. One thing I see is that the number of people capable of good OO design, as a fraction of all working programmers, is declining. I propose that OO design was invented during a time when only the geekiest and most motivated became programmers, and average developer I.Q. was higher. Nowadays, with every retreaded poet and astronomer writing code, there is perhaps a case to be made that object oriented programming shouldn't be taught to people who are really only cut out for add/change/delete screens and login pages.

        • J Johnny J

          (quotes Johnny J's original rant above)

          M Offline
          Mike Ellison
          #64

          I like the rant, Johnny, particularly because it led to such great responses. Your pain here was worthy of a rant. I don't think I've ever had to work on a project designed in 7 layers. My mom used to make a great 7-layer bar, and my friend makes a killer 7-layer dip, but a 7-layer app sounds like a nightmare. There is great benefit, though, for those projects in which multiple presentations or supported devices are required (e.g. a web front end + a desktop app + a mobile app all having to work with the same underlying business logic) and for the developer who needs or wants to support multiple database engines. When one presentation or data layer can simply be swapped out for another as needed, it is so much easier to support multiple platform use cases (and I would argue easier to develop for them too). But to me there is no question: if one creates layers in a way that isn't smart in design, the resulting app and development work gain none of the advantages of layers and only magnify the disadvantages.
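The swap-a-layer benefit described above can be sketched as a small contract between layers. Illustrative Python; the names `OrderStore` and `BillingService` and the 25% tax rate are invented for the example:

```python
from abc import ABC, abstractmethod

class OrderStore(ABC):
    """Data-layer contract; any storage engine can implement it."""
    @abstractmethod
    def total_for(self, customer_id: int) -> float: ...

class InMemoryStore(OrderStore):
    """One interchangeable implementation (a database-backed one would be another)."""
    def __init__(self, orders):
        self._orders = orders  # list of (customer_id, amount) pairs
    def total_for(self, customer_id):
        return sum(amt for cid, amt in self._orders if cid == customer_id)

class BillingService:
    """Business layer: depends only on the OrderStore contract."""
    def __init__(self, store: OrderStore):
        self._store = store
    def invoice_total(self, customer_id):
        # 25% tax is purely illustrative.
        return round(self._store.total_for(customer_id) * 1.25, 2)

svc = BillingService(InMemoryStore([(1, 100.0), (1, 50.0), (2, 10.0)]))
print(svc.invoice_total(1))  # 187.5
```

A web, desktop, or mobile front end can each hold the same `BillingService`; replacing `InMemoryStore` with another store touches no business code.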

          www.MishaInTheCloud.com

          • S Stuart Rubin

            First thing: Object Oriented and Layered are not the same thing. Often they are both used, and often they are not. The science and art of software engineering is about managing complexity. Of course, if you have a very simple system, you will probably want a very simple software design - one or few layers, no object-oriented constructs. But as your system grows with just the slightest complexity, be it hardware, timing, protocols, etc., you need some sort of layering or you'll have a huge mess of code. Layering and object-oriented methods MANAGE COMPLEXITY. That's all they do. One example of layering not discussed in the thread is the ability to divide the work amongst a team by areas of expertise. Without layering, everyone would have to know everything. Does the USB protocol expert need to know about the application? Maybe not. Does the UI guy need to know about TCP/IP? Does the battery charger guy need to know about the application? Layering also makes testing much more thorough: you can test one layer independently from another. Of course, without layers, you could never integrate third-party software libraries, stacks, etc. Object Oriented also thrives in this department: take a nice library from someone and subclass their objects to your needs. So, I do believe that your software and the people you work with suck. No argument here. But in my experience, thoughtful up-front design of layers of abstraction is the best, quickest, most efficient, and safest (I'm a medical device guy) route. Stuart
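The point about testing one layer independently from another can be sketched with a fake lower layer. Hypothetical Python; the two-byte framing protocol and the names are invented for the example:

```python
class FakeTransport:
    """Stand-in for the real network layer, so the layer above is testable alone."""
    def __init__(self):
        self.sent = []
    def send(self, data: bytes):
        self.sent.append(data)  # record instead of touching a socket

class MessageLayer:
    """Frames messages; knows nothing about how the bytes reach the wire."""
    def __init__(self, transport):
        self._transport = transport
    def send_message(self, text: str):
        payload = text.encode("utf-8")
        # Prefix each message with a 2-byte big-endian length header.
        self._transport.send(len(payload).to_bytes(2, "big") + payload)

t = FakeTransport()
MessageLayer(t).send_message("hi")
print(t.sent[0])  # b'\x00\x02hi'
```

The `MessageLayer` test never opens a socket: the fake transport simply records what would have gone on the wire, which is exactly the independence between layers being argued for.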

            J Offline
            Johnny J
            #65

            Stuart Rubin wrote:

            Object Oriented and Layered are not the same thing

            I've never said it was...

            1f y0u c4n r34d 7h15 y0u r3411y n33d 70 g37 14!d Gotta run; I've got people to do and things to see... Don't tell my folks I'm a computer programmer - They think I'm a piano player in a cat house... Da mihi sis crustum Etruscum cum omnibus in eo!

            • M Mike Ellison

              (quotes Mike Ellison's reply above)

              J Offline
              Johnny J
              #66

              Thanks - I'm very pleased with the responses myself - both the pros and cons...


              • G Gary Huck

                (quotes Gary Huck's reply above)

                J Offline
                Johnny J
                #67

                GamleKoder wrote:

                As I read your rant I figured the follow-up comments would argue against it; I was pleasantly surprised to find that was not the case.

                Me too... :)


                • J Johnny J

                  (quotes Johnny J's original rant above)

                  P Offline
                  Paul Gehrman
                  #68

                  You are absolutely right. The OO/layering religion has been a MASSIVE failure. I've seen so many systems fail under the weight of this paradigm. It seems that developers like to prove how smart they are by "architecting" systems like this, but then they leave when the $%%&** hits the fan and it's impossible to work with the monster they've created.

                  • J Johnny J

                    (quotes Johnny J's original rant above)

                    M Offline
                    Michael Waters
                    #69

                    Try taking a monolithic application that no one understands, ripping it apart into its constituent atoms, and rebuilding it with a layered architecture. Now THAT will teach you why layering is a good paradigm. FWIW, layering is also a good idea when you want to account for future technologies/growth/expansion, so that one day you can replace your C++ GUI front-end desktop app with a Silverlight GUI front-end web app without also needing to change the underlying "business layer" code. OOP may encourage and provide sound approaches for good software design, but it neither guarantees nor requires it.

                    • J Johnny J

                      (quotes Johnny J's original rant above)

                      L Offline
                      Lost User
                      #70

                      Johnny J. wrote:

                      I have a task for you: Can anyone give me ONE example where layered code has proved to be a major advantage? I'm not talking about hypotheticals like the "Oh, what if we have to change to another type of database?" crap, because that is never going to happen in 99.9% of systems. Give me an example where someone has actually leaned back at a meeting and calmly said: "THANK GOD that we broke all the foobar code out into a separate foobar layer!" I challenge you - you can't.

                      Sure I can. EASILY. I'm building a time and attendance system and needed code to handle the badge (magstripe) reader, so I built an object to represent the reader. Now in my code I say: 1) create a reader at port COM6, 2) read card. The object knows how to do everything associated with reading the badge, which includes resetting the device, making sure the COM port is open to it, setting the LED to indicate it's in read mode, taking the swipe and returning 3 tracks of card data to the 3 variables I presented to it.

                      The code associated with those two actions took me a couple of weeks to write and debug. It now exists in a "layer" that I can call with two lines of code. I don't have to worry whether it will work or not; I can create the object anywhere in my application and it will work exactly the same way everywhere I call it. If a bug turns up in it somewhere, I fix it in ONE place, and all callers get the fix. The ultimate in modular design. To my application the MSR605 IS an object, and the application doesn't need to know anything about it - it just tells it what to do and it does it. Back 35 years ago we'd have done the same thing - with a library.

                      I "fought" the OO paradigm until I had a chance to really work with it. Once I did a few of these things, I wondered why I had fought it so hard - the abstraction really raises the bar on what you can accomplish. Without such "layering" you would be writing all the code to handle the device manually each time. I don't think you are advocating going back there.

                      What I think you're having a problem with is not the layering - you're having a people issue! I've worked in a lot of shops where the problems were just like yours. It's not an underlying problem with the technology - it's a lack of organization in the people using it! In the case of my last gig, we had 7 or 8 different functions that performed the same time conversions. If we'd had a "librarian" keeping track of the central code base, maybe that wouldn't have happened.
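The two-call usage described above might look like this in outline. This is a hypothetical Python sketch with simulated device behaviour, not the poster's actual MSR605 code:

```python
class BadgeReader:
    """Hypothetical wrapper: every device detail hidden behind two calls."""

    def __init__(self, port: str):
        self.port = port
        self._reset()  # reset device, open the COM port, set the read LED

    def _reset(self):
        # Real code would talk to the serial port here; we only simulate it.
        self.ready = True

    def read_card(self):
        """Take one swipe and return the three magstripe tracks."""
        if not self.ready:
            raise RuntimeError("reader not initialised")
        return ("track1-data", "track2-data", "track3-data")  # simulated swipe

reader = BadgeReader("COM6")     # 1) create a reader at port COM6
t1, t2, t3 = reader.read_card()  # 2) read card
```

All the reset/port/LED handling lives behind the constructor, so every caller in the application gets the same two-line interface, and a bug fix inside the class reaches all of them at once.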

                      • J Johnny J

                        (quotes Johnny J's original rant above)

Jason Christian
#71

An example is simple: I wrote a WPF application. Now there is a desire from our customers to have pieces of it available from the web. Since I separated the UI layer from the business logic layer, it is easy to build some Silverlight pages to show the information through a webpage. If I hadn't separated the layers, I'd basically be re-writing the application. Just because your bone-headed architect went overboard doesn't mean the fundamental idea is flawed - lots of good ideas suffer from poor implementation, and crappy programmers can screw up code in any genre. That doesn't mean we should all go back to coding in assembly. (And for the record, I spent a good chunk of my career programming in RPG, a procedural language, before switching to primarily OO languages, and I much prefer the OO languages.)
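A minimal sketch of what that separation buys (in Java, with invented names - the post's actual WPF/Silverlight code is not shown here): the business rule lives in one class, and each front end is a thin adapter over it.

```java
// Business-logic layer: knows nothing about any UI toolkit.
class InvoiceCalculator {
    long totalWithTaxCents(long subtotalCents, int taxPercent) {
        return subtotalCents + subtotalCents * taxPercent / 100;
    }
}

// Desktop front end: formats for a rich client.
class DesktopView {
    String render(long subtotalCents) {
        return "Total: " + new InvoiceCalculator().totalWithTaxCents(subtotalCents, 10) + "c";
    }
}

// Web front end: reuses the same logic; only the presentation differs.
class WebView {
    String render(long subtotalCents) {
        return "<b>" + new InvoiceCalculator().totalWithTaxCents(subtotalCents, 10) + "</b>";
    }
}
```

Adding the second front end costs only the adapter; without the split, the tax rule would be duplicated (and drift) between the two.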

                        • L Lost User

                          Johnny J. wrote:

I have a task for you: Can anyone give me ONE example where layered code has proved to be a major advantage? I'm not talking about hypotheticals like "Oh, what if we have to change to another type of database?" crap, because that is never going to happen in 99.9% of systems. Give me an example where someone has actually leaned back at a meeting and calmly said: "THANK GOD that we broke all the foobar code out into a separate foobar layer!" I challenge you - you can't.

Sure I can. EASILY. I'm building a time and attendance system and needed code to handle the badge (magstripe) reader. I built an object to represent the reader. Now in my code I say:

1) Create a reader at port COM6
2) Read card

The object knows how to do everything associated with reading the badge, which includes resetting the device, making sure the COM port is open to it, setting the LED to indicate it's in read mode, taking the swipe and returning 3 tracks of card data to the 3 variables I presented to it. The code associated with those two actions took me a couple of weeks to write and debug. It now exists in a "layer" that I can call with two lines of code. I don't have to worry about whether it will work or not; I can create the object anywhere in my application and it will work exactly the same way everywhere I call it. If a bug turns up in it somewhere, I fix it in ONE place - and all callers get the fix. The ultimate in modular design. To my application the MSR605 IS an object, and the application doesn't need to know anything about it - it just tells it what to do and it does it. Back 35 years ago we'd have done the same thing - with a library.

I "fought" the OO paradigm until I had a chance to really work with it. Once I did a few of these things, I wondered why I fought it so hard - the abstraction really raises the bar on what you can accomplish. Without such "layering" you would be writing all the code to handle the device manually each time. I don't think you are advocating going back there.

What I think you're having a problem with is not the layering issue - you're having a people issue! I've worked in a lot of shops where the problems were just like yours. It's not an underlying problem with the technology - it's a lack of organization in the people using it! In the case of my last gig, we had 7 or 8 different functions that performed the same time conversions. If we'd had a "librarian" keeping track of the central code base, maybe that wouldn't have happened.
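The shape of that encapsulation can be sketched as follows (a hypothetical Java outline with invented names and a simulated swipe - no real MSR605 driver code appears in the post):

```java
// The object owns every device detail, so a caller needs only
// two lines: construct, then read.
class BadgeReader {
    private final String port;
    private boolean open;

    BadgeReader(String port) {   // "create a reader at port COM6"
        this.port = port;
        resetDevice();           // reset, open the COM port, set the LED:
        this.open = true;        // all hidden from the caller
    }

    private void resetDevice() {
        // Device-specific handshaking would live here in a real driver.
    }

    String[] readCard() {        // "read card": returns the 3 tracks
        if (!open) throw new IllegalStateException(port + " is not open");
        return new String[] { "track1", "track2", "track3" }; // simulated swipe
    }
}
```

A caller then writes only `BadgeReader reader = new BadgeReader("COM6");` followed by `String[] tracks = reader.readCard();` - the two lines the post describes.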

Euhemerus
#72

                          Max Peck wrote:

                          Ultimate in modular design

So would putting the same functionality in a dll file. ;P Everything that can be done in OOP can be done conventionally just as easily. How do you think operating systems were written before some bright spark came up with the idea of OOP?

                          Nobody can get the truth out of me because even I don't know what it is. I keep myself in a constant state of utter confusion. - Col. Flagg

                          • J Johnny J

[…] I have a task for you: Can anyone give me ONE example where layered code has proved to be a major advantage?

kevinskibbe
#73

I can provide what I think is a pretty common example. I work for an online retailer. We have a main retail site, a mobile site, two APIs, and a number of Windows services that perform the back-end processing that keeps the business chugging. Each of these apps, or clients, taps into a common service layer that in turn uses a common data layer. Layers in this case prevent a massive amount of duplicate code from having to be written and maintained for each app.
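The arrangement can be reduced to a toy sketch (class names invented for illustration, not taken from the retailer's code): one rule in the service layer, three clients calling it.

```java
// Hypothetical shared service layer: the discount rule lives in one place.
class PricingService {
    int discountedPrice(int priceCents) {
        return priceCents * 90 / 100;  // change it here, every client follows
    }
}

// Three "clients" of the service layer, standing in for the retail site,
// the mobile site, and a back-end Windows service.
class RetailSite { int quote(int p) { return new PricingService().discountedPrice(p); } }
class MobileSite { int quote(int p) { return new PricingService().discountedPrice(p); } }
class BackendJob { int quote(int p) { return new PricingService().discountedPrice(p); } }
```

Without the shared layer, the same rule would be copied into each client and each copy would need maintaining separately.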

                            • J Johnny J

[…] I have a task for you: Can anyone give me ONE example where layered code has proved to be a major advantage?

jschell
#74

                              Johnny J. wrote:

And now, everybody's doing precisely that. And if that's not enough, every company has its OWN understanding of how the object-oriented structure and the layering should look and feel. And the developers want everybody to know that they understand the latest technologies, so they throw in every new hype they can think of, even when it's completely unnecessary. I hate it. It's a major misunderstanding. I know I'm going to get downvoted for this, but that can't be helped.

Technology does not and will not solve process problems. Technology does not and will not fix bugs in the architecture or requirements. Technology will not solve design problems that are not specifically technological. Thus, for example, XML might solve a problem, but ONLY if the designer understands the architecture, the requirements, and both the advantages and disadvantages of XML.

                              Johnny J. wrote:

And the system is slow as hell, and it's growing slower each day as more users use it and more data is added to the system.

                              Again technology will not solve process problems.

                              Johnny J. wrote:

I have a task for you: Can anyone give me ONE example where layered code has proved to be a major advantage? I'm not talking about hypotheticals like "Oh, what if we have to change to another type of database?"

Real personal cases:

1. An application was originally developed to target MS SQL Server because the developers interpreted the known contract language (passed down via numerous management layers) to suggest that the database choice didn't matter. Then the customer, via direct communication, insisted that Oracle must be used. The database layer, part of the design, was changed. Absolutely nothing else changed in the application.

2. An application needed to support multiple (30+) interfaces to external service providers using varying sorts of IP protocols. The application was designed with a plugin layer. Plugins work.

3. An application needed to support a 3rd-party device with 3rd-party proprietary interface code. That code would crash (system exception) at odd times. An application-layer interface was added which allowed the 3rd-party code to run in another process managed by the first, so it could not crash the original app. (This is in fact the idiom I now always design for, in both C# and Java, for 3rd-party native code, regardless of its perceived stability.)
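Case 1 is the textbook use of a database layer, and can be sketched like this (a minimal Java illustration with invented type names; the real project's code is not shown in the post). The application codes against an interface, so the SQL Server/Oracle swap touches only the implementing class:

```java
// The contract the rest of the application depends on.
interface OrderStore {
    void save(String orderId);
    int count();
}

// One implementation per vendor; only these know any vendor specifics.
class SqlServerStore implements OrderStore {
    private int saved;
    public void save(String orderId) { saved++; /* T-SQL would go here */ }
    public int count() { return saved; }
}

class OracleStore implements OrderStore {
    private int saved;
    public void save(String orderId) { saved++; /* PL/SQL would go here */ }
    public int count() { return saved; }
}

// Application code: identical whichever store is plugged in.
class OrderProcessor {
    private final OrderStore store;
    OrderProcessor(OrderStore store) { this.store = store; }
    void process(String orderId) { store.save(orderId); }
}
```

Switching vendors is then a one-line change at the construction site (`new OracleStore()` instead of `new SqlServerStore()`), which is the "absolutely nothing else changed" outcome described above.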

                              • E Euhemerus


jschell
#75

                                Euhemerus wrote:

Everything that can be done in OOP can be done conventionally just as easily.

Wrong. I have explicitly written C code to mimic C++ functionality, and it most definitely was not as "easy" - just as writing assembly to do what C does would not be as easy.

                                • E Euhemerus


Lost User
#76

                                  Euhemerus wrote:

So would putting the same functionality in a dll file. Everything that can be done in OOP can be done conventionally just as easily. How do you think operating systems were written before some bright spark came up with the idea of OOP?

Hey, I agree with you conceptually; however, I've found that having "objects" provides an abstraction layer that makes it far easier to visualize your concepts. OO is something I avoided for a long time, but when I finally decided to give it a close look I realized that it could be a real weapon to cut problems down to size.

For example, in my system a "Punch" is an object that represents a moment in time when an employee punched in. The Punch can have many attributes besides the moment in time: it can have a department number, it can have a natural and a rounded time or date, etc. In my rules engine, operating on large groups of these objects would not be possible (without difficulty) if I couldn't express them as objects. Being able to do so has enabled me to produce a rules engine in a FRACTION of the time that the one I used to work with was produced in just 15 years ago. The legacy techniques in place at that time created problems that I was simply able to avoid completely because of the object paradigm. Adding "behaviors" to objects also enhances the processing ability.

I hear what you're saying - you sound like me about 10 years ago. I carried around the attitude that the "old methods" were just as good as the "new methods". While this may be true in some ways, in others it patently is not. The thing that separates great developers from "so-so" ones is knowing when using a new technology will benefit the design in a positive way without carrying too much baggage with it. Even today (after 35+ years of doing this) I still have a tendency to stick with "old school" methods until someone shows me how something new can vastly improve on them. So, as an experiment, I decided to write my own rules engine using the knowledge I had accumulated over 12 years in a similar business case. I was absolutely floored at how productive my new code became in just a few short months. A project that I expected to take over a year took a little under 3 months - and the new code is far more understandable than one using "legacy" techniques would have been. I used to have the attitude that if I didn't understand every layer between me and the machine, then I couldn't work with it.
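The "Punch" idea can be sketched in a few lines (the field names and the quarter-hour rounding rule here are invented for illustration, not taken from the actual system): the moment in time and its behaviours travel together, so rules-engine code stays short.

```java
// Hypothetical Punch: a moment in time plus attributes and behaviours.
class Punch {
    final int minuteOfDay;       // natural punch time, minutes since midnight
    final String department;

    Punch(int minuteOfDay, String department) {
        this.minuteOfDay = minuteOfDay;
        this.department = department;
    }

    // A behaviour on the object: round to the nearest quarter hour,
    // so every rule that needs rounded time asks the Punch itself.
    int roundedMinute() {
        return (minuteOfDay + 7) / 15 * 15;
    }
}
```

A rule operating on thousands of punches then never re-implements rounding; it just calls `roundedMinute()` on each object.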

                                  • J jschell


Lost User
#77

                                    Concur. I used to think the same as the OP. I was wrong then too.

                                    • J Johnny J

[…] I have a task for you: Can anyone give me ONE example where layered code has proved to be a major advantage?

Yortw
#78

Hi, I work on a point-of-sale system which is heavily OOP-based, and the architecture has helped many times in allowing us to customise it for different retailers/markets without changing things for other customers, plug in new 'device drivers' for POS peripherals, cut down on development time for new sale and payment types, and reduce repeated code, making bug fixes easier, etc.

Having said that, OOP can, like anything else, be abused. As our project has grown over time, the object model has become less clean and more complex, making it harder to learn in the first place. Depending on what you're doing, some things are still fairly simple, and anyone who is familiar with basic OOP principles can achieve those things without a huge learning curve, but other things are definitely tricky. If I designed the whole thing from scratch there are things I'd do differently to try and simplify, but then that's always the case.

My biggest complaint is actually the event-driven code execution you mentioned earlier (and said was fine). DOS apps I wrote 10+ years ago still run fine, and I never receive support calls about them (except to order new rolls of receipt paper for customers in Samoa; apparently it's cheaper to buy them here in NZ and ship them to the islands). I can't say that for any Windows/event-based application I've written. I suspect that you could still write good procedural code in an OOP manner too, but event-driven stuff, while simple in theory and not always totally evil, always seems to make things harder, or at least less robust, on any major real-world system in my opinion. Sadly, we all seem to be doing event-driven programming one way or another, even if we've covered it over with OOP. So let's all go back to writing DOS applications in a procedural manner, but with classes and inheritance :laugh: Of course, that's just my 2 cents' worth...

                                      • J Johnny J

[…] I have a task for you: Can anyone give me ONE example where layered code has proved to be a major advantage?

mathomp3
#79

I have to say I have stuck with 3-layer programming for a long time, and it does its job. Of late, with LINQ and other abstraction layers that "sit off to the side", I find myself using 2 layers - technically it's still 3 layers, but my data layer is starting to merge at times with my business logic layer.

Now, the advantage, you ask? Oh, let's see: this past weekend I took 3 WinForms apps, altered the front UIs, and put them on the iPhone via Mono, then did the same for Android. As another example, it was a life saver when a program I was writing needed to accept inputs from Oracle, XML, or SQL. I didn't have to do anything but build that bottom data layer out for each type. The rest of the code remained untouched.

Now, I do agree that a bunch of development teams break things down way too much. I call them stick layers: the developers want each layer to contain just a few uses, and if it's too wide - aka too many uses for a single layer - that must mean it's time for another layer. It's really OK for your business logic layer to be fat and plump with goodness; you don't have to split it into 10 different layers. It's a fine line on when and what is the best OOP option and what layering or pattern you want to use. They all have pluses and minuses; it's knowing which one works best for the goal at hand that people fail on.

                                        • L Lost User


Euhemerus
#80

Don't get me wrong, I'm in no way knocking OOP or the people that use it. My point was that OOP isn't the panacea for productive programming that advocates would have you believe. Below is an article that I sent to my lecturer when I did OOP at college. I must admit, I have to agree with the article's author.

By Richard Mansfield, September 2005, a Four J's White Paper:

"Computer programming today is in serious difficulty. It is controlled by what amounts to a quasi-religious cult - Object Oriented Programming (OOP). As a result, productivity is too often the last consideration when programmers are hired to help a business computerize its operations. There's no evidence that the OOP approach is efficient for most programming jobs. Indeed I know of no serious study comparing traditional, procedure-oriented programming with OOP. But there's plenty of anecdotal evidence that OOP retards programming efforts. Guarantee confidentiality and programmers will usually tell you that OOP often just makes their job harder. Excuses for OOP failures abound in the workplace: we are 'still working on it'; 'our databases haven't yet been reorganized to conform to OOP structural requirements'; 'our best OOP guy left a couple of years ago'; 'you can't just read about OOP in a book, you need to work with it for quite a while before you can wrap your mind around it'; and so on. If you question the wisdom of OOP, the response is some version of 'you just don't get it.' But what they're really saying is: 'You just don't believe in it.'

All too often a company hires OOP consultants to solve IT problems, but then that company's real problems begin. OOP gurus frequently insist on rewriting a company's existing software according to OOP principles. And once the OOP takeover starts, it can become difficult, sometimes impossible, to replace those OOP people. The company's programming and even its databases can become so distorted by OOP technology that switching to more efficient alternative programming approaches can be costly but necessary. Bringing in a new group of OOP experts isn't a solution. They are likely to find it hard to understand what's going on in the code written by the earlier OOP team. OOP encapsulation (hiding code) and sloppy taxonomic naming practices result in lots of incomprehensible source code. Does anyone benefit from this confusion and inefficiency? When the Java language was first designed, a choice had to be made. Should they mimic the complicated, counter-intuitive punctuation, diction, and syntax us…"
