My first rant in a long time...
-
I like the rant, Johnny, particularly because it led to such great responses. Your pain here was worthy of a rant. I don't think I've ever had to work on a project designed in 7 layers. My mom used to make a great 7-layer bar, and my friend makes a killer 7-layer dip, but a 7-layer app sounds like a nightmare. There is great benefit, though, for those projects in which multiple presentations or supported devices are required (e.g. a web front-end + a desktop app + a mobile app all having to work with the same underlying business logic) and for the developer who needs or wants to support multiple database engines. When one presentation or data layer can simply be swapped out for another as needed, it is so much easier to support multiple platform use cases (and I would argue easier to develop for them too). But to me there is no question: if one creates layers in a way that isn't smart in design, the resulting app and development work gain none of the advantages of layers and only magnify the disadvantages.
Thanks - I'm very pleased with the responses myself - both the pros and cons...
1f y0u c4n r34d 7h15 y0u r3411y n33d 70 g37 14!d Gotta run; I've got people to do and things to see... Don't tell my folks I'm a computer programmer - They think I'm a piano player in a cat house... Da mihi sis crustum Etruscum cum omnibus in eo!
-
Way to go, man! I feel your pain. As I read your rant I was figuring the follow-up comments would argue against it; I was pleasantly surprised to find that was not the case. I made myself learn OOP for the obvious reasons. I wrote C code for 15 years, and when we needed OOP-like stuff, we wrote it.
GamleKoder wrote:
As I read your rant I was figuring the follow-up comments would argue against it; I was pleasantly surprised to find that was not the case.
Me too... :)
-
OK, so here's the deal: I've been programming for many years now. In the beginning was sequential code execution. Then, with the introduction of Windows, we got event-driven code execution. So far so good. Then one day some jackass thought to himself: "This is too simple. Let's complicate it further and invent Object Orientation. And just as a bonus, let's get everybody to divide their code into a huge number of layers so that no one will be able to get the full view of everything." And now everybody's doing precisely that. And if that's not enough, every company has its OWN understanding of how the object-oriented structure and the layering should look and feel. And developers want everybody to know that they understand the latest technologies, so they throw in every new hype they can think of, even when it's completely unnecessary.

I hate it. It's a major misunderstanding. I know I'm going to get downvoted for this, but that can't be helped.

The last three projects I've worked on have been totally over-OO'd and layered to death. In every case, I was not part of the initial development but stepped in and developed on a running system. The current project I'm working on is by far the worst. The system architect one day proudly told me that the code was divided into SEVEN different layers. To be honest, I can't tell whether he's right or not, because I can't be bothered to verify it. The codebase keeps growing, because not even the developers who wrote it in the first place can keep track of everything, so duplicate objects, properties and methods are constantly written because no one knows they already exist. And of course, no one can clean it up either, because after some time no one knows what is dead code and what code actually serves a purpose.

And the system is slow as hell, and it grows slower each day as more users use it and more data is added to the system. I'm not the least bit surprised, with each call having to pass back and forth through seven layers, some of them connected by WCF (for scalability - hah, that's the biggest joke!), and with the fact that each time you want to know something, you fill a list of objects with perhaps 30 properties (and child objects) when you're really only interested in one or two. And that's just a couple of examples...

In all of my latest projects, the code has been layered in at least 3 layers - and for what bleedin' use? I have a task for you: Can anyone give me ONE example where layered code has proved to be a major advantage?
You are absolutely right. The OO/layering religion has been a MASSIVE failure. I've seen so many systems fail under the weight of this paradigm. It seems that developers like to prove how smart they are by "architecting" systems like this, but then they leave when the $%%&** hits the fan and it's impossible to work with the monster they've created.
-
Johnny J. wrote: [original rant quoted in full above]
Try taking a monolithic application that no one understands, ripping it apart into its constituent atoms, and rebuilding it with a layered architecture. Now THAT will teach you why layering is a good paradigm. FWIW, layering is also a good idea when you want to account for future technologies/growth/expansion, so that one day you can replace your C++ GUI front-end desktop app with a Silverlight GUI front-end web app without also needing to change the underlying "business layer" code. OOP may encourage and provide sound approaches for good software design, but it neither guarantees nor requires it.
-
Johnny J. wrote:
I have a task for you: Can anyone give me ONE example where layered code has proved to be a major advantage? I'm not talking about hypothetics like "Oh, what if we have to change to another type of database?" crap, because that is never going to happen in 99.9% of systems. Give me an example where someone has actually leaned back at a meeting and calmly said: "THANK GOD that we broke all the foobar code out in a separate foobar layer!" I challenge you - you can't.
Sure I can. EASILY. I'm building a time and attendance system. I needed code to handle the badge (magstripe) reader, so I built an object to represent the reader. Now in my code I say: 1) Create a reader at port COM6, 2) Read card. The object knows how to do everything associated with reading the badge, which includes resetting the device, making sure the COM port is open to it, setting the LED to indicate it's in read mode, taking the swipe, and returning 3 tracks of card data to the 3 variables I presented to it.

The code associated with those two actions took me a couple of weeks to write and debug. It now exists in a "layer" that I can call with two lines of code. I don't have to worry whether it will work or not; I can create the object anywhere in my application and it will work exactly the same way everywhere I call it. If a bug turns up in it somewhere, I fix it in ONE place - and all callers get the fix. Ultimate in modular design. To my application, the MSR605 IS an object, and the application doesn't need to know anything about it - it just tells it what to do and it does it. Back 35 years ago we'd have done the same thing - with a library.

I "fought" the OO paradigm until I had a chance to really work with it. Once I did a few of these things, I wondered why I had fought it so hard - the abstraction really raises the bar on what you can accomplish. Without such "layering" you would be writing all the code to handle the device manually each time. I don't think you are advocating going back there.

What I think you're having a problem with is not the layering issue - you're having a people issue! I've worked in a lot of shops where the problems were just like yours. It's not an underlying problem with the technology - it's a lack of organization on the part of the people using it! In the case of my last gig, we had 7 or 8 different functions that performed the same time conversions. If we'd had a "librarian" keeping track of the central code base, maybe that wouldn't have happened.
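For anyone who wants the flavor of it, here is a minimal sketch of that kind of wrapper in C#. The class and member names are hypothetical (the real MSR605 reset commands and track framing aren't shown), but it illustrates how all the device handling ends up behind two calls:

    // Hypothetical sketch: wrap all badge-reader handling behind one class.
    using System;
    using System.IO.Ports;

    public sealed class BadgeReader : IDisposable
    {
        private readonly SerialPort _port;

        public BadgeReader(string portName)
        {
            // Open the COM port; a real reader would also be reset here
            // and its LED set to "read mode".
            _port = new SerialPort(portName, 9600, Parity.None, 8, StopBits.One);
            _port.Open();
        }

        // Reads one swipe and returns the three tracks of card data to
        // the three variables the caller presents, as described above.
        public bool ReadCard(out string track1, out string track2, out string track3)
        {
            // Placeholder parsing; real code would decode the device's
            // actual framing rather than splitting on ';'.
            string[] t = _port.ReadLine().Split(';');
            track1 = t.Length > 0 ? t[0] : "";
            track2 = t.Length > 1 ? t[1] : "";
            track3 = t.Length > 2 ? t[2] : "";
            return t.Length > 0;
        }

        public void Dispose()
        {
            _port.Dispose();
        }
    }

    // The caller's view - "create a reader at COM6, read a card":
    //   BadgeReader reader = new BadgeReader("COM6");
    //   string t1, t2, t3;
    //   reader.ReadCard(out t1, out t2, out t3);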
-
Johnny J. wrote: [original rant quoted in full above]
An example is simple - I wrote a WPF application. Now there is a desire from our customers to have pieces of it available from the web. Since I separated the UI layer from the business logic layer, it is easy to build some Silverlight pages to show the information through a web page. If I hadn't separated the layers, I'd basically be rewriting the application. Just because your bone-headed architect went overboard doesn't mean the fundamental idea is flawed - lots of good ideas can suffer through poor implementation. And crappy programmers can screw up code in any genre... that doesn't mean we should all go back to coding in assembly. (And for the record, I spent a good chunk of my career programming in RPG, a procedural language, before switching to primarily OO languages, and I much prefer the OO languages.)
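For what it's worth, the separation looks something like this in miniature (the names here are invented for illustration, not from my actual app). The business layer has no reference to WPF or Silverlight, so either front end can call it unchanged:

    // Hypothetical sketch: the business layer knows nothing about the UI.
    public interface IOrderService
    {
        decimal GetOrderTotal(int orderId);
    }

    public class OrderService : IOrderService
    {
        public decimal GetOrderTotal(int orderId)
        {
            // Real code would call the data layer; a stub keeps this short.
            return 99.95m;
        }
    }

    // The WPF view-model and the Silverlight-facing web service both
    // program against IOrderService, so adding the web front end
    // requires no change to the business logic.
    public class OrderViewModel
    {
        private readonly IOrderService _orders;

        public OrderViewModel(IOrderService orders)
        {
            _orders = orders;
        }

        public string TotalText(int orderId)
        {
            return _orders.GetOrderTotal(orderId).ToString("C");
        }
    }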
-
Max Peck wrote:
Ultimate in modular design
So would putting the same functionality in a DLL file. ;P Everything that can be done in OOP can be done conventionally just as easily. How do you think operating systems were written before some bright spark came up with the idea of OOP?
Nobody can get the truth out of me because even I don't know what it is. I keep myself in a constant state of utter confusion. - Col. Flagg
-
Johnny J. wrote: [original rant quoted in full above]
I can provide what I think would be a pretty common example. I work for an online retailer. We have a main retail site, a mobile site, two APIs, and a number of Windows services that perform the back-end processing that keeps the business chugging. Each of these apps, or clients, taps into a common service layer that in turn uses a common data layer. Layers in this case prevent a massive amount of duplicate code from having to be written and maintained for each app.
-
Johnny J. wrote:
And now everybody's doing precisely that. And if that's not enough, every company has its OWN understanding of how the object-oriented structure and the layering should look and feel. And developers want everybody to know that they understand the latest technologies, so they throw in every new hype they can think of, even when it's completely unnecessary. I hate it. It's a major misunderstanding. I know I'm going to get downvoted for this, but that can't be helped.
Technology does not and will not solve process problems. Technology does not and will not fix bugs in the architecture/requirements. Technology will not help solve problems in design that are not specifically technological. Thus, for example, XML might solve a problem, but ONLY if the designer understands the architecture, the requirements, and both the advantages and disadvantages of XML.
Johnny J. wrote:
And the system is slow as hell, and it grows slower each day as more users use it and more data is added to the system.
Again, technology will not solve process problems.
Johnny J. wrote:
I have a task for you: Can anyone give me ONE example where layered code has proved to be a major advantage? I'm not talking about hypothetics like "Oh, what if we have to change to another type of database?"
Real personal cases:

1. An application was originally developed to target MS SQL Server because the developers interpreted the known contract language (relayed via numerous management layers) to suggest that the database choice didn't matter. Then the customer, via direct communication, insisted that Oracle must be used. The database layer, part of the design, was changed. Absolutely nothing else changed in the application.

2. An application needed to support multiple (30+) interfaces to external service providers using varying sorts of IP protocols. The application was designed with a plugin layer. Plugins work.

3. An application needed to support a 3rd-party device with 3rd-party proprietary interface code. That code would crash (system exception) at odd times. An application layer interface was added which allowed the 3rd-party code to run in another process managed by the first. Thus it could not crash the original app. (Actually, this is the idiom I now always design for, in both C# and Java, for 3rd-party native code, regardless of perceived stability.)
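To make case 1 concrete, here is a rough sketch of the shape of such a database layer (the types are invented for illustration). The rest of the application depends only on the interface, so the SQL Server implementation can be swapped for an Oracle one without touching anything else:

    // Hypothetical sketch of a swappable database layer.
    using System.Collections.Generic;

    public interface ICustomerStore
    {
        IList<string> GetCustomerNames();
    }

    public class SqlServerCustomerStore : ICustomerStore
    {
        public IList<string> GetCustomerNames()
        {
            // Real code would query SQL Server (e.g. via SqlConnection).
            return new List<string>();
        }
    }

    public class OracleCustomerStore : ICustomerStore
    {
        public IList<string> GetCustomerNames()
        {
            // Real code would query Oracle; callers never see the difference.
            return new List<string>();
        }
    }

    // Application code receives an ICustomerStore; which concrete store
    // gets constructed is decided in exactly one place, at startup.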
-
Euhemerus wrote:
Everything that can be done in OOP can be done conventionally just as easily.
Wrong. I have explicitly written code in C to mimic C++ functionality. And it most definitely was not as "easy". Just as writing assembly to do the same thing C does would not be as easy.
-
Euhemerus wrote:
So would putting the same functionality in a DLL file. Everything that can be done in OOP can be done conventionally just as easily. How do you think operating systems were written before some bright spark came up with the idea of OOP?
Hey, I agree with you conceptually; however, I've found that having "objects" provides an abstraction layer that makes it far easier to visualize your concepts. OO is something I avoided for a long time, but when I finally decided to give it a close look I realized that it could be a real weapon to cut problems down to size.

For example, in my system a "Punch" is an object that represents a moment in time when an employee punched in. The Punch can have many attributes besides the moment in time: it can have a department number, it can have a natural and a rounded time or date, etc. In my Rules Engine, operating on large groups of these objects would not be possible (without difficulty) if I couldn't express them as objects. Being able to do so has enabled me to produce a rules engine in a FRACTION of the time it took to produce the one I used to work with just 15 years ago. The legacy techniques in place at that time created problems that I was simply able to avoid completely because of the object paradigm. Adding "behaviors" to objects also enhances the processing ability.

I hear what you're saying - you sound like me about 10 years ago. I carried around the attitude that the "old methods" were just as good as the "new methods". While this may be true in some ways, in others it patently is not. The thing that separates great developers from "so-so" ones is knowing when a new technology will benefit the design in a positive way without carrying too much baggage with it. Even today (after 35+ years of doing this) I still have a tendency to stick with "old school" methods until someone shows me how using something new can vastly improve on them.

So, as an experiment, I decided to write my own rules engine using the knowledge I had accumulated over 12 years in a similar business case. I was absolutely floored at how productive my new code became in just a few short months. A project that I expected to take over a year took a little under 3 months - and the new code is far more understandable than one using "legacy" techniques would have been. I used to have this attitude that if I didn't understand every layer between me and the machine then I couldn't work...
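To give a feel for it, here's a stripped-down sketch of such a Punch object in C# (property names invented for illustration; the real engine obviously carries far more):

    // Hypothetical sketch of a domain object a rules engine can chew on.
    using System;
    using System.Collections.Generic;

    public class Punch
    {
        public DateTime ActualTime { get; set; }    // the raw swipe moment
        public DateTime RoundedTime { get; set; }   // time after rounding rules
        public string Department { get; set; }
    }

    public static class PunchRules
    {
        // Example rule: count punches per department for a pay period.
        // Rules operate on collections of Punch objects rather than
        // parallel arrays of loose fields.
        public static IDictionary<string, int> CountByDepartment(IEnumerable<Punch> punches)
        {
            var counts = new Dictionary<string, int>();
            foreach (Punch p in punches)
            {
                int n;
                counts.TryGetValue(p.Department, out n);
                counts[p.Department] = n + 1;
            }
            return counts;
        }
    }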
-
Johnny J. wrote: [original rant quoted in full above]
Hi, I work on a point-of-sale system which is heavily OOP-based, and the architecture has helped many times in allowing us to customise it for different retailers/markets without changing things for other customers. It lets us plug in new 'device drivers' for POS peripherals, cuts down on development time for new sale and payment types, and has reduced repeated code, making bug fixes easier, etc.

Having said that, OOP can, like anything else, be abused. As our project has grown over time, the object model has become less clean and more complex, making it harder to learn in the first place. Depending on what you're doing, some things are still fairly simple, and anyone who is familiar with basic OOP principles can achieve those things without a huge learning curve, but other things are definitely tricky. If I designed the whole thing from scratch, there are things I'd do differently to try and simplify, but then that's always the case.

My biggest complaint is actually the event-driven code execution you mentioned earlier (and said was fine). DOS apps I wrote 10+ years ago still run fine, and I never receive support calls (except to order new rolls of receipt paper for customers in Samoa; apparently it's cheaper to buy them here in NZ and ship them to the islands). I can't say that for any Windows/event-based application I've written. I suspect that you could still write good procedural code in an OOP manner too, but event-driven stuff, while simple in theory and not always totally evil, always seems to make things harder, or at least less robust, on any major real-world system, in my opinion. Sadly, we all seem to be doing event-driven programming one way or another, even if we've covered it over with OOP.

So let's all go back to writing DOS applications in a procedural manner, but with classes and inheritance :laugh: Of course, that's just my 2 cents worth...
-
Johnny J. wrote: [original rant quoted in full above]
I have to say I have stuck with 3-layer programming for a long time, and it does its job. Of late, with LINQ and other abstraction layers that "sit off to the side", I find myself using 2 layers - technically it's 3 layers, but my data layer is starting to merge at times with my business logic layer.

Now the advantage, you say? Oh, let's see: this past weekend I took 3 WinForms apps, altered the front UIs, and put them on the iPhone via Mono, then did the same for Android. As another example, it was a lifesaver when a program I was writing needed to accept inputs from Oracle, XML, or SQL. I didn't have to do anything but build that bottom data layer out for each type. The rest of the code remained untouched.

Now I do agree that a bunch of development teams start breaking things down way too much. I call them stick layers: the developers want each layer to contain just a few uses, and if it's too wide - aka too many uses for a single layer - that must mean it's time for another layer. It's really OK for your business logic layer to be fat and plump with goodness. You don't have to have it split into 10 different layers. It's a fine line knowing when and which OOP option is best and what layering or pattern you want to use. They all have pluses and minuses; it's knowing which one works best for the goal at hand that people fail at.
-
Max Peck wrote: [reply quoted in full above]
Don't get me wrong, I'm in no way knocking OOP or the people that use it. My point was that OOP isn't the panacea for productive programming that advocates would have you believe. Below is an article that I sent to my lecturer when I did OOP at college. I must admit, I have to agree with the article's author.

By Richard Mansfield, September 2005. A Four J's White Paper.

Computer programming today is in serious difficulty. It is controlled by what amounts to a quasi-religious cult--Object Oriented Programming (OOP). As a result, productivity is too often the last consideration when programmers are hired to help a business computerize its operations. There's no evidence that the OOP approach is efficient for most programming jobs. Indeed I know of no serious study comparing traditional, procedure-oriented programming with OOP. But there's plenty of anecdotal evidence that OOP retards programming efforts. Guarantee confidentiality and programmers will usually tell you that OOP often just makes their job harder.

Excuses for OOP failures abound in the workplace: we are "still working on it"; "our databases haven't yet been reorganized to conform to OOP structural requirements"; "our best OOP guy left a couple of years ago"; "you can't just read about OOP in a book, you need to work with it for quite a while before you can wrap your mind around it"; and so on and so on. If you question the wisdom of OOP, the response is some version of "you just don't get it." But what they're really saying is: "You just don't believe in it."

All too often a company hires OOP consultants to solve IT problems, but then that company's real problems begin. OOP gurus frequently insist on rewriting a company's existing software according to OOP principles. And once the OOP takeover starts, it can become difficult, sometimes impossible, to replace those OOP people. The company's programming and even its databases can become so distorted by OOP technology that switching to more efficient alternative programming approaches can be costly but necessary. Bringing in a new group of OOP experts isn't a solution. They are likely to find it hard to understand what's going on in the code written by the earlier OOP team. OOP encapsulation (hiding code) and sloppy taxonomic naming practices result in lots of incomprehensible source code. Does anyone benefit from this confusion and inefficiency? When the Java language was first designed, a choice had to be made. Should they mimic the complicated, counter-intuitive punctuation, diction, and syntax us
-
Euhemerus wrote: [post with the Richard Mansfield article quoted in full above]
Looks like it will be an interesting read... I'll work my way through it tonight. I follow your point about it becoming a "religion". Well, yeah, people can get religious about *anything*. Ever gotten into an argument about text editors? Ever followed one of those Linux vs. Windows arguments? Talk about religious fervor (particularly the Linux guys). I agree that people get to a point where they say "this [meaning OOP] is the ONLY WAY to make this work" when that may not be the case. The original rant, though, seemed to be "religious" in the other direction, i.e. "I don't want no LAYERS and anything that's LAYERED is EVIL". Heh...

Programming with OOP techniques is, to me, like anything else. If there's a tool there that I can use to make my code more operational, I'll check it out. So on that front you wouldn't call me a "purely" OOP developer - I'm just a developer that uses certain features of OOP where they make sense, just like I'll use a socket wrench instead of my bare hands when it makes sense. There are also religionists who think programming for the Web (i.e. web browsers) is the ONLY thing to do. I don't believe that either. I prefer a Windows client any day of the week. Connected across the net? Sure thing. Running in a browser? Naah. Make sense? ;-) -Max :D
-
Johnny J. wrote: [original rant quoted in full above]
This is very true in the industry. I have 20 years of development experience, and my OO experience started with C++, continued with Delphi, and now includes C#. IMHO, almost all the modern large projects I have worked on (using C#) suffer from ridiculous over-layering and over-abstraction, to the point where absolutely nobody besides the lead designer has any idea how it all works. This seems to be so common that I can only imagine it is being taught at universities and filtered through to those in the field. The main issue is that many lead developers seem not to use their brains to determine whether a particular idea is actually useful to the project, instead implementing, almost by rote, what they already know. It seems that many, if not the majority, of software developers these days do not know how to think. Thinking takes time, and you cannot churn out code while you are thinking. I find it strange to watch some developers hitting their keyboards anywhere up to 90% of the time they are sitting at their desks. For me this is rarely above 10%. My best work is done walking the dog, taking a shower, etc., when I cannot hit the keyboard and can let my mind wander and come up with different ideas for tackling existing problems, unconstrained by having to code them straight away.
-
Wow, what an informed, considered and articulate opinion that is. (The nasty surprise). Again, someone who evidently doesn't understand OOP ranting about it. The idiot even classes C as an OOP language.
Rob Grainger says: "Again, someone who evidently doesn't understand OOP ranting about it." I'm sure I just saw that same statement somewhere a few days ago. Funny; history repeats itself. Thanks for the comment, Rob. I hope you don't mind me quoting you on that web page. Whether you disagree with my statements or not, at least you read far enough through my diatribe to get to the statement about C#, C++, and C; and I have to commend you for taking the time to do that.
-
Johnny J. wrote: [original rant quoted in full above]
Mostly true. Duplicate objects, properties and methods are pure death. They are awesome at creating shared-data problems and race conditions, and at making the code generally hard to understand and remember. When this occurs, the code becomes inhuman. Pain occurs when coders have to enlarge the window to get a holistic view so they can simplify down to clean/happy/proper code. This is probably a problem that arises from not having clearly defined and enforced APIs.

Sometimes I think OO is good when you want to create lots of instances of things (obviously OO has about 10 innovative things, but anyway). I think it grew out of database-type applications, which are dominated by accounting/management programs and programmers managing 'bunches' of things. I seem to be able to reuse code written in C or C# if the APIs are good. OO helps to enforce this, though.

Layers are excellent when the standard/protocol was written with a layered mindset. I think of the OSI model (yip, blah blah, boring and obvious) and how each layer wraps its responsibility up like envelopes within envelopes. But when you start putting logic in the layers and firing events from within them, it gets manic. I don't like it. It is probably one of those occasions you mentioned where coders got too fancy-pants and started using all the tricks in the book (new hype) without thinking properly... properly. Maybe layers should be used for cracking/uncracking protocols, and 'process blocks' (I made that up - what's the word?) with well-defined APIs should be used to manage logic and events etc. Doesn't feel exactly right, but somehow it's close. Can anyone else help me out here?