Mostly true. Duplicate objects, properties and methods are pure death. They are great at creating shared-data problems and race conditions, and at making the code generally hard to understand and remember. When that happens the code becomes inhuman. The pain comes when coders have to enlarge the window to get a holistic view before they can simplify back down to clean, happy, proper code. This is probably a problem that arises from not having clearly defined and enforced APIs. Sometimes I think OO is good when you want to create lots of instances of things (obviously OO has about ten innovative things, but anyway). I think it falls out of database-like applications, which are dominated by accounting/management programs and programmers managing 'bunches' of things. I seem to be able to reuse code written in C or C# if the APIs are good; OO does help enforce this, though.

Layers are excellent when the standard or protocol was written with a layered mindset. I think of the OSI model (yip, blah blah, boring and obvious) and how each layer wraps its responsibility up like envelopes in envelopes. But when you start putting logic inside the layers and firing events from within them, it gets manic. I don't like it. It is probably one of those occasions you mentioned where coders got too fancy-pants and started using all the tricks in the book (new hype) without thinking properly... properly.

Maybe layers should be used for cracking/uncracking protocols, and 'process blocks' (I made that up; what's the word?) with well-defined APIs should be used to manage the logic, events, etc. Something like the sketch below is roughly what I mean. It doesn't feel exactly right, but somehow it's close. Can anyone else help me out here?
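To make the 'process block' idea concrete, here is a minimal C# sketch. All the names in it (ILayer, FramingLayer, MessageProcessor, Pipeline) are made up for illustration, not from any real framework: each layer only wraps or unwraps its own envelope, and the logic and events live in one separate object behind a small API.

// Rough sketch only; every name here is invented to illustrate the idea.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

// A layer only wraps/unwraps its envelope; no logic, no events fired from inside it.
interface ILayer
{
    byte[] Wrap(byte[] payload);
    byte[] Unwrap(byte[] packet);
}

// Example layer: prefixes a 4-byte length header (the "envelope").
class FramingLayer : ILayer
{
    public byte[] Wrap(byte[] payload) =>
        BitConverter.GetBytes(payload.Length).Concat(payload).ToArray();

    public byte[] Unwrap(byte[] packet) =>
        packet.Skip(4).Take(BitConverter.ToInt32(packet, 0)).ToArray();
}

// The "process block": logic and events live here, behind a well-defined API.
class MessageProcessor
{
    public event Action<string> MessageReceived;

    public void Handle(byte[] payload)
    {
        // All decisions happen here, not inside the layers.
        MessageReceived?.Invoke(Encoding.UTF8.GetString(payload));
    }
}

class Pipeline
{
    private readonly List<ILayer> _layers;
    private readonly MessageProcessor _processor;

    public Pipeline(List<ILayer> layers, MessageProcessor processor)
    {
        _layers = layers;
        _processor = processor;
    }

    // Outbound: envelopes inside envelopes, innermost layer first.
    public byte[] Send(string message)
    {
        var data = Encoding.UTF8.GetBytes(message);
        foreach (var layer in _layers) data = layer.Wrap(data);
        return data;
    }

    // Inbound: crack each envelope, then hand the bare payload to the process block.
    public void Receive(byte[] packet)
    {
        var data = packet;
        foreach (var layer in Enumerable.Reverse(_layers)) data = layer.Unwrap(data);
        _processor.Handle(data);
    }
}

class Program
{
    static void Main()
    {
        var processor = new MessageProcessor();
        processor.MessageReceived += msg => Console.WriteLine("got: " + msg);

        var pipeline = new Pipeline(new List<ILayer> { new FramingLayer() }, processor);
        pipeline.Receive(pipeline.Send("hello")); // prints "got: hello"
    }
}

Stacking more layers (encryption, compression, whatever) would just mean more envelopes; the processor never knows or cares how many there are.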
ABChing
@ABChing