The Software Architecture Demon
-
OK, I had to look up UML... heard of it long ago and found it useless. :laugh: I'd rather keep these diagrams in my head... much easier to update/maintain! IMHO, a good database design tells the whole story. Most abstraction can be handled in views/procs. (Talking LOB apps here.) BTW, in 20+ years I've never seen a specifications document/plan. The closest thing might be an occasional UI mockup in a screen grab, or worse, scribbled on a notepad. (Or even worse, a screen grab of an image of scribbling on a notepad! :| )
"Go forth into the source" - Neal Morse "Hope is contagious"
Yeah, it's a little different when you're not doing business apps. With IoT devices, developer tools, that sort of thing, you don't necessarily have a database to go by. Although I'd also argue that any validation those procedures are doing should be done on the front end as well, to avoid bad/spurious network traffic. If they are well designed, hopefully they add to the story. :)
Real programmers use butterflies
-
I used to be a software architect. I think that's part of why I cast such a jaundiced eye on layered service architectures, sweeping design patterns "just because," and drowning in UML "because reasons." It's true that when you're dealing with million-dollar implementations, multiple deployment points, and disparate teams, a lot of this abstraction can be useful. But how common is that in most people's development? I know it is for some of you, sure, but I think you're in the minority, or at least projects like these are in the minority. Not everyone is Plum Creek or Alcoa.

It seems like the field of software architecture has taken on a life of its own, and coupled with CPU cores to waste and infinite scaling out, it has - and I'll just say it - poisoned software development. Just because you know how to do something doesn't mean you should. Most software application architectures do not survive contact with clients plus the erosion of time. They have a shelf life of significantly less than 10 years before some major portion of them gets retooled. There are exceptions to this, but designing every solution to be that exception is a waste of time, money, and creative energy.

I'm also going to come out and say it makes things harder to maintain. When you're working with 20 different classes and interfaces where 3 would do, it just increases the learning curve. There are definitely diminishing returns when it comes to decoupling software from itself, and you hit the cost/benefit wall pretty fast. It can only take you so far; it's best not to overdo it. Every fancy little UML entity you drop into your project increases its cognitive load for other developers. Personally, I wouldn't care about that, because "cognitive load" is fun as far as I'm concerned, but most people just want to do their work and go home, not spend odd hours studying someone else's work just so they can use it. Keep It Simple, Stupid. Whatever happened to that? :sigh:
Real programmers use butterflies
Quote:
decoupling software from itself
I like that phrase! Though I wish, when it comes to people, some people were more decoupled from themselves, and others less decoupled. :laugh:
Latest Articles:
Thread Safe Quantized Temporal Frame Ring Buffer -
Yes to this. Glad I have some support here. Everyone but you and Sander are all sideeying me now. :laugh:
Real programmers use butterflies
honey the codewitch wrote:
Everyone but you and Sander are all sideeying me now.
Nah, we're not. I've thought similarly for a while now. Sticky-tape solutions are appropriate in all sorts of places. Slapping a newsletter on the fridge? Sticky-tape. Putting up a car-port? Bolts. But how does much of the software world slap a newsletter on the fridge? Measure the thickness of the door's steel, weigh the newsletter, calculate the load-bearing ability of the door skin, add reinforcement to handle larger photos in the future, drill and countersink holes, punch holes in the corner of the picture, and use the supplied Allen key to fasten the bolts that secure it.
-
honey the codewitch wrote:
not spend odd hours studying someone else's work just so they can use it
Part of the architecture is to structure the system, or the code, exactly so that people who want to do this can do it, and are not bothered with higher level topics.
honey the codewitch wrote:
I'm also going to come out and say it makes things harder to maintain
No. Over-engineered code or undocumented code is hard to maintain, whether it was created from highly sophisticated design patterns and architecture principles or "by hand" - but you cannot say that architectural design always makes code harder to maintain. Fifteen-year-old multithreaded spaghetti code resulting from a fifteen-years-of-company-time one-developer show is hard to maintain. Always. Actually, UML and SysML are tools, and like every tool, they have to be used appropriately, for a defined purpose, to make sense. I agree that using a tool just because you can is not a good strategy, but on the other hand, like any tool, they can come in very handy when well used.
Thing is, I hardly ever see a development task you could do and "not be bothered with higher level topics". In my experience you have vertical integration in the system, from frontend to database, and to implement a feature that is useful to a user you need insight into all those layers. Of course there are some local fixes, but usually you affect some other part anyway. For most other work you need insight into what the user will do, what the business wants to achieve, and the general direction of the system architecture.
-
My take on this is that people feel that if they don't "foresee and prevent" issues like code duplication, they are lesser coders. I try to explain that we should use the rule of three, so if something appears in two places, that is still not duplicated code. But what I get in code reviews and discussions is "you violate DRY", as if it were some holy grail and you were a lesser human for having two lines that look alike. The reality is that this is an emotional problem, not a technical one. People usually want to do a good job, or to be better at their work than others. I can point out ten logical reasons why those two lines should not be collapsed into one, but it still isn't going to get past someone's pride.
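A tiny, invented illustration of that rule-of-three point: two lines that merely look alike today, and the flag-driven helper a premature DRY pass would force on them.

```python
# Invented example: two call sites whose bodies *look* alike today...
def ship_order(order):
    return f"shipping {order['qty']} x {order['sku']}"

def restock(item):
    return f"restocking {item['qty']} x {item['sku']}"

# ...and the merged helper a premature DRY pass produces, which couples
# the two behind a direction flag:
def move_stock(thing, direction):
    verb = "shipping" if direction == "out" else "restocking"
    return f"{verb} {thing['qty']} x {thing['sku']}"

order = {"qty": 3, "sku": "W-100"}
assert ship_order(order) == move_stock(order, "out") == "shipping 3 x W-100"

# The moment shipping needs a carrier and restocking needs a bin location,
# the merged helper sprouts branches the two originals never needed.
```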
-
-
Ever worked in the embedded world (with multiple layers of SW from different companies), or for the DoD (where SW developer A does not know what the guy sitting next to him is coding)?
-
I agree to a point.
Rage wrote:
Part of the architecture is to structure the system, or the code, exactly so that people who want to do this can do it, and are not bothered with higher level topics.
This is how it should be. In my professional experience, a software project was sometimes designed appropriately for its size and team situation. In many cases, it simply wasn't. People would endlessly decouple things that only one person was ever going to work on, and this kind of thing happens all the time. The design would end up consuming the majority of the bandwidth well past the design phase, after the project was supposed to be nailed down. I've seen projects death-march over it, even. Basically, the project was thought to death. Is that as common as badly designed or simply undesigned software? No. Is it destructive and harmful to projects? Yes! To sound cliché, it's about moderation: you have to make the design appropriate for the project. I'm not dismissing UML entirely either, but it's one of those things that strikes me as having the perception of being far more useful than it actually is.
Real programmers use butterflies
honey the codewitch wrote:
People would endlessly decouple things that only one person was ever going to work on, and this kind of thing happens all the time.
There are also people who religiously follow a template procedure for coding, irrespective of how the project is currently organised. For example, in a project which uses OOP practices - so normally, if you have a Widget id and want the Widget object, you'd call the static method Widget.Find(id) - I've worked with people who write an IWidgetFinder interface, then a WidgetFinder class whose constructor takes a delegate function to handle errors; so to call it, you first instantiate the WidgetFinder with the error handler, then you can call WidgetFinder.Find(id)! All this repeated for dozens of trivial functions, with interfaces that are only ever used by one class, classes that are only used from one place in the project, and the same error handling that's used everywhere. And, in this project, much of the time the end result comes down to an EF call like...
DBcontext.Widgets.Where(w => w.id == id).First();
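For contrast, here's a hedged sketch (all names invented, translated to Python for illustration) of the two shapes being compared: the class-plus-injected-error-handler ceremony versus a plain function that does the same lookup.

```python
# Hypothetical sketch (names invented) of the ceremony described above.

class WidgetFinder:
    """The ceremony: a class with an injected error handler,
    wrapping what is really a one-line lookup."""
    def __init__(self, widgets, on_error):
        self.widgets = widgets
        self.on_error = on_error

    def find(self, widget_id):
        for w in self.widgets:
            if w["id"] == widget_id:
                return w
        self.on_error(f"widget {widget_id} not found")
        return None

def find_widget(widgets, widget_id):
    """The simple shape: a plain function that raises on failure."""
    for w in widgets:
        if w["id"] == widget_id:
            return w
    raise KeyError(widget_id)

widgets = [{"id": 1, "name": "sprocket"}, {"id": 2, "name": "flange"}]

# Ceremony: construct, inject, then call.
finder = WidgetFinder(widgets, on_error=print)
assert finder.find(2)["name"] == "flange"

# Direct: one call, same result.
assert find_widget(widgets, 2)["name"] == "flange"
```

Both end up at the same one-line lookup; the first just makes every caller pay an instantiation-and-injection toll to reach it.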
-
YES! This kind of thing. It's unnecessary. Code should be as simple as it can be and no simpler.
Real programmers use butterflies
-
After 5 years of academic research on the subject and 6 years of commercial R&D, primarily as a software architect, my conclusions are very similar to yours. A low learning curve and a straightforward structure that only slowly gains complexity over time is the best possible outcome. Now, after the last 2 years of working at an almost-enterprise-level company, I also notice that almost no one properly values that conclusion. Some scoff at the simplicity and take it as a personal challenge of sorts, because they've been openly passionate about more intricate solutions in the past. Some drastically undervalue the effort involved: they equate "simple" with "not a lot of work" and immediately try to displace it with, somewhat ironically, an off-the-cuff idea that's incomplete and more complex to execute.

Let me give you some advice on how to deal with these people. Instead of explaining why a low learning curve is an integral part of good design, let them present a small practical example of their own proposal, and give them an audience that judges them on how easy it is to understand and how easy it will be to maintain. In my experience, most people take the bait in a heartbeat. About half quit once they realize their mistake (they tend to recognize the page or more of "having to explain basic stuff first" as a failure and a lesson in humility), and the ones that do present their solution often feel embarrassed about the whole thing once they realize no one really understands the words they are saying. Lessons will be learned and egos will be bruised. Keep a respectful tone throughout and you'll manage just fine.
-
That's some great advice. I'm freelance now doing IoT stuff, so there's no room for GoF patterns and UML in my code - simple is king, and I love it. But I'll definitely give what you said a try should I find myself in that role again.
Real programmers use butterflies
-
I've worked in a couple of places, but maybe I'm biased because I would not fit into a "just code that, no questions asked" approach.
-
Sometimes it's not that there is any choice. :) The Expert (Short Comedy Sketch) - YouTube[^]
(hence) Never use technical words for pictures. Never tell customers, or developers, that the picture is UML - it's just a picture ;-) Guidance of the wise...
-
Quote:
let them present a small practical example of their own proposal, and give them an audience that judges them
The usual problem is the way those more senior do it to the architect, having set out a basic structure and asking for the gaps to be filled. But you are right, it's best if you can manage to explain the complexities and difficulties first, before showing the excellence and simplicity of the system design!
-
I worked on a project where the developer followed so many patterns that there were just too many layers. It makes maintenance difficult. There's a balance between having a system that's easily extendable (not a monolith) and having too many layers.
I totally agree. This is pretty much what I was ranting about.
Real programmers use butterflies
-
Yeah I hear that. Part of me finds some amusement in this. Humility is the seat of wisdom. Pride makes people do all kinds of foolish things.
Real programmers use butterflies
-
Absolutely sensible.
Real programmers use butterflies
-
kmoorevs wrote:
BTW, in 20+ years I've never seen a specifications document/plan. The closest thing might be an occasional UI mockup in a screen grab or worse, scribbled on a notepad. (or even worse, a screen grab of an image of scribbling on a notepad! :| )
The last major gig I had had a very good specification plan from which I was to code. As I got into the meat of the task, there were a few items that seemed very difficult to implement - i.e., they would need a new UI control and access to 3rd-party stuff (YIKES) - that I was able to talk the program manager out of due to the difficulty. The app was meant to be used by the USA military out in the field, so I was able to explain how simplicity would be an asset.
-
I guess I was an old-school architect. We typically wrote prototype code for how we wanted to interface with the system, then went back and designed a system architecture we could use for those things. We didn't do UML diagrams or go crazy with patterns, although we had a few singletons. We did use streams between client and server, and that made adding compression a breeze - and later, encrypting the compressed stream a breeze. A few lines of code let the server know whether the client supported compression and/or encryption, and then it simply constructed the "best" stream object. The client's abilities were known, so it handled what was streamed back. The server was ALWAYS ahead of the client, so it did not have to worry; the old clients NEVER asked for encryption or compression.

But I completely agree, to a point. I see these complex designs and I wonder. Although I am pretty impressed with most of the RESTful-style APIs - QuickBooks Online being the EXCEPTION. Imagine having a Memo field on an invoice. Imagine importing an invoice that identifies the Memo field. Imagine NOW that they put that on the CUSTOMER STATEMENT and NOT on the invoice, with no option to have it on both, or on the invoice. Worse, if you have a default Memo defined and you do an import WITHOUT SETTING the Memo, it does NOT use the default value the way it would if a user created the invoice. But you can use ANOTHER API call to set the invoice's Memo field after it is created. That kind of stuff drives me BONKERS. It's like the programmers have NEVER USED their own system!

Google Tasks: last I checked, it does not easily let you move a task from ONE task list to another. How swell... you have to copy all the fields, create a new one, and then delete the other one. God forbid they add a new field you forget to copy, or the field can only be set AFTER the task is created. LOL... I yearn for simpler days in some ways, and at the same time I pray things keep improving...
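That capability negotiation can be sketched roughly like this (Python, names invented, compression only - a real system would use TLS for the encryption leg rather than anything hand-rolled):

```python
# Rough sketch of stream negotiation: the server wraps the payload in
# the richest encoding the client advertises; old clients advertise
# nothing and transparently get the plain stream.

import zlib

def encode_payload(data: bytes, client_caps: set) -> tuple:
    """Server side: pick the best encoding the client understands."""
    if "compress" in client_caps:
        return ("compress", zlib.compress(data))
    return ("plain", data)

def decode_payload(tag: str, blob: bytes) -> bytes:
    """Client side: unwrap based on the tag the server chose."""
    if tag == "compress":
        return zlib.decompress(blob)
    return blob

msg = b"invoice #42: 3 widgets @ $5"

# Old client: advertises nothing, round-trips as plain bytes.
tag, blob = encode_payload(msg, set())
assert tag == "plain" and decode_payload(tag, blob) == msg

# New client: advertises compression, round-trips compressed.
tag, blob = encode_payload(msg, {"compress"})
assert tag == "compress" and decode_payload(tag, blob) == msg
```

Because the wrapping is decided in one place from the advertised capabilities, adding a new codec later means one new branch, not a change at every call site - which is the "few lines of code" property described above.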